Patents by Inventor Patrick Haffner

Patrick Haffner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9208778
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
    Type: Grant
    Filed: November 10, 2014
    Date of Patent: December 8, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
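The abstract above describes pooling frame-level features into segment-level vectors under several strategies, scoring each pooled vector with a dedicated classifier, and combining the pooling-unit/classifier pairs into an ensemble. The following is a minimal illustrative sketch of that structure, not the patented implementation; the pooling functions, the linear stand-in for a segmental classification unit, and all values are assumptions.

```python
# Illustrative sketch of a temporal-pooling ensemble: frame features are
# pooled into segment vectors by several strategies (the "PIUs"), each
# feeds its own scorer (the "SCUs"), and the scores are averaged.

def mean_pool(frames):
    """Average each feature dimension across all frames in the segment."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def max_pool(frames):
    """Take the per-dimension maximum across frames."""
    return [max(f[i] for f in frames) for i in range(len(frames[0]))]

def linear_score(weights, vector):
    """A stand-in segmental classification unit: a simple linear score."""
    return sum(w * x for w, x in zip(weights, vector))

def ensemble_score(frames, piu_scu_pairs):
    """Average the scores of all pooling-unit / classifier combinations."""
    scores = [scu(piu(frames)) for piu, scu in piu_scu_pairs]
    return sum(scores) / len(scores)

frames = [[0.1, 0.9], [0.3, 0.7], [0.5, 0.5]]   # 3 frames, 2 features each
pairs = [
    (mean_pool, lambda v: linear_score([1.0, -1.0], v)),
    (max_pool,  lambda v: linear_score([1.0, -1.0], v)),
]
score = ensemble_score(frames, pairs)
```

Varying the pooling operation across pairs, as the abstract suggests, is what diversifies the ensemble.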
  • Publication number: 20150081302
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
    Type: Application
    Filed: November 24, 2014
    Publication date: March 19, 2015
    Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky

  • Publication number: 20150058012
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
    Type: Application
    Filed: November 10, 2014
    Publication date: February 26, 2015
    Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
  • Patent number: 8935188
    Abstract: In one embodiment, the present disclosure provides a method and apparatus for classifying applications using the collective properties of network traffic. In one embodiment, a method for classifying traffic in a communication network includes receiving a traffic activity graph, the traffic activity graph comprising a plurality of nodes interconnected by a plurality of edges, where each of the nodes represents an endpoint associated with the communication network and each of the edges represents traffic between a corresponding pair of the nodes, generating an initial set of inferences as to an application class associated with each of the edges, based on at least one measured statistic related to at least one traffic flow in the communication network, and refining the initial set of inferences based on a spatial distribution of the traffic flows, to produce a final traffic activity graph.
    Type: Grant
    Filed: August 17, 2010
    Date of Patent: January 13, 2015
    Assignees: AT&T Intellectual Property I, L.P., Regents of the University of Minnesota
    Inventors: Nicholas Duffield, Patrick Haffner, Yu Jin, Subhabrata Sen, Zhi-Li Zhang
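The abstract above describes a two-stage scheme: an initial per-edge application-class guess from a measured flow statistic, refined using the spatial distribution of traffic on the graph. A minimal sketch of that idea follows; the port-based initial guess, the neighbor majority vote used as the refinement step, and the class names are illustrative assumptions, not the patented method.

```python
# Illustrative sketch: edges of a traffic activity graph get an initial
# application-class guess from a flow statistic (server port here), then
# each guess is refined by a majority vote among edges sharing a node.

from collections import Counter

def initial_inference(edges, port_stats):
    """Guess a class per edge from a measured statistic (server port here)."""
    guess = {}
    for edge in edges:
        port = port_stats[edge]
        guess[edge] = "web" if port in (80, 443) else "p2p"
    return guess

def refine(edges, guess):
    """Relabel each edge with the majority class among edges sharing a node."""
    refined = {}
    for (u, v) in edges:
        neighbors = [g for e, g in guess.items() if u in e or v in e]
        refined[(u, v)] = Counter(neighbors).most_common(1)[0][0]
    return refined

edges = [("a", "b"), ("a", "c"), ("a", "d")]
port_stats = {("a", "b"): 80, ("a", "c"): 443, ("a", "d"): 6881}
labels = refine(edges, initial_inference(edges, port_stats))
```

Here the edge to "d" is initially guessed as peer-to-peer from its port but relabeled "web" because the endpoint "a" is dominated by web traffic, which is the flavor of spatial refinement the abstract describes.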
  • Publication number: 20140372404
    Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
    Type: Application
    Filed: August 4, 2014
    Publication date: December 18, 2014
    Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
  • Publication number: 20140358537
    Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
    Type: Application
    Filed: August 14, 2014
    Publication date: December 4, 2014
    Inventors: Mazin Gilbert, Srinivas Bangalore, Patrick Haffner, Robert Bell
  • Patent number: 8897500
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
    Type: Grant
    Filed: May 5, 2011
    Date of Patent: November 25, 2014
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
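The abstract above outlines a flow: generate a text challenge unique to the request, capture a dynamic image feature while the speaker utters it, and verify against an enrolled profile. The sketch below only illustrates that control flow; the seeded challenge generator, the L1-distance match on a stand-in lip-movement vector, and the threshold are all assumptions for the example.

```python
# Illustrative sketch of the verification flow: a challenge phrase unique
# to the request, a captured dynamic image feature (stand-in vector), and
# a distance-based accept/reject decision.

import random

def generate_challenge(request_id, words=("red", "blue", "seven", "oak")):
    """Derive a challenge phrase unique to this request id."""
    rng = random.Random(request_id)          # deterministic per request
    return " ".join(rng.sample(words, 3))

def verify(enrolled_feature, captured_feature, threshold=0.5):
    """Accept when the dynamic image features are close enough (L1 distance)."""
    distance = sum(abs(a - b) for a, b in zip(enrolled_feature, captured_feature))
    return distance < threshold

challenge = generate_challenge(request_id=42)
accepted = verify([0.2, 0.8, 0.5], [0.25, 0.75, 0.5])
```

Making the challenge unique per request is what defeats replay of a previously recorded utterance, which is the point of pairing the text challenge with the dynamic image feature.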
  • Patent number: 8886533
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: November 11, 2014
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
  • Publication number: 20140330552
    Abstract: Disclosed are systems, methods, and computer-readable media for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection of words that loosely couples the source words/phrases and target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
    Type: Application
    Filed: July 21, 2014
    Publication date: November 6, 2014
    Inventors: Srinivas Bangalore, Patrick Haffner, Stephan Kanthak
  • Patent number: 8880614
    Abstract: A method and apparatus for protecting mail servers in networks such as packet networks are disclosed. For example, the present method detects that a mail server is reaching its processing limit. The method then selectively limits connections to the mail server from a plurality of source nodes based on a spam index associated with each of the source nodes.
    Type: Grant
    Filed: November 13, 2006
    Date of Patent: November 4, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Albert G. Greenberg, Patrick Haffner, Subhabrata Sen, Oliver Spatscheck
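The abstract above describes throttling that kicks in only when the mail server nears its processing limit, filtering connections by each source's spam index. A minimal sketch of that admission rule follows; the headroom fraction, the spam-index cutoff, and the source names are illustrative assumptions.

```python
# Illustrative sketch: accept every connection while the server has
# headroom; once it nears its processing limit, admit only sources
# whose spam index is low enough.

def admit(source, spam_index, load, capacity, spam_cutoff=0.7):
    """Accept a connection unless the server is stressed and the source spammy."""
    if load < 0.9 * capacity:                 # plenty of headroom: accept all
        return True
    return spam_index[source] < spam_cutoff  # stressed: filter by spam index

spam_index = {"relay1": 0.1, "bulkmailer": 0.95}
ok_normal   = admit("bulkmailer", spam_index, load=50, capacity=100)
ok_stressed = admit("bulkmailer", spam_index, load=95, capacity=100)
```

The selective part matters: legitimate relays keep their connectivity under load while the throttling cost falls on the sources most likely to be sending spam.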
  • Patent number: 8812321
    Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
    Type: Grant
    Filed: September 30, 2010
    Date of Patent: August 19, 2014
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Mazin Gilbert, Srinivas Bangalore, Patrick Haffner, Robert Bell
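The abstract above describes running a collection of domain-specific recognizers on the same speech, keeping candidates whose confidence clears a bar, and combining the survivors into the output text. The sketch below illustrates just that selection-and-combination step; the recognizers are stubbed as precomputed (hypothesis, confidence) pairs, and the cutoff and combination-by-best-confidence rule are assumptions.

```python
# Illustrative sketch: every domain-specific recognizer transcribes the
# same audio; candidates are kept only above a confidence cutoff, and the
# survivors are combined (here, by taking the most confident one).

def combine(outputs, cutoff=0.6):
    """outputs: list of (hypothesis, confidence) pairs, one per recognizer."""
    candidates = [(hyp, conf) for hyp, conf in outputs if conf >= cutoff]
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])[0]

outputs = [
    ("call customer service", 0.82),   # telephony-domain recognizer
    ("col custard servos",    0.31),   # mismatched-domain recognizer
    ("call customer servers", 0.64),   # web-domain recognizer
]
text = combine(outputs)
```

A fuller combination could merge candidates word by word rather than picking a single winner, but the confidence-gated selection is the core of what the abstract describes.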
  • Patent number: 8799279
    Abstract: A method and apparatus for using a classifier for processing a query are disclosed. For example, the method receives a query from a user, and processes the query to locate one or more documents in accordance with a search engine having a discriminative classifier, wherein the discriminative classifier is trained with a plurality of artificial query examples. The method then presents a result of the processing to the user.
    Type: Grant
    Filed: December 31, 2008
    Date of Patent: August 5, 2014
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Ilija Zeljkovic, Srinivas Bangalore, Patrick Haffner, Jay Wilpon
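The abstract above hinges on training the search engine's discriminative classifier with artificial query examples rather than logged user queries. The sketch below illustrates that idea with a deliberately bare-bones stand-in: synthetic short queries are generated from each document's own terms, and an incoming query is routed by word overlap. Everything here (term lists, query length, the overlap scorer in place of a real discriminative model) is an assumption for illustration.

```python
# Illustrative sketch: artificial query examples are generated from each
# document's salient terms; a classifier trained on them routes a real
# user query to the best-matching document class.

from itertools import combinations

def artificial_queries(doc_terms, length=2):
    """Generate short synthetic queries from a document's salient terms."""
    return [" ".join(pair) for pair in combinations(doc_terms, length)]

def train(docs):
    """Map each document id to the word set of its artificial queries."""
    model = {}
    for doc_id, terms in docs.items():
        words = set()
        for query in artificial_queries(terms):
            words.update(query.split())
        model[doc_id] = words
    return model

def classify(model, query):
    """Score each document by overlap with the query; return the best."""
    words = set(query.split())
    return max(model, key=lambda doc_id: len(words & model[doc_id]))

docs = {"billing": ["invoice", "payment", "refund"],
        "support": ["outage", "repair", "technician"]}
model = train(docs)
best = classify(model, "refund on my payment")
```

The appeal of artificial examples is that the classifier can be trained before any real user queries exist for the collection.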
  • Patent number: 8788258
    Abstract: Disclosed are systems, methods, and computer-readable media for performing translations from a source language to a target language. The method comprises receiving a source phrase, generating a target bag of words based on a global lexical selection of words that loosely couples the source words/phrases and target words/phrases, and reconstructing a target phrase or sentence by considering all permutations of words with a conditional probability greater than a threshold.
    Type: Grant
    Filed: March 15, 2007
    Date of Patent: July 22, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Srinivas Bangalore, Patrick Haffner, Stephan Kanthak
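The abstract above splits translation into two stages: global lexical selection builds a target bag of words (keeping candidates whose conditional probability clears a threshold), then the target sentence is reconstructed by scoring permutations of the bag. A toy sketch of that pipeline follows; the lexicon, the probabilities, and the tiny bigram model used to rank permutations are all illustrative assumptions.

```python
# Illustrative sketch: global lexical selection maps the source sentence
# to a bag of target words above a probability threshold; the sentence is
# then reconstructed by scoring permutations of the bag with a bigram model.

from itertools import permutations

LEXICON = {          # P(target word | source word appears in the sentence)
    "je":    [("i", 0.9)],
    "vois":  [("see", 0.8), ("saw", 0.4)],
    "marie": [("mary", 0.9)],
}
BIGRAMS = {("<s>", "i"): 0.5, ("i", "see"): 0.4, ("see", "mary"): 0.3}

def target_bag(source_words, threshold=0.5):
    """Global lexical selection: collect target words above the threshold."""
    return [t for w in source_words for t, p in LEXICON.get(w, []) if p > threshold]

def reconstruct(bag):
    """Pick the permutation of the bag with the best bigram score."""
    def score(order):
        words = ("<s>",) + order
        return sum(BIGRAMS.get(pair, 0.0) for pair in zip(words, words[1:]))
    return " ".join(max(permutations(bag), key=score))

bag = target_bag(["je", "vois", "marie"])
sentence = reconstruct(bag)
```

Enumerating all permutations is only feasible for short sentences; the "loose coupling" in the abstract is visible in that word order is decided entirely at reconstruction time, not during lexical selection.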
  • Publication number: 20130103402
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations.
    Type: Application
    Filed: October 25, 2011
    Publication date: April 25, 2013
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Sumit Chopra, Dimitrios Dimitriadis, Patrick Haffner
  • Publication number: 20130013542
    Abstract: A traffic classifier has a plurality of binary classifiers, each associated with one of a plurality of calibrators. Each calibrator is trained to translate an output score of the associated binary classifier into an estimated class probability value using a fitted logistic curve, each estimated class probability value indicating a probability that the packet flow on which the output score is based belongs to the traffic class associated with the binary classifier associated with the calibrator. The classifier training system is configured to generate training data based on network information gained using flow and packet sampling methods. In some embodiments, the classifier training system is configured to generate reduced training data sets, one for each traffic class, reducing the training data related to traffic not associated with the traffic class.
    Type: Application
    Filed: September 14, 2012
    Publication date: January 10, 2013
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Subhabrata Sen, Nicholas Duffield, Patrick Haffner, Jeffrey Erman, Yu Jin
  • Patent number: 8311956
    Abstract: A traffic classifier has a plurality of binary classifiers, each associated with one of a plurality of calibrators. Each calibrator is trained to translate an output score of the associated binary classifier into an estimated class probability value using a fitted logistic curve, each estimated class probability value indicating a probability that the packet flow on which the output score is based belongs to the traffic class associated with the binary classifier associated with the calibrator. The classifier training system is configured to generate training data based on network information gained using flow and packet sampling methods. In some embodiments, the classifier training system is configured to generate reduced training data sets, one for each traffic class, reducing the training data related to traffic not associated with the traffic class.
    Type: Grant
    Filed: August 11, 2009
    Date of Patent: November 13, 2012
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Subhabrata Sen, Nicholas Duffield, Patrick Haffner, Jeffrey Erman, Yu Jin
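The abstract above pairs each binary classifier with a calibrator that maps its raw score onto a class probability via a fitted logistic curve, in the spirit of Platt scaling. The sketch below shows that calibration step; the curve parameters stand in for ones fitted on training data, and the traffic-class names and scores are illustrative.

```python
# Illustrative sketch: each binary traffic-class classifier's raw score is
# passed through a fitted logistic curve that turns it into an estimated
# class probability; the flow is labeled with the most probable class.

import math

def logistic_calibrator(a, b):
    """Return a function mapping a raw score s to p = 1 / (1 + exp(a*s + b))."""
    def calibrate(score):
        return 1.0 / (1.0 + math.exp(a * score + b))
    return calibrate

# One calibrator per binary classifier; (a, b) would come from fitting.
calibrators = {"web": logistic_calibrator(a=-2.0, b=0.0),
               "p2p": logistic_calibrator(a=-1.5, b=0.5)}

def classify(scores):
    """Translate per-class raw scores into probabilities; pick the best."""
    probs = {cls: calibrators[cls](s) for cls, s in scores.items()}
    return max(probs, key=probs.get), probs

label, probs = classify({"web": 2.1, "p2p": -0.3})
```

Calibration is what makes scores from differently trained binary classifiers comparable: a 0.8 from the "web" classifier and a 0.8 from the "p2p" classifier then mean the same thing as probabilities.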
  • Publication number: 20120281885
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
    Type: Application
    Filed: May 5, 2011
    Publication date: November 8, 2012
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
  • Publication number: 20120253799
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating domain-specific speech recognition models for a domain of interest by combining and tuning existing speech recognition models when a speech recognizer does not have access to a speech recognition model for that domain of interest and when available domain-specific data is below a minimum desired threshold to create a new domain-specific speech recognition model. A system configured to practice the method identifies a speech recognition domain and combines a set of speech recognition models, each speech recognition model of the set of speech recognition models being from a respective speech recognition domain. The system receives an amount of data specific to the speech recognition domain, wherein the amount of data is less than a minimum threshold to create a new domain-specific model, and tunes the combined speech recognition model for the speech recognition domain based on the data.
    Type: Application
    Filed: March 28, 2011
    Publication date: October 4, 2012
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Srinivas Bangalore, Robert Bell, Diamantino Antonio Caseiro, Mazin Gilbert, Patrick Haffner
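The abstract above addresses the case where in-domain data is too scarce to train a new model: existing domain models are combined, and the combination is tuned on the small amount of domain data available. The sketch below illustrates one common realization of that idea, interpolating tiny unigram language models and grid-searching the mixture weight; the vocabularies, probabilities, and the interpolation scheme itself are assumptions, not the patent's method.

```python
# Illustrative sketch: combine existing domain models (unigram LMs here)
# by interpolation, then tune the interpolation weight on the small
# in-domain sample that is available.

def interpolate(models, weights):
    """Mix unigram models: P(w) = sum_i weights[i] * P_i(w)."""
    vocab = set().union(*models)
    return {w: sum(wt * m.get(w, 0.0) for wt, m in zip(weights, models))
            for w in vocab}

def tune_weight(model_a, model_b, in_domain_words, steps=10):
    """Grid-search the weight on model_a that best fits the in-domain data."""
    def likelihood(lam):
        mixed = interpolate([model_a, model_b], [lam, 1.0 - lam])
        prob = 1.0
        for w in in_domain_words:
            prob *= mixed.get(w, 1e-9)
        return prob
    return max((i / steps for i in range(steps + 1)), key=likelihood)

medical = {"doctor": 0.5, "appointment": 0.5}
banking = {"balance": 0.6, "transfer": 0.4}
lam = tune_weight(medical, banking, ["doctor", "appointment", "doctor"])
```

Because tuning only fits a handful of mixture weights instead of a full model, even an amount of data far below the threshold for training a new domain model is enough to adapt the combination.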
  • Publication number: 20120084086
    Abstract: Disclosed herein are systems, methods and non-transitory computer-readable media for performing speech recognition across different applications or environments without model customization or prior knowledge of the domain of the received speech. The disclosure includes recognizing received speech with a collection of domain-specific speech recognizers, determining a speech recognition confidence for each of the speech recognition outputs, selecting speech recognition candidates based on a respective speech recognition confidence for each speech recognition output, and combining selected speech recognition candidates to generate text based on the combination.
    Type: Application
    Filed: September 30, 2010
    Publication date: April 5, 2012
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Mazin Gilbert, Srinivas Bangalore, Patrick Haffner, Robert Bell
  • Publication number: 20120047096
    Abstract: In one embodiment, the present disclosure provides a method and apparatus for classifying applications using the collective properties of network traffic. In one embodiment, a method for classifying traffic in a communication network includes receiving a traffic activity graph, the traffic activity graph comprising a plurality of nodes interconnected by a plurality of edges, where each of the nodes represents an endpoint associated with the communication network and each of the edges represents traffic between a corresponding pair of the nodes, generating an initial set of inferences as to an application class associated with each of the edges, based on at least one measured statistic related to at least one traffic flow in the communication network, and refining the initial set of inferences based on a spatial distribution of the traffic flows, to produce a final traffic activity graph.
    Type: Application
    Filed: August 17, 2010
    Publication date: February 23, 2012
    Inventors: Nicholas Duffield, Patrick Haffner, Yu Jin, Subhabrata Sen, Zhi-Li Zhang