Patents by Inventor Youssef Billawala

Youssef Billawala has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10810242
    Abstract: Systems, methods, and apparatuses are disclosed for adaptively generating a summary of web-based content based on an attribute of a mobile communication device having transmitted a request for the web-based content. By adaptively generating the summary based on an attribute of the mobile communication device such as an amount of visual space available or a number of characters permitted in the interface, a display of the web-based content may be controlled on the mobile communication device in a way that was not previously available. This enables control of displaying web-based content that has been adaptively generated to be displayed on limited display screens based on a learned attribute of the mobile communication device requesting the web-based content.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: October 20, 2020
    Assignee: Oath Inc.
    Inventors: Youssef Billawala, Yashar Mehdad, Dragomir Radev, Amanda Stent, Kapil Thadani
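The adaptive-summary idea above can be illustrated with a toy sketch: pick as many leading sentences as fit a character budget learned from the requesting device. Everything here is an assumption for illustration (the `char_limit` field name, the greedy sentence-selection policy); it is not the patented method.

```python
def summarize_for_device(sentences, device_profile):
    """Greedily select leading sentences until the device's character budget is spent.

    device_profile is a hypothetical dict of learned device attributes,
    e.g. {"char_limit": 120} for an interface that permits 120 characters.
    """
    limit = device_profile["char_limit"]
    summary, used = [], 0
    for s in sentences:
        cost = len(s) + (1 if summary else 0)  # +1 for the joining space
        if used + cost > limit:
            break
        summary.append(s)
        used += cost
    return " ".join(summary)
```

A real system would score sentences by salience rather than take them in order, but the device-attribute-driven budget is the part the abstract emphasizes.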
  • Publication number: 20190236086
    Abstract: Systems, methods, and apparatuses are disclosed for adaptively generating a summary of web-based content based on an attribute of a mobile communication device having transmitted a request for the web-based content. By adaptively generating the summary based on an attribute of the mobile communication device such as an amount of visual space available or a number of characters permitted in the interface, a display of the web-based content may be controlled on the mobile communication device in a way that was not previously available. This enables control of displaying web-based content that has been adaptively generated to be displayed on limited display screens based on a learned attribute of the mobile communication device requesting the web-based content.
    Type: Application
    Filed: April 8, 2019
    Publication date: August 1, 2019
    Inventors: Youssef Billawala, Yashar Mehdad, Dragomir Radev, Amanda Stent, Kapil Thadani
  • Patent number: 10255356
    Abstract: Systems, methods, and apparatuses are disclosed for adaptively generating a summary of web-based content based on an attribute of a mobile communication device having transmitted a request for the web-based content. By adaptively generating the summary based on an attribute of the mobile communication device such as an amount of visual space available or a number of characters permitted in the interface, a display of the web-based content may be controlled on the mobile communication device in a way that was not previously available. This enables control of displaying web-based content that has been adaptively generated to be displayed on limited display screens based on a learned attribute of the mobile communication device requesting the web-based content.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: April 9, 2019
    Assignee: Oath Inc.
    Inventors: Youssef Billawala, Yashar Mehdad, Dragomir Radev, Amanda Stent, Kapil Thadani
  • Publication number: 20180349490
    Abstract: Systems, methods, and apparatuses are disclosed for adaptively generating a summary of web-based content based on an attribute of a mobile communication device having transmitted a request for the web-based content. By adaptively generating the summary based on an attribute of the mobile communication device such as an amount of visual space available or a number of characters permitted in the interface, a display of the web-based content may be controlled on the mobile communication device in a way that was not previously available. This enables control of displaying web-based content that has been adaptively generated to be displayed on limited display screens based on a learned attribute of the mobile communication device requesting the web-based content.
    Type: Application
    Filed: August 6, 2018
    Publication date: December 6, 2018
    Inventors: Youssef Billawala, Yashar Mehdad, Dragomir Radev, Amanda Stent, Kapil Thadani
  • Patent number: 9436759
    Abstract: The performance of traditional speech recognition systems (as applied to information extraction or translation) decreases significantly with larger domain size, scarce training data, and noisy environmental conditions. This invention mitigates these problems by introducing a novel predictive feature extraction method which combines linguistic and statistical information to represent information embedded in a noisy source language. The predictive features are combined with text classifiers to map the noisy text to one of a set of semantically or functionally similar groups. The features used by the classifier can be syntactic, semantic, and statistical.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: September 6, 2016
    Assignee: Nant Holdings IP, LLC
    Inventors: Jun Huang, Yookyung Kim, Youssef Billawala, Farzad Ehsani, Demitrios Master
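The feature-extraction-plus-classifier pipeline described in the abstract can be sketched minimally: mix statistical features (token counts) with a shallow linguistic feature, then score the feature vector against per-class profiles. The specific features, the suffix heuristic, and the linear scoring are illustrative assumptions, not the patented method.

```python
from collections import Counter

def extract_features(text):
    # Statistical features: raw token counts; shallow linguistic feature:
    # a suffix-based verb indicator (an illustrative stand-in for real
    # syntactic/semantic features).
    tokens = text.lower().split()
    feats = Counter(tokens)
    for t in tokens:
        if t.endswith("ing"):
            feats["POS:verb-ing"] += 1
    feats["LEN"] = len(tokens)
    return feats

def classify(text, class_profiles):
    # class_profiles: {label: Counter of feature weights}, a hypothetical
    # output of training; pick the class whose profile best matches.
    feats = extract_features(text)
    def score(profile):
        return sum(feats[f] * profile.get(f, 0) for f in feats)
    return max(class_profiles, key=lambda c: score(class_profiles[c]))
```

The point of the design is that noisy recognizer output still tends to carry enough overlapping features to land in the right semantic group even when individual words are misrecognized.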
  • Publication number: 20150134336
    Abstract: The performance of traditional speech recognition systems (as applied to information extraction or translation) decreases significantly with larger domain size, scarce training data, and noisy environmental conditions. This invention mitigates these problems by introducing a novel predictive feature extraction method which combines linguistic and statistical information to represent information embedded in a noisy source language. The predictive features are combined with text classifiers to map the noisy text to one of a set of semantically or functionally similar groups. The features used by the classifier can be syntactic, semantic, and statistical.
    Type: Application
    Filed: November 12, 2013
    Publication date: May 14, 2015
    Inventors: Jun Huang, Yookyung Kim, Youssef Billawala, Farzad Ehsani, Demitrios Master
  • Patent number: 8583416
    Abstract: The performance of traditional speech recognition systems (as applied to information extraction or translation) decreases significantly with larger domain size, scarce training data, and noisy environmental conditions. This invention mitigates these problems by introducing a novel predictive feature extraction method which combines linguistic and statistical information to represent information embedded in a noisy source language. The predictive features are combined with text classifiers to map the noisy text to one of a set of semantically or functionally similar groups. The features used by the classifier can be syntactic, semantic, and statistical.
    Type: Grant
    Filed: December 27, 2007
    Date of Patent: November 12, 2013
    Assignee: Fluential, LLC
    Inventors: Jun Huang, Yookyung Kim, Youssef Billawala, Farzad Ehsani, Demitrios Master
  • Patent number: 8504567
    Abstract: An information retrieval system and computer-based method provide for constructing a title for a search result summary of a document through title synthesis, wherein the title is suitable for use in assessing the relevance of the summarized document to a query. Meaningful keywords or key phrases (title components) about the document are obtained. The title components are classified into pre-established title component classes. When a query is input to which the document is relevant, a title for the document is constructed by arranging title components selected from title component classes to maximize a title utility function. The title utility function may be a query-dependent grade. In addition to the query, the title utility function may also account for constraints under which the title is to be presented to a user.
    Type: Grant
    Filed: August 23, 2010
    Date of Patent: August 6, 2013
    Assignee: Yahoo! Inc.
    Inventors: Youssef Billawala, Sudarshan Lamkhede
  • Publication number: 20120047131
    Abstract: An information retrieval system and computer-based method provide for constructing a title for a search result summary of a document through title synthesis, wherein the title is suitable for use in assessing the relevance of the summarized document to a query. In one embodiment, the system obtains meaningful keywords or key phrases (title components) about the document, and classifies each title component into one or more of a plurality of pre-established title component classes. The title components may be automatically obtained for the document from available sources either before or at the time the document is made available for indexing by the system. When a query is input to the system to which the document is relevant, the system constructs a title for the document by arranging title components selected from title component classes to maximize a title utility function. The title utility function may be a query-dependent grade.
    Type: Application
    Filed: August 23, 2010
    Publication date: February 23, 2012
    Inventors: Youssef Billawala, Sudarshan Lamkhede
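The title-synthesis step described above can be sketched as selecting one component per class to maximize a simple query-dependent utility. The utility function here (query-term overlap minus a small length penalty) and the per-class greedy selection are illustrative assumptions standing in for the patent's query-dependent grade and constraints.

```python
def synthesize_title(component_classes, query, max_len=60):
    """Pick the best title component from each class under a toy utility.

    component_classes: {class_name: [candidate component strings]}, a
    hypothetical output of the classification step described in the abstract.
    """
    q_terms = set(query.lower().split())
    def utility(comp):
        overlap = len(q_terms & set(comp.lower().split()))
        return overlap - 0.01 * len(comp)  # length penalty: display constraint proxy
    chosen = []
    for cls, candidates in component_classes.items():
        best = max(candidates, key=utility)
        # Keep only components that help the query and fit the length budget
        # (separator lengths ignored for simplicity).
        if utility(best) > 0 and sum(len(c) for c in chosen) + len(best) <= max_len:
            chosen.append(best)
    return " - ".join(chosen)
```

A production utility function would be learned from relevance judgments rather than hand-coded, but the arrange-to-maximize structure is the same.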
  • Patent number: 7958109
    Abstract: Techniques for providing useful information to a user in response to a search query are provided. Based on the search query, one or more potential intents of the user are identified and a plurality of matching resources are identified. For at least one matching resource, a particular abstract template is selected based on the one or more potential intents. Each abstract template (a) corresponds to a different intent than any other intent to which any other abstract template of the plurality of abstract templates corresponds, and (b) dictates a different manner of displaying information about a matching resource than any other manner of displaying dictated by any other abstract template of the plurality of abstract templates. A search results page is generated and sent to the user. The search results page includes an abstract for the at least one matching resource. The abstract is displayed based on the particular abstract template.
    Type: Grant
    Filed: February 6, 2009
    Date of Patent: June 7, 2011
    Assignee: Yahoo! Inc.
    Inventors: Yi-An Lin, Youssef Billawala, Kevin Haas, Jan Pfeifer
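The intent-to-template mapping described in the abstract can be sketched as a lookup plus a rendering step. The intent labels, the `.com` heuristic, and the template strings are all hypothetical illustrations, not the patented classifier or templates.

```python
# Hypothetical abstract templates, one per inferred intent.
TEMPLATES = {
    "navigational": "{title} | {url}",
    "informational": "{title}: {snippet}",
}

def infer_intent(query):
    # Toy heuristic: a query containing a domain name is treated as
    # navigational; everything else as informational (illustrative only).
    return "navigational" if any(t.endswith(".com") for t in query.split()) else "informational"

def render_abstract(query, resource):
    """Render a resource's abstract using the template for the query's intent.

    resource is a dict with at least the fields the chosen template needs,
    e.g. {"title": ..., "url": ..., "snippet": ...}.
    """
    intent = infer_intent(query)
    return TEMPLATES[intent].format(**resource)
```

The design point is that the same matching resource renders differently depending on why the user seems to be searching, rather than one fixed snippet format.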
  • Publication number: 20090171662
    Abstract: The performance of traditional speech recognition systems (as applied to information extraction or translation) decreases significantly with larger domain size, scarce training data, and noisy environmental conditions. This invention mitigates these problems by introducing a novel predictive feature extraction method which combines linguistic and statistical information to represent information embedded in a noisy source language. The predictive features are combined with text classifiers to map the noisy text to one of a set of semantically or functionally similar groups. The features used by the classifier can be syntactic, semantic, and statistical.
    Type: Application
    Filed: December 27, 2007
    Publication date: July 2, 2009
    Applicant: SEHDA, INC.
    Inventors: Jun Huang, Yookyung Kim, Youssef Billawala, Farzad Ehsani, Demitrios Master
  • Publication number: 20080154577
    Abstract: Traditional statistical machine translation systems learn all information from a sentence-aligned parallel text and are known to have problems translating between structurally diverse languages. To overcome this limitation, the present invention introduces two-level training, which incorporates syntactic chunking into statistical translation. A chunk-alignment step is inserted between the sentence-level and word-level training, which allows differing training for these two sources of information in order to learn lexical properties from the aligned chunks and learn structural properties from chunk sequences. The system consists of a linguistic processing step, two-level training, and a decoding step which combines chunk translations of multiple sources and multiple language models.
    Type: Application
    Filed: December 26, 2006
    Publication date: June 26, 2008
    Inventors: Yookyung Kim, Jun Huang, Youssef Billawala
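The chunk-alignment step described above can be sketched with a toy matcher: align each source chunk to the target chunk sharing the most lexicon-supported word pairs. The dictionary-lookup scoring is an illustrative assumption standing in for the learned lexical statistics of the actual two-level training.

```python
def align_chunks(src_chunks, tgt_chunks, lexicon):
    """Align each source chunk to its best-matching target chunk.

    lexicon: {src_word: set of plausible target words}, a hypothetical
    bilingual dictionary standing in for learned lexical co-occurrence.
    Returns a list of (src_index, tgt_index) pairs.
    """
    alignments = []
    for i, sc in enumerate(src_chunks):
        def overlap(tc):
            # Count word pairs the lexicon supports between the two chunks.
            return sum(1 for w in sc.split() for t in tc.split()
                       if t in lexicon.get(w, set()))
        j = max(range(len(tgt_chunks)), key=lambda k: overlap(tgt_chunks[k]))
        alignments.append((i, j))
    return alignments
```

Once chunks are aligned, lexical translation models can be trained within chunk pairs while a separate model learns how chunk sequences reorder across languages, which is the split the abstract describes.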
  • Publication number: 20080133245
    Abstract: The present invention discloses modular speech-to-speech translation systems and methods that provide adaptable platforms to enable verbal communication between speakers of different languages within the context of specific domains. The components of the preferred embodiments of the present invention include: (1) speech recognition; (2) machine translation; (3) an N-best merging module; (4) verification; and (5) text-to-speech. The speech recognition modules are structured to provide N-best selections and multi-stream processing, where multiple speech recognition engines may be active at any one time. The N-best lists from the one or more speech recognition engines may be handled either separately or collectively to improve both recognition and translation results. A merge module is responsible for integrating the N-best outputs of the translation engines along with confidence/translation scores to create a ranked list of recognition-translation pairs.
    Type: Application
    Filed: December 4, 2006
    Publication date: June 5, 2008
    Inventors: Guillaume Proulx, Youssef Billawala, Elaine Drom, Farzad Ehsani, Yookyung Kim, Demitrios Master
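The N-best merging module described above can be sketched as pooling hypotheses from multiple engines and re-ranking by combined score. Summing raw scores assumes they are comparable across engines, which is a simplifying assumption; a real merge module would calibrate or weight per-engine confidences.

```python
def merge_nbest(engine_outputs):
    """Merge N-best lists from multiple engines into one ranked list.

    engine_outputs: list of N-best lists, each a list of
    (hypothesis, score) pairs. Identical hypotheses proposed by
    several engines accumulate score, so agreement is rewarded.
    """
    combined = {}
    for nbest in engine_outputs:
        for hyp, score in nbest:
            combined[hyp] = combined.get(hyp, 0.0) + score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
```

Handling the lists "collectively" in this way lets a hypothesis that no single engine ranked first still win when multiple engines independently support it.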