Patents by Inventor Raffaele Perego

Raffaele Perego has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240370446
    Abstract: A method and system are described for improving the speed and efficiency of obtaining conversational search results. A user may speak a phrase to perform a conversational search, or a series of phrases to perform a series of searches. These spoken phrases may be enriched with context and then converted into a query embedding. A similarity between the query embedding and document embeddings is used to determine the search results, including a query cutoff number of documents and a cache cutoff number of documents. A second search phrase may use the cache of documents, along with comparisons of the returned documents and the first query embedding, to determine the quality of the cache for responding to the second search query. If the results are high quality, the search may proceed much more rapidly by applying the second query only to the cached documents rather than sending it to the server. (A minimal sketch of this cache-reuse check follows this entry.)
    Type: Application
    Filed: July 11, 2024
    Publication date: November 7, 2024
    Inventors: Ophir Frieder, Ida Mele, Christina-Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto
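
The cache-reuse decision described in the abstract above can be illustrated compactly. The sketch below is a minimal, hypothetical rendering in Python: it assumes cosine similarity over dense embeddings and illustrative query_cutoff, cache_cutoff, and reuse_threshold parameters, none of which are specified in the listing.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two dense embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class ConversationCache:
    """Per-conversation document cache (names and thresholds are illustrative).

    query_cutoff    : number of documents returned to the user for a query
    cache_cutoff    : larger number of documents kept locally for possible reuse
    reuse_threshold : how similar a follow-up query must be to the query
                      that filled the cache before the server is skipped
    """

    def __init__(self, query_cutoff=10, cache_cutoff=100, reuse_threshold=0.8):
        self.query_cutoff = query_cutoff
        self.cache_cutoff = cache_cutoff
        self.reuse_threshold = reuse_threshold
        self.cached_docs = []           # list of (doc_id, doc_embedding)
        self.cached_query_emb = None    # embedding of the query that filled the cache

    def fill(self, query_emb, ranked_docs):
        """Store the top cache_cutoff server results; return the top query_cutoff."""
        self.cached_query_emb = query_emb
        self.cached_docs = ranked_docs[:self.cache_cutoff]
        return ranked_docs[:self.query_cutoff]

    def answer_followup(self, query_emb, server_search):
        """Answer a follow-up query from the cache when it is judged good enough,
        otherwise fall back to the server and refill the cache."""
        if self.cached_query_emb is not None and \
                cosine(query_emb, self.cached_query_emb) >= self.reuse_threshold:
            # Rerank only the cached documents: no server round trip needed.
            reranked = sorted(self.cached_docs,
                              key=lambda doc: cosine(query_emb, doc[1]),
                              reverse=True)
            return reranked[:self.query_cutoff]
        # Cache judged low quality for this query: go back to the server.
        return self.fill(query_emb, server_search(query_emb))
```

Keeping cache_cutoff larger than query_cutoff gives follow-up queries in the same conversation extra documents to rerank locally before paying for another server round trip.
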
  • Patent number: 12067021
    Abstract: A method and system are described for improving the speed and efficiency of obtaining conversational search results. A user may speak a phrase to perform a conversational search, or a series of phrases to perform a series of searches. These spoken phrases may be enriched with context and then converted into a query embedding. A similarity between the query embedding and document embeddings is used to determine the search results, including a query cutoff number of documents and a cache cutoff number of documents. A second search phrase may use the cache of documents, along with comparisons of the returned documents and the first query embedding, to determine the quality of the cache for responding to the second search query. If the results are high quality, the search may proceed much more rapidly by applying the second query only to the cached documents rather than sending it to the server.
    Type: Grant
    Filed: February 21, 2023
    Date of Patent: August 20, 2024
    Assignee: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Christina-Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto
  • Publication number: 20230267126
    Abstract: A method and system are described for improving the speed and efficiency of obtaining conversational search results. A user may speak a phrase to perform a conversational search, or a series of phrases to perform a series of searches. These spoken phrases may be enriched with context and then converted into a query embedding. A similarity between the query embedding and document embeddings is used to determine the search results, including a query cutoff number of documents and a cache cutoff number of documents. A second search phrase may use the cache of documents, along with comparisons of the returned documents and the first query embedding, to determine the quality of the cache for responding to the second search query. If the results are high quality, the search may proceed much more rapidly by applying the second query only to the cached documents rather than sending it to the server.
    Type: Application
    Filed: February 21, 2023
    Publication date: August 24, 2023
    Inventors: Ophir Frieder, Ida Mele, Christina-Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto
  • Patent number: 11693885
    Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may include receiving a plurality of queries for data and requesting data responsive to at least one query from a data cache comprising a temporal cache, wherein the temporal cache is configured to store and retrieve data based on a topic associated with the data, and wherein the data cache is configured to retrieve data responsive to at least one query from the computer system. (A sketch of such a topic-keyed temporal cache follows this entry.)
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: July 4, 2023
    Assignee: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
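
The abstract above centers on a temporal cache that stores and retrieves query results by topic. Below is a minimal Python sketch under stated assumptions: a per-entry time-to-live as the eviction policy, with the TTL value and the way topics are detected being illustrative choices rather than details from the patent.

```python
import time

class TemporalTopicCache:
    """Temporal cache keyed by query topic (the TTL policy is an illustrative assumption)."""

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.by_topic = {}  # topic -> {query: (results, timestamp)}

    def put(self, topic, query, results):
        """Store results under the query's topic."""
        self.by_topic.setdefault(topic, {})[query] = (results, time.time())

    def get(self, topic, query):
        """Return cached results for (topic, query), or None if absent or expired."""
        entry = self.by_topic.get(topic, {}).get(query)
        if entry is None:
            return None
        results, stamp = entry
        if time.time() - stamp > self.ttl:
            del self.by_topic[topic][query]   # the topic's popularity window has passed
            return None
        return results


def answer(query, topic, cache, backend_search):
    """Serve a query from the topic cache when possible, else from the backend."""
    hit = cache.get(topic, query)
    if hit is not None:
        return hit
    results = backend_search(query)
    cache.put(topic, query, results)
    return results
```
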
  • Publication number: 20210357434
    Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may include receiving a plurality of queries for data and requesting data responsive to at least one query from a data cache comprising a temporal cache, wherein the temporal cache is configured to store and retrieve data based on a topic associated with the data, and wherein the data cache is configured to retrieve data responsive to at least one query from the computer system.
    Type: Application
    Filed: July 21, 2021
    Publication date: November 18, 2021
    Applicant: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
  • Patent number: 11151167
    Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may include receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data and may be further partitioned into a plurality of topic portions, each storing data relating to an associated topic, where the associated topic may be selected from among the determined topics of queries received by the computer system. The data cache may retrieve data for the queries from the computer system. (A sketch of such a partitioned cache follows this entry.)
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: October 19, 2021
    Assignee: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
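
Patent 11151167's claims split the cache into static, dynamic, and temporal partitions, with the temporal partition further divided into per-topic portions. The Python sketch below shows one hypothetical way a lookup could cascade across those partitions; the LRU policy for the dynamic partition, the capacity, and the trending_topics signal are assumptions made for illustration.

```python
from collections import OrderedDict

class PartitionedCache:
    """Query-result cache with static, dynamic, and temporal partitions (a sketch)."""

    def __init__(self, static_entries, dynamic_capacity=1000):
        # Static partition: precomputed results for historically popular queries.
        self.static = dict(static_entries)
        # Dynamic partition: recently seen queries, evicted in LRU order.
        self.dynamic = OrderedDict()
        self.dynamic_capacity = dynamic_capacity
        # Temporal partition: one portion per currently popular topic.
        self.temporal = {}  # topic -> {query: results}

    def get(self, query, topic):
        """Look the query up in the static, dynamic, then temporal partitions."""
        if query in self.static:
            return self.static[query]
        if query in self.dynamic:
            self.dynamic.move_to_end(query)        # refresh recency
            return self.dynamic[query]
        return self.temporal.get(topic, {}).get(query)

    def put(self, query, topic, results, trending_topics):
        """Admit new results to the temporal partition if the topic is trending,
        otherwise to the dynamic (LRU) partition."""
        if topic in trending_topics:
            self.temporal.setdefault(topic, {})[query] = results
        else:
            self.dynamic[query] = results
            if len(self.dynamic) > self.dynamic_capacity:
                self.dynamic.popitem(last=False)   # evict the least recently used query
```
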
  • Patent number: 11106685
    Abstract: The present invention concerns a novel method to efficiently score documents (texts, images, audio, video, and any other information files) by using a machine-learned ranking function modeled by an additive ensemble of regression trees. A main contribution is a new representation of the tree ensemble based on bitvectors, where the tree traversal, aimed at detecting the leaves that contribute to the final scoring of a document, is performed through efficient logical bitwise operations. In addition, the traversal is not performed one tree after another, as one would expect, but is interleaved, feature by feature, over the whole tree ensemble. Tests conducted on publicly available LtR datasets confirm unprecedented speedups (up to 6.5×) over the best state-of-the-art methods. (A simplified sketch of this interleaved, bitvector-based traversal follows this entry.)
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: August 31, 2021
    Assignee: Istella S.p.A.
    Inventors: Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini
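
The abstract of patent 11106685 describes scoring a document against a tree ensemble by keeping, per tree, a bitvector of still-reachable leaves and traversing the ensemble feature by feature with bitwise operations. The sketch below is a simplified Python illustration of that idea, not the patented implementation: the ensemble layout, mask convention, and leaf numbering are assumptions made for the example.

```python
def score(doc_features, ensemble):
    """Score one document with a bitvector-based, feature-interleaved traversal.

    doc_features : dict mapping feature_id -> feature value
    ensemble     : dict with (all field names are illustrative assumptions)
        'num_trees'   : number of trees in the ensemble
        'num_leaves'  : leaves per tree (assumed uniform for simplicity)
        'leaf_values' : leaf_values[tree][leaf] -> that leaf's score contribution
        'by_feature'  : feature_id -> list of (threshold, tree_id, mask), thresholds
                        sorted ascending; 'mask' clears the leaves that become
                        unreachable when the node test x <= threshold fails.
    Bit i of a tree's bitvector stands for its i-th leaf, left to right.
    """
    full = (1 << ensemble['num_leaves']) - 1
    leaf_bits = [full] * ensemble['num_trees']      # every leaf reachable at start

    # Interleaved traversal: visit all nodes testing the same feature,
    # across the whole ensemble, before moving on to the next feature.
    for feat, nodes in ensemble['by_feature'].items():
        x = doc_features.get(feat, 0.0)
        for threshold, tree, mask in nodes:
            if x <= threshold:
                break                               # later thresholds are larger: tests are true
            leaf_bits[tree] &= mask                 # false node: prune now-unreachable leaves

    # The exit leaf of each tree is its leftmost (lowest-index) surviving leaf.
    total = 0.0
    for tree, bits in enumerate(leaf_bits):
        exit_leaf = (bits & -bits).bit_length() - 1
        total += ensemble['leaf_values'][tree][exit_leaf]
    return total
```

Because the thresholds for each feature are kept in ascending order, the inner loop stops at the first node test that evaluates to true; every false node costs a single AND, which is what makes this style of traversal cheap.
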
  • Publication number: 20200356578
    Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may include receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data and may be further partitioned into a plurality of topic portions, each storing data relating to an associated topic, where the associated topic may be selected from among the determined topics of queries received by the computer system. The data cache may retrieve data for the queries from the computer system.
    Type: Application
    Filed: October 16, 2019
    Publication date: November 12, 2020
    Applicant: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
  • Patent number: 10503792
    Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may include receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data and may be further partitioned into a plurality of topic portions, each storing data relating to an associated topic, where the associated topic may be selected from among the determined topics of queries received by the computer system. The data cache may retrieve data for the queries from the computer system.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: December 10, 2019
    Assignee: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
  • Publication number: 20180217991
    Abstract: The present invention concerns a novel method to efficiently score documents (texts, images, audio, video, and any other information files) by using a machine-learned ranking function modeled by an additive ensemble of regression trees. A main contribution is a new representation of the tree ensemble based on bitvectors, where the tree traversal, aimed at detecting the leaves that contribute to the final scoring of a document, is performed through efficient logical bitwise operations. In addition, the traversal is not performed one tree after another, as one would expect, but is interleaved, feature by feature, over the whole tree ensemble. Tests conducted on publicly available LtR datasets confirm unprecedented speedups (up to 6.5×) over the best state-of-the-art methods.
    Type: Application
    Filed: June 17, 2015
    Publication date: August 2, 2018
    Applicant: Istella S.p.A.
    Inventors: Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini