Patents by Inventor Raffaele PEREGO
Raffaele PEREGO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240370446
Abstract: A method and system are described for improving the speed and efficiency of obtaining conversational search results. A user may speak a phrase to perform a conversational search or a series of phrases to perform a series of searches. These spoken phrases may be enriched by context and then converted into a query embedding. A similarity between the query embedding and document embeddings is used to determine the search results, including a query cutoff number of documents and a cache cutoff number of documents. A second search phrase may use the cache of documents, along with comparisons of the returned documents and the first query embedding, to determine the quality of the cache for responding to the second search query. If the results are high-quality, the search may proceed much more rapidly by applying the second query only to the cached documents rather than to the server.
Type: Application
Filed: July 11, 2024
Publication date: November 7, 2024
Inventors: Ophir Frieder, Ida Mele, Christina-Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto
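The cached conversational-search flow described in the abstract above can be sketched roughly as follows. This is a toy illustration only: the function names, the cosine-similarity measure, and the cutoff and threshold values are assumptions for the sketch, not the patented implementation.

```python
import numpy as np

def cosine(a, b):
    # Similarity between two embedding vectors (an illustrative choice).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_emb, doc_embs, query_cutoff=10, cache_cutoff=100):
    """Rank all documents by embedding similarity; return the top
    `query_cutoff` ids as results and the top `cache_cutoff` as a cache."""
    ranked = sorted(range(len(doc_embs)),
                    key=lambda i: cosine(query_emb, doc_embs[i]),
                    reverse=True)
    return ranked[:query_cutoff], ranked[:cache_cutoff]

def followup(query_emb, prev_query_emb, cache_ids, doc_embs,
             sim_threshold=0.8, query_cutoff=10):
    """If the follow-up query is close enough to the previous one, answer
    it from the cached documents only, skipping the server round-trip."""
    if cosine(query_emb, prev_query_emb) >= sim_threshold:
        ranked = sorted(cache_ids,
                        key=lambda i: cosine(query_emb, doc_embs[i]),
                        reverse=True)
        return ranked[:query_cutoff], True    # served from cache
    return search(query_emb, doc_embs, query_cutoff)[0], False
```

The `sim_threshold` test stands in for the abstract's cache-quality check; the patent family below describes richer comparisons against the first query embedding.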
-
Patent number: 12067021
Abstract: A method and system are described for improving the speed and efficiency of obtaining conversational search results. A user may speak a phrase to perform a conversational search or a series of phrases to perform a series of searches. These spoken phrases may be enriched by context and then converted into a query embedding. A similarity between the query embedding and document embeddings is used to determine the search results, including a query cutoff number of documents and a cache cutoff number of documents. A second search phrase may use the cache of documents, along with comparisons of the returned documents and the first query embedding, to determine the quality of the cache for responding to the second search query. If the results are high-quality, the search may proceed much more rapidly by applying the second query only to the cached documents rather than to the server.
Type: Grant
Filed: February 21, 2023
Date of Patent: August 20, 2024
Assignee: Georgetown University
Inventors: Ophir Frieder, Ida Mele, Christina-Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto
-
Publication number: 20230267126
Abstract: A method and system are described for improving the speed and efficiency of obtaining conversational search results. A user may speak a phrase to perform a conversational search or a series of phrases to perform a series of searches. These spoken phrases may be enriched by context and then converted into a query embedding. A similarity between the query embedding and document embeddings is used to determine the search results, including a query cutoff number of documents and a cache cutoff number of documents. A second search phrase may use the cache of documents, along with comparisons of the returned documents and the first query embedding, to determine the quality of the cache for responding to the second search query. If the results are high-quality, the search may proceed much more rapidly by applying the second query only to the cached documents rather than to the server.
Type: Application
Filed: February 21, 2023
Publication date: August 24, 2023
Inventors: Ophir Frieder, Ida Mele, Christina-Ioana Muntean, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto
-
Patent number: 11693885
Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may perform receiving a plurality of queries for data and requesting data responsive to at least one query from a data cache comprising a temporal cache, wherein the temporal cache is configured to store data based on a topic associated with the data and is configured to retrieve data based on a topic, and wherein the data cache is configured to retrieve data responsive to at least one query from the computer system.
Type: Grant
Filed: July 21, 2021
Date of Patent: July 4, 2023
Assignee: Georgetown University
Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
-
Publication number: 20210357434
Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may perform receiving a plurality of queries for data and requesting data responsive to at least one query from a data cache comprising a temporal cache, wherein the temporal cache is configured to store data based on a topic associated with the data and is configured to retrieve data based on a topic, and wherein the data cache is configured to retrieve data responsive to at least one query from the computer system.
Type: Application
Filed: July 21, 2021
Publication date: November 18, 2021
Applicant: Georgetown University
Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
-
Patent number: 11151167
Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may perform receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data, and may be further partitioned into a plurality of topic portions, each portion storing data relating to an associated topic, wherein the associated topic may be selected from among determined topics of queries received by the computer system, and the data cache may retrieve data for the queries from the computer system.
Type: Grant
Filed: October 16, 2019
Date of Patent: October 19, 2021
Assignee: Georgetown University
Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
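The partitioned cache described in this abstract can be sketched as a small data structure. This is an illustrative sketch only: the LRU eviction policy, the lookup order, and the size parameters are assumptions for the example, not the patented design.

```python
from collections import OrderedDict

class TopicCache:
    """Query-result cache with static, dynamic, and per-topic temporal
    partitions, in the spirit of the abstract above (a toy sketch)."""

    def __init__(self, static_items, dynamic_size, per_topic_size):
        self.static = dict(static_items)   # fixed entries, never evicted
        self.dynamic = OrderedDict()       # LRU shared by all queries
        self.dynamic_size = dynamic_size
        self.temporal = {}                 # topic -> per-topic LRU portion
        self.per_topic_size = per_topic_size

    def get(self, query, topic):
        """Look the query up in each partition; return None on a miss."""
        for part in (self.static, self.dynamic,
                     self.temporal.get(topic, {})):
            if query in part:
                return part[query]
        return None

    def put(self, query, topic, results):
        """Store results in the topic's temporal portion and the dynamic
        partition, evicting the oldest entry from each when full."""
        portion = self.temporal.setdefault(topic, OrderedDict())
        portion[query] = results
        if len(portion) > self.per_topic_size:
            portion.popitem(last=False)
        self.dynamic[query] = results
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)
```

Sizing each topic's portion by the topic's popularity in the query stream, as the abstract describes, would replace the fixed `per_topic_size` here with a per-topic budget.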
-
Patent number: 11106685
Abstract: The present invention concerns a novel method to efficiently score documents (texts, images, audios, videos, and any other information file) by using a machine-learned ranking function modeled by an additive ensemble of regression trees. A main contribution is a new representation of the tree ensemble based on bitvectors, where the tree traversal, aimed to detect the leaves that contribute to the final scoring of a document, is performed through efficient logical bitwise operations. In addition, the traversal is not performed one tree after another, as one would expect, but it is interleaved, feature by feature, over the whole tree ensemble. Tests conducted on publicly available LtR datasets confirm unprecedented speedups (up to 6.5×) over the best state-of-the-art methods.
Type: Grant
Filed: June 17, 2015
Date of Patent: August 31, 2021
Assignee: Istella S.p.A.
Inventors: Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini
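The bitvector idea in this abstract can be illustrated on a toy scale: each tree keeps a bitmask of still-possible exit leaves, every node whose test evaluates to false ANDs out the leaves of its left subtree, and the exit leaf is the leftmost surviving bit. The mask encoding, bit layout, and tiny tree below are assumptions for the sketch, not the patented code, which also interleaves the node tests feature by feature across the whole ensemble.

```python
# Each node is (feature, threshold, mask), where mask has 0-bits on the
# leaves that become unreachable when the test x[feature] <= threshold
# is FALSE. Bit i counted from the MSB corresponds to leaf i (leaves
# numbered left to right).

def score(trees, x):
    total = 0.0
    for nodes, leaf_values in trees:
        n = len(leaf_values)
        alive = (1 << n) - 1                 # every leaf still possible
        for feat, thr, mask in nodes:
            if x[feat] > thr:                # node test is false
                alive &= mask                # drop its left-subtree leaves
        # the exit leaf is the leftmost bit still set
        leaf = next(i for i in range(n) if alive & (1 << (n - 1 - i)))
        total += leaf_values[leaf]
    return total

# A depth-2 tree with 4 leaves: root tests x[0] <= 0.5 (false kills
# leaves 0-1), its left child tests x[1] <= 0.5 (false kills leaf 0),
# its right child tests x[1] <= 0.3 (false kills leaf 2).
tree = ([(0, 0.5, 0b0011), (1, 0.5, 0b0111), (1, 0.3, 0b1101)],
        [1.0, 2.0, 3.0, 4.0])
```

For example, `score([tree], [0.7, 0.2])` fails only the root test, leaving `alive = 0b0011`, whose leftmost set bit is leaf 2 (value 3.0) — the same leaf ordinary root-to-leaf traversal reaches.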
-
Publication number: 20200356578
Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may perform receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data, and may be further partitioned into a plurality of topic portions, each portion storing data relating to an associated topic, wherein the associated topic may be selected from among determined topics of queries received by the computer system, and the data cache may retrieve data for the queries from the computer system.
Type: Application
Filed: October 16, 2019
Publication date: November 12, 2020
Applicant: Georgetown University
Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
-
Patent number: 10503792
Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may perform receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data, and may be further partitioned into a plurality of topic portions, each portion storing data relating to an associated topic, wherein the associated topic may be selected from among determined topics of queries received by the computer system, and the data cache may retrieve data for the queries from the computer system.
Type: Grant
Filed: May 10, 2019
Date of Patent: December 10, 2019
Assignee: Georgetown University
Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
-
Publication number: 20180217991
Abstract: The present invention concerns a novel method to efficiently score documents (texts, images, audios, videos, and any other information file) by using a machine-learned ranking function modeled by an additive ensemble of regression trees. A main contribution is a new representation of the tree ensemble based on bitvectors, where the tree traversal, aimed to detect the leaves that contribute to the final scoring of a document, is performed through efficient logical bitwise operations. In addition, the traversal is not performed one tree after another, as one would expect, but it is interleaved, feature by feature, over the whole tree ensemble. Tests conducted on publicly available LtR datasets confirm unprecedented speedups (up to 6.5×) over the best state-of-the-art methods.
Type: Application
Filed: June 17, 2015
Publication date: August 2, 2018
Applicant: Istella S.p.A.
Inventors: Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini