Patents by Inventor Hongqin Song

Hongqin Song has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11176556
    Abstract: Embodiments of the invention are directed to systems and methods for utilizing a cache to store historical transaction data. A predictive model may be trained to identify particular identifiers associated with historical data that is likely to be utilized on a particular date and/or within a particular time period. The historical data corresponding to these identifiers may be stored in a cache of the processing computer. Subsequently, an authorization request message may be received that includes an identifier. The processing computer may utilize the identifier to retrieve historical transaction data from the cache. The retrieved data may be utilized to perform any suitable operation. By predicting the data that will be needed to perform these operations, and preemptively storing such data in a cache, the latency associated with subsequent processing may be reduced and the performance of the system as a whole improved.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: November 16, 2021
    Assignee: Visa International Service Association
    Inventors: Hongqin Song, Yu Gu, Dan Wang, Peter Walker
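    A minimal sketch of the predictive-cache flow described in this abstract, assuming a frequency-by-weekday heuristic as a stand-in for the trained predictive model; the class and field names (PredictiveCache, history_store, usage_log) are illustrative and not from the patent.
    ```python
    # Hypothetical sketch: identifiers a model predicts will be needed soon are
    # preloaded into an in-memory cache so authorization requests can be served
    # without hitting the slower backing store.
    from collections import Counter

    class PredictiveCache:
        def __init__(self, history_store, top_n=100):
            self.history_store = history_store   # identifier -> historical records
            self.cache = {}
            self.top_n = top_n

        def predict_hot_identifiers(self, usage_log, target_date):
            # Stand-in "model": rank identifiers by how often they were used on
            # the same weekday as the target date.
            counts = Counter(
                ident for ident, day in usage_log
                if day.weekday() == target_date.weekday()
            )
            return [ident for ident, _ in counts.most_common(self.top_n)]

        def warm_cache(self, usage_log, target_date):
            # Preemptively store the predicted identifiers' history in the cache.
            self.cache = {
                ident: self.history_store[ident]
                for ident in self.predict_hot_identifiers(usage_log, target_date)
                if ident in self.history_store
            }

        def handle_authorization_request(self, identifier):
            # Serve from the cache when possible; fall back to the backing store.
            if identifier in self.cache:
                return self.cache[identifier]
            return self.history_store.get(identifier)
    ```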
  • Publication number: 20210342848
    Abstract: Provided is a system, method, and apparatus for multi-staged risk scoring. The system includes at least one processor programmed or configured to: receive a transaction request message including transaction data; generate a first risk score based at least partially on a first algorithm and a first set of data; determine if the first risk score satisfies a first threshold; in response to determining that the first risk score satisfies the first threshold, process the transaction; in response to determining that the first risk score does not satisfy the first threshold, generate a second risk score based at least partially on a second algorithm and a second set of data different than the first set of data; determine if the second risk score satisfies a second threshold; and, in response to determining that the second risk score satisfies the second threshold, process the transaction.
    Type: Application
    Filed: October 5, 2018
    Publication date: November 4, 2021
    Inventors: Hongqin Song, Yu Gu
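    A minimal sketch of the two-stage scoring decision, assuming "satisfies a threshold" means meeting or exceeding it and using stand-in lambdas for the two scoring algorithms; none of the names below come from the patent.
    ```python
    # Hypothetical two-stage risk scoring: a cheap first model scores every
    # transaction, and only transactions that miss the first threshold are
    # escalated to a second model that uses a different data set.
    def multi_stage_score(txn, model_a, model_b, threshold_a, threshold_b):
        score_a = model_a(txn)                 # first algorithm, first data set
        if score_a >= threshold_a:
            return "process", score_a
        score_b = model_b(txn)                 # second algorithm, second data set
        if score_b >= threshold_b:
            return "process", score_b
        return "decline", score_b

    # Example with placeholder scoring functions (not the patented algorithms).
    decision, score = multi_stage_score(
        {"amount": 42.0},
        model_a=lambda t: 0.4,
        model_b=lambda t: 0.9,
        threshold_a=0.5,
        threshold_b=0.7,
    )
    print(decision, score)                     # process 0.9
    ```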
  • Publication number: 20210326255
    Abstract: Cache memory requirements between normal and peak operation may vary by two orders of magnitude or more. A cache memory management system for multi-tenant computing environments monitors memory requests and uses a pattern matching classifier to generate patterns which are then delivered to a neural network. The neural network is trained to predict near-future cache memory performance based on the current memory access patterns. An optimizer allocates cache memory among the tenants to ensure that each tenant has sufficient memory to meet its required service levels while avoiding the need to provision the computing environment with worst-case scenario levels of cache memory. System resources are preserved while maintaining required performance levels.
    Type: Application
    Filed: July 16, 2018
    Publication date: October 21, 2021
    Inventors: Yu Gu, Hongqin Song
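    A simplified sketch of the allocation loop, assuming a callable predictor stands in for the trained neural network and that tenants share a fixed cache budget with a guaranteed per-tenant minimum; the names and the proportional-split rule are illustrative.
    ```python
    # Summarize each tenant's recent memory requests into features, predict
    # near-future demand, and split the cache budget proportionally while
    # keeping a minimum share per tenant.
    def allocate_cache(recent_requests, predictor, total_mb, min_mb_per_tenant=64):
        # recent_requests: tenant -> list of requested sizes in MB
        features = {
            tenant: {"count": len(sizes), "mean": sum(sizes) / max(len(sizes), 1)}
            for tenant, sizes in recent_requests.items()
        }
        demand = {tenant: max(predictor(f), 0.0) for tenant, f in features.items()}
        total_demand = sum(demand.values()) or 1.0
        spendable = total_mb - min_mb_per_tenant * len(demand)
        return {
            tenant: min_mb_per_tenant + spendable * d / total_demand
            for tenant, d in demand.items()
        }

    # Placeholder predictor: demand grows with request rate and average size.
    allocation = allocate_cache(
        {"tenant_a": [10, 12, 9], "tenant_b": [200]},
        predictor=lambda f: f["count"] * f["mean"],
        total_mb=1024,
    )
    ```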
  • Publication number: 20210264453
    Abstract: Provided is a computer-implemented method for providing real-time offers based on geolocation and merchant category. Transaction data for transactions from a plurality of merchants is received. A subset of merchants is determined based on the physical location of the merchants and the merchant category of the merchants. Real-time market activity data is determined for each of the merchants in the subset of merchants. A real-time offer is initiated based on comparing the market activity data of at least one merchant in the subset to the market activity data of a first merchant.
    Type: Application
    Filed: February 26, 2020
    Publication date: August 26, 2021
    Inventors: William Joseph Leddy, III, Hongqin Song
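    A hypothetical illustration of the selection and trigger steps, assuming merchants are dicts with location, category, recent_sales, and name fields, a 5 km radius, and a "trails the peer average" rule for initiating an offer; the patent does not specify any of these details.
    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) pairs in kilometers.
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = sin((lat2 - lat1) / 2) ** 2 + \
            cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def maybe_offer(first, merchants, radius_km=5.0):
        # Subset: same category and within the radius of the first merchant.
        peers = [
            m for m in merchants
            if m["category"] == first["category"]
            and haversine_km(m["location"], first["location"]) <= radius_km
        ]
        if not peers:
            return None
        peer_avg = sum(m["recent_sales"] for m in peers) / len(peers)
        # Initiate a real-time offer if the first merchant trails nearby peers.
        if first["recent_sales"] < peer_avg:
            return {"merchant": first["name"], "discount_pct": 10}
        return None
    ```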
  • Publication number: 20210073808
    Abstract: Provided is a method for aggregating data from real-time events (e.g., payment transactions). The method may include receiving event (e.g., transaction) data associated with a plurality of events (e.g., payment transactions). First aggregation of interest data associated with a type of aggregation of interest may be received. A first key associated with each event (e.g., transaction) may be determined based on a first portion of the event (e.g., transaction) data associated with each event (e.g., transaction) and the first aggregation of interest data. A first value based at least partially on a first plurality of the first keys associated with a first subset of the plurality of payment transactions may be communicated based on a first user request. A system and computer program product are also disclosed.
    Type: Application
    Filed: January 22, 2018
    Publication date: March 11, 2021
    Inventors: Yu Gu, Hongqin Song, Ankit Talati, Dirk Reinshagen, Zandro Luis Gonzalez
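    A minimal sketch of the keyed aggregation described in this abstract, assuming the "aggregation of interest" names the event fields that form the key and that sum and count are the values returned for a user request; class and field names are illustrative.
    ```python
    from collections import defaultdict

    class EventAggregator:
        def __init__(self, key_fields, value_field):
            self.key_fields = key_fields      # aggregation-of-interest fields
            self.value_field = value_field
            self.totals = defaultdict(float)
            self.counts = defaultdict(int)

        def ingest(self, event):
            # Determine the key for this event from the chosen fields.
            key = tuple(event[f] for f in self.key_fields)
            self.totals[key] += event[self.value_field]
            self.counts[key] += 1

        def query(self, key):
            # Value communicated in response to a user request for this key.
            return {"sum": self.totals[key], "count": self.counts[key]}

    agg = EventAggregator(key_fields=("merchant_id", "hour"), value_field="amount")
    agg.ingest({"merchant_id": "m1", "hour": 14, "amount": 19.50})
    agg.ingest({"merchant_id": "m1", "hour": 14, "amount": 5.25})
    print(agg.query(("m1", 14)))              # {'sum': 24.75, 'count': 2}
    ```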
  • Publication number: 20210056574
    Abstract: A method, system, and computer program product for predicting future transactions may obtain merchant data associated with a merchant; determine a geographic location associated with the merchant based on the merchant data; determine one or more other merchants within a first threshold distance of the geographic location; obtain transaction data associated with the one or more other merchants and the merchant; and predict, based on the transaction data, at least one of a future number of transactions for the merchant in a future time period and a future transaction amount for the merchant in the future time period.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 25, 2021
    Inventors: Hongqin Song, William Joseph Leddy, III, Yu Gu, Gary Denitus Dougan
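    A hypothetical baseline for the prediction step, assuming merchants are keys into a txn_history dict of daily counts, a caller-supplied distance function, and a simple week-over-week cohort trend as the forecasting model; the patent does not specify the model.
    ```python
    def predict_future_volume(merchant, others, txn_history, threshold_km, distance_km):
        # Cohort: the merchant plus merchants within the threshold distance.
        nearby = [m for m in others if distance_km(merchant, m) <= threshold_km]
        cohort = [m for m in (merchant, *nearby) if m in txn_history]
        # Week-over-week trend of the whole cohort's transaction counts.
        prev_week = sum(sum(txn_history[m][-14:-7]) for m in cohort) or 1
        last_week = sum(sum(txn_history[m][-7:]) for m in cohort)
        trend = last_week / prev_week
        # Scale the merchant's own recent daily volume by the cohort trend.
        own_daily = sum(txn_history[merchant][-7:]) / 7
        return own_daily * trend              # predicted transactions per day
    ```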
  • Publication number: 20210004716
    Abstract: A global AI platform and a method for generating aggregated and ordered data sets are disclosed. Aggregated and ordered data sets are data sets that have been grouped, ordered, and for which one or more data values in the data set have been aggregated. As a result of their aggregation and ordering, aggregated and ordered data sets can be retrieved from a database and used more quickly than non-ordered, non-aggregated data sets. A data processor computer can receive a plurality of data sets, and from those data sets generate aggregated and ordered data sets that can subsequently be stored in an aggregated and ordered database. A data service computer can retrieve a subset of the aggregated and ordered data sets from the database, and use the subset as an input to an AI model that can be used to generate predictions that can be delivered to clients.
    Type: Application
    Filed: July 3, 2019
    Publication date: January 7, 2021
    Inventors: Hongqin Song, Yu Gu, Shawn Johnson, Kalpana Jogi, Claudia Barcenas, Xu Wang
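    A minimal sketch of building and slicing an aggregated and ordered data set, assuming group-and-sum as the aggregation and an ordering by group key that enables binary-search range lookups; the function names are illustrative, not the platform's API.
    ```python
    from collections import defaultdict
    from bisect import bisect_left

    def build_aggregated_ordered(records, group_key, value_key):
        # Aggregate one value per group, then order by the group key.
        sums = defaultdict(float)
        for record in records:
            sums[record[group_key]] += record[value_key]
        return sorted(sums.items())

    def range_slice(aggregated, low_key, high_key):
        # Because the data set is ordered, a range lookup is a binary search
        # rather than a full scan.
        keys = [k for k, _ in aggregated]
        lo, hi = bisect_left(keys, low_key), bisect_left(keys, high_key)
        return aggregated[lo:hi]

    data = build_aggregated_ordered(
        [{"mcc": "5411", "amount": 10.0}, {"mcc": "5812", "amount": 7.5},
         {"mcc": "5411", "amount": 2.5}],
        group_key="mcc", value_key="amount",
    )
    print(range_slice(data, "5400", "5500"))  # [('5411', 12.5)]
    ```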
  • Publication number: 20200409953
    Abstract: Described herein are a system and techniques for increasing the efficiency of generating a result set for a query. In some embodiments, the techniques may involve performing computations on high-level elements, sorting them, and selecting a set of the high-level elements, then recursively repeating the process on sub-elements of the selected high-level elements. The process may be recursively repeated until a specified level of granularity is reached. This may significantly decrease the number of computations that need to be performed, increasing the speed with which queries can be performed. In some embodiments, the process may involve identifying elements which may be highly correlated with optimal computation results and adding those elements to the result set to improve its accuracy.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Hongqin Song, Yu Gu, Shizhuo Yu, Raghunandan Surapaneni, Shawn Johnson, Kalpana Jogi
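    A hypothetical sketch of the recursive narrowing described here, assuming a scoring function, a children_of accessor, and a fixed keep count per level; the correlation-based augmentation mentioned at the end of the abstract is omitted.
    ```python
    def drill_down(elements, children_of, score, keep=3, depth=2):
        # Compute on the current level, sort, and keep only the top elements.
        top = sorted(elements, key=score, reverse=True)[:keep]
        if depth == 1:
            return [(e, score(e)) for e in top]
        results = []
        for element in top:
            children = children_of(element)
            if children:
                # Recurse only into the retained elements' sub-elements, so most
                # branches of the hierarchy are never computed.
                results.extend(drill_down(children, children_of, score, keep, depth - 1))
            else:
                results.append((element, score(element)))
        return results
    ```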
  • Publication number: 20200327243
    Abstract: Methods and systems are provided to efficiently update account profiles based on a predicted likelihood of use, including by ranking the account profiles according to the likelihood of use. The disclosed system can considerably improve the processing time to update account profiles with the most recent information available, including new access requests. An authentication platform receives a plurality of new access requests, including request data and account identifiers associated with account profiles. The request data is transmitted to a prediction engine that determines a ranking of the account identifiers based on a predicted likelihood of use during a next time interval. A profile batch scheduler retrieves a first set of access requests based on the ranking. The system updates a first set of account profiles based on the ranking, and stores the updated account profiles for use by the authentication platform.
    Type: Application
    Filed: April 10, 2019
    Publication date: October 15, 2020
    Inventors: Hongqin Song, Yu Gu
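    A simplified sketch of the ranked batch update, assuming a callable prediction engine that scores each account identifier's likelihood of use and a fixed batch size; all names are illustrative.
    ```python
    def schedule_profile_updates(access_requests, predict_likelihood, batch_size):
        # access_requests: list of dicts carrying an "account_id" field.
        ranked = sorted(
            {r["account_id"] for r in access_requests},
            key=predict_likelihood,
            reverse=True,
        )
        return ranked[:batch_size]            # first set of accounts to refresh

    def update_profiles(profile_store, account_ids, refresh):
        # Refresh the highest-ranked profiles so the authentication platform
        # reads the most recent data for the accounts most likely to be used.
        for account_id in account_ids:
            profile_store[account_id] = refresh(account_id)
        return profile_store
    ```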
  • Publication number: 20200210897
    Abstract: A system, method, and computer program product for data placement may include obtaining feature data associated with a set of feature inputs of a machine learning model, determining a probability that a subset of the feature data is concurrently used as the set of feature inputs for the machine learning model, and storing the subset of the feature data on a same cache node or server of a plurality of cache nodes or servers based on the probability.
    Type: Application
    Filed: December 20, 2019
    Publication date: July 2, 2020
    Inventors: Yu Gu, Hongqin Song
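    A hypothetical sketch of co-use-aware placement, assuming pairwise co-occurrence frequency in a request log approximates the probability that two features are used together, plus a greedy assignment to cache nodes; the threshold and node-picking rule are illustrative.
    ```python
    from collections import Counter
    from itertools import combinations

    def co_use_probability(request_log):
        # request_log: list of sets of feature names used together per inference.
        pair_counts, total = Counter(), len(request_log)
        for features in request_log:
            pair_counts.update(combinations(sorted(features), 2))
        return {pair: count / total for pair, count in pair_counts.items()}

    def place_features(request_log, nodes, threshold=0.5):
        # Pin feature pairs that are frequently used together to the same node.
        placement, probs = {}, co_use_probability(request_log)
        for (a, b), p in sorted(probs.items(), key=lambda kv: -kv[1]):
            if p < threshold:
                break
            node = placement.get(a) or placement.get(b) or nodes[len(placement) % len(nodes)]
            placement[a], placement[b] = node, node
        return placement
    ```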
  • Publication number: 20200151726
    Abstract: Embodiments of the invention are directed to systems and methods for utilizing a cache to store historical transaction data. A predictive model may be trained to identify particular identifiers associated with historical data that is likely to be utilized on a particular date and/or within a particular time period. The historical data corresponding to these identifiers may be stored in a cache of the processing computer. Subsequently, an authorization request message may be received that includes an identifier. The processing computer may utilize the identifier to retrieve historical transaction data from the cache. The retrieved data may be utilized to perform any suitable operation. By predicting the data that will be needed to perform these operations, and preemptively storing such data in a cache, the latency associated with subsequent processing may be reduced and the performance of the system as a whole improved.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Inventors: Hongqin Song, Yu Gu, Dan Wang, Peter Walker
  • Publication number: 20200050676
    Abstract: A data system may dynamically prioritize and ingest data so that, regardless of the memory size of the dataset hosted by the data system, it may process and analyze the hosted dataset in constant time. The system and method may implement a first space-efficient probabilistic data structure on the dataset, wherein the dataset includes a plurality of profile data. It may then receive update data corresponding to some of the plurality of profile data and implement a second space-efficient probabilistic data structure on the dataset including the update data. The system and method may then determine a set of non-shared profile data of the second space-efficient probabilistic data structure and prioritize the set of non-shared profile data of the second space-efficient probabilistic data structure over other profile data of the dataset for caching.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 13, 2020
    Inventors: Peijie Li, Yu Gu, Hongqin Song
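    A toy illustration of the flow using a small Bloom filter as the space-efficient probabilistic data structure, assuming profiles are represented by string identifiers; real filter sizing, hashing, and the caching layer would differ.
    ```python
    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=4096, hashes=3):
            self.size, self.hashes, self.bits = size_bits, hashes, 0

        def _positions(self, item):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            return all((self.bits >> pos) & 1 for pos in self._positions(item))

    def prioritize_for_caching(original_profiles, updated_profiles):
        first = BloomFilter()
        for profile in original_profiles:
            first.add(profile)
        # Profiles in the update that the first structure has (probably) never
        # seen are treated as non-shared and cached ahead of the rest.
        fresh = [p for p in updated_profiles if not first.might_contain(p)]
        rest = [p for p in updated_profiles if first.might_contain(p)]
        return fresh + rest
    ```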
  • Publication number: 20190370800
    Abstract: Provided is a method for aggregating data from a plurality of sources. The method may include receiving a request comprising aggregation of interest data associated with a type of aggregation of interest and set identification data associated with a set of data. The set of data may be stored at a plurality of servers, and a subset of the set of data may be stored at each server. Each server may determine at least one subset value associated with the type of aggregation of interest for the respective subset of data stored thereon. The subset value may be received from each server. An aggregation value may be determined based on combining the subset values from each server. The aggregation value may be communicated to the user client. A system and computer program product are also disclosed.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 5, 2019
    Inventors: Hongqin Song, Yu Gu
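    A minimal scatter-gather sketch of the described flow, assuming sum and count aggregations that combine by addition; each server computes its subset value locally before the combination step.
    ```python
    def shard_aggregate(shard, aggregation):
        # Runs on each server against the subset of the data stored there.
        if aggregation == "sum":
            return sum(shard)
        if aggregation == "count":
            return len(shard)
        raise ValueError(f"unsupported aggregation: {aggregation}")

    def combine(shards, aggregation):
        # The coordinator receives one subset value per server and combines
        # them into the aggregation value returned to the user client.
        partials = [shard_aggregate(shard, aggregation) for shard in shards]
        return sum(partials)                  # sums and counts both add up

    total = combine([[10.0, 5.5], [7.25], [1.0, 2.0, 3.0]], "sum")   # 28.75
    ```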