Patents by Inventor Noriaki TATSUMI

Noriaki TATSUMI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12010169
    Abstract: In some implementations, a system may monitor session data associated with a first module and a second module of a platform. The system may determine a rate of communication between the first module and the second module based on the session data. The system may determine, using an optimization model, a co-location score associated with the first module and the second module based on the rate of communication, wherein the co-location score indicates an impact of co-location of the first module and the second module. The system may determine that the co-location score satisfies a co-location score threshold associated with an improvement to an operation of the platform. The system may perform an action associated with co-locating the first module and the second module.
    Type: Grant
    Filed: December 7, 2022
    Date of Patent: June 11, 2024
    Assignee: Capital One Services, LLC
    Inventors: Christian Bartram, Connor Cason, Noriaki Tatsumi
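    The co-location flow described in patent 12010169 above (monitor session data, derive a rate of communication between two modules, score the benefit of co-locating them, compare the score to a threshold, then act) can be illustrated with a minimal Python sketch. The session record fields, the scoring formula, and the threshold value below are hypothetical stand-ins, not the claimed optimization model.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SessionRecord:
        """One observed interaction between two modules of the platform."""
        source: str
        target: str
        timestamp: float  # seconds since the start of the session window

    def communication_rate(records, module_a, module_b, window_seconds):
        """Messages per second exchanged between module_a and module_b."""
        count = sum(
            1 for r in records
            if {r.source, r.target} == {module_a, module_b}
        )
        return count / window_seconds

    def co_location_score(rate, network_latency_ms, weight=0.5):
        """Toy stand-in for the optimization model: heavier traffic over a
        slower link means co-locating the modules saves more round trips."""
        return rate * network_latency_ms * weight

    # Hypothetical session data, latency, and threshold for illustration.
    records = [
        SessionRecord("auth", "billing", 0.1),
        SessionRecord("billing", "auth", 0.4),
        SessionRecord("auth", "billing", 0.9),
    ]
    rate = communication_rate(records, "auth", "billing", window_seconds=1.0)
    score = co_location_score(rate, network_latency_ms=20.0)

    CO_LOCATION_THRESHOLD = 25.0  # assumed value
    if score >= CO_LOCATION_THRESHOLD:
        print(f"score={score:.1f}: recommend co-locating 'auth' and 'billing'")
    else:
        print(f"score={score:.1f}: keep current placement")
    ```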
  • Publication number: 20230195541
    Abstract: A cloud computing system can be configured to generate data models. A model optimizer of the cloud computing system can provision computing resources of the cloud computing system with a data model. A dataset generator of the cloud computing system can generate a synthetic dataset for training the data model. The computing resources can train the data model using the synthetic dataset. The model optimizer can store the data model and metadata of the data model in a model storage. The cloud computing system can receive production data from a data source by a production instance of the cloud computing system using a common file system. The production data can be processed using the data model by the production instance. The computing resources, the dataset generator, and the model optimizer can be hosted by separate virtual computing instances of the cloud computing system.
    Type: Application
    Filed: February 7, 2023
    Publication date: June 22, 2023
    Applicant: Capital One Services, LLC
    Inventors: Anh TRUONG, Fardin ABDI TAGHI ABAD, Jeremy GOODSITT, Austin WALTERS, Mark WATSON, Vincent PHAM, Noriaki TATSUMI, Michael WALTERS, Kate KEY, Reza FARIVAR, Kenneth TAYLOR
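    The architecture in publication 20230195541 above (a model optimizer that provisions computing resources with a data model, a dataset generator that produces synthetic training data, and a model storage that holds the trained model and its metadata) can be sketched roughly as follows. This is a single-process toy assuming NumPy and scikit-learn; the class names and the random-draw "synthetic" data are illustrative, and the actual system hosts these components on separate virtual computing instances.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class DatasetGenerator:
        """Produces a synthetic dataset for training (here: simple random draws)."""
        def generate(self, n_rows=1000, n_features=5, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.normal(size=(n_rows, n_features))
            y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
            return X, y

    class ModelStorage:
        """In-memory stand-in for the model storage service."""
        def __init__(self):
            self._models = {}
        def store(self, name, model, metadata):
            self._models[name] = (model, metadata)
        def load(self, name):
            return self._models[name]

    class ModelOptimizer:
        """Provisions a model, trains it on synthetic data, stores model + metadata."""
        def __init__(self, generator, storage):
            self.generator = generator
            self.storage = storage
        def provision_and_train(self, name):
            X, y = self.generator.generate()
            model = LogisticRegression().fit(X, y)
            metadata = {"n_rows": len(X), "train_accuracy": model.score(X, y)}
            self.storage.store(name, model, metadata)
            return metadata

    optimizer = ModelOptimizer(DatasetGenerator(), ModelStorage())
    print(optimizer.provision_and_train("fraud-model-v1"))
    ```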
  • Publication number: 20230094964
    Abstract: In some implementations, a system may monitor session data associated with a first module and a second module of a platform. The system may determine a rate of communication between the first module and the second module based on the session data. The system may determine, using an optimization model, a co-location score associated with the first module and the second module based on the rate of communication, wherein the co-location score indicates an impact of co-location of the first module and the second module. The system may determine that the co-location score satisfies a co-location score threshold associated with an improvement to an operation of the platform. The system may perform an action associated with co-locating the first module and the second module.
    Type: Application
    Filed: December 7, 2022
    Publication date: March 30, 2023
    Inventors: Christian BARTRAM, Connor CASON, Noriaki TATSUMI
  • Patent number: 11615208
    Abstract: A cloud computing system can be configured to generate data models. A model optimizer of the cloud computing system can provision computing resources of the cloud computing system with a data model. A dataset generator of the cloud computing system can generate a synthetic dataset for training the data model. The computing resources can train the data model using the synthetic dataset. The model optimizer can store the data model and metadata of the data model in a model storage. The cloud computing system can receive production data from a data source by a production instance of the cloud computing system using a common file system. The production data can be processed using the data model by the production instance. The computing resources, the dataset generator, and the model optimizer can be hosted by separate virtual computing instances of the cloud computing system.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: March 28, 2023
    Assignee: Capital One Services, LLC
    Inventors: Anh Truong, Fardin Abdi Taghi Abad, Jeremy Goodsitt, Austin Walters, Mark Watson, Vincent Pham, Noriaki Tatsumi, Michael Walters, Kate Key, Reza Farivar, Kenneth Taylor
  • Patent number: 11546422
    Abstract: In some implementations, a system may monitor session data associated with a first module and a second module of a platform. The system may determine a rate of communication between the first module and the second module based on the session data. The system may determine, using an optimization model, a co-location score associated with the first module and the second module based on the rate of communication, wherein the co-location score indicates an impact of co-location of the first module and the second module. The system may determine that the co-location score satisfies a co-location score threshold associated with an improvement to an operation of the platform. The system may perform an action associated with co-locating the first module and the second module.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: January 3, 2023
    Assignee: Capital One Services, LLC
    Inventors: Christian Bartram, Connor Cason, Noriaki Tatsumi
  • Publication number: 20220269978
    Abstract: A data transformation system for implementing reproducible and consistent data transformations in multiple execution contexts (batch, streaming, etc.) where the transformation function/logic initially acts on historical raw data to produce derived data to train a machine learning model. When the model is trained and deployed to handle streaming event data, the same transformation is reused to transform streaming data into the appropriate derived data for the model scoring, and later for a refit of the model.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Michael Edwards, Lindsay Sturm, Christopher Larson, Noriaki Tatsumi, Keira Zhou, Sinan Gul, Mesfin Mulugeta Dinku, Bhanu Gupta, Christian Bartram, Connor Cason
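    The core idea in publication 20220269978 above, a single transformation definition that first acts on historical raw data for training and is then reused unchanged to transform streaming events for scoring, might look like the following sketch. The feature names, event fields, and scoring lambda are invented purely for illustration.

    ```python
    import math

    def transform(raw_event):
        """One transformation definition shared by batch training and streaming scoring."""
        amount = float(raw_event["amount"])
        return {
            "log_amount": math.log(amount) if amount > 0 else 0.0,
            "is_weekend": raw_event["day_of_week"] in ("Sat", "Sun"),
        }

    # Batch context: derive training features from historical raw data.
    historical = [
        {"amount": "120.50", "day_of_week": "Sat"},
        {"amount": "30.00", "day_of_week": "Tue"},
    ]
    training_features = [transform(e) for e in historical]

    # Streaming context: the same function transforms each live event for model scoring,
    # so batch and streaming paths cannot drift apart.
    def on_event(raw_event, model_score):
        features = transform(raw_event)  # identical logic, no re-implementation
        return model_score(features)

    print(training_features)
    print(on_event({"amount": "75.00", "day_of_week": "Sun"},
                   lambda f: 0.9 if f["is_weekend"] else 0.1))
    ```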
  • Publication number: 20220224753
    Abstract: In some implementations, a system may monitor session data associated with a first module and a second module of a platform. The system may determine a rate of communication between the first module and the second module based on the session data. The system may determine, using an optimization model, a co-location score associated with the first module and the second module based on the rate of communication, wherein the co-location score indicates an impact of co-location of the first module and the second module. The system may determine that the co-location score satisfies a co-location score threshold associated with an improvement to an operation of the platform. The system may perform an action associated with co-locating the first module and the second module.
    Type: Application
    Filed: January 8, 2021
    Publication date: July 14, 2022
    Inventors: Christian BARTRAM, Connor CASON, Noriaki TATSUMI
  • Publication number: 20210357817
    Abstract: Various embodiments are generally directed to techniques to reduce inputs of a machine learning model (MLM) and increase path efficiency as a result. A method for reducing an MLM includes: receiving a machine learning (ML) dataset, partitioning the ML dataset into a first dataset, a second dataset, a third dataset, and a fourth dataset, training, validating, and testing the MLM using one or more of the first dataset, the second dataset, and the third dataset, after testing the MLM, automatically ranking an importance associated with each input of the MLM using the fourth dataset, and reducing a plurality of inputs of the MLM based on the automatic ranking.
    Type: Application
    Filed: July 29, 2021
    Publication date: November 18, 2021
    Applicant: Capital One Services, LLC
    Inventors: Mark Louis WATSON, Austin Grant WALTERS, Jeremy Edward GOODSITT, Anh TRUONG, Noriaki TATSUMI, Vincent PHAM, Fardin ABDI TAGHI ABAD, Kate KEY
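    A rough sketch of the input-reduction procedure described in publication 20210357817 above: partition the dataset into four parts, use the first three for training, validation, and testing, rank input importance on the fourth, and drop low-ranked inputs. The synthetic data, the choice of permutation importance as the ranking method, and the 0.01 cutoff are assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 10))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)  # only two inputs actually matter

    # Partition into four datasets: train, validation, test, and ranking.
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.6, random_state=0)
    X_val, X_rest, y_val, y_rest = train_test_split(X_rest, y_rest, test_size=0.67, random_state=0)
    X_test, X_rank, y_test, y_rank = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    # Train, validate, and test using the first three partitions.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("validation accuracy:", model.score(X_val, y_val))
    print("test accuracy:", model.score(X_test, y_test))

    # After testing, rank input importance on the held-out fourth partition,
    # then keep only inputs whose importance clears a (hypothetical) cutoff.
    ranking = permutation_importance(model, X_rank, y_rank, n_repeats=5, random_state=0)
    keep = ranking.importances_mean > 0.01
    print("inputs kept:", np.flatnonzero(keep))
    reduced_model = RandomForestClassifier(random_state=0).fit(X_train[:, keep], y_train)
    ```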
  • Patent number: 11107004
    Abstract: Various embodiments are generally directed to techniques to reduce inputs of a machine learning model (MLM) and increase path efficiency as a result. A method for reducing an MLM includes: receiving a machine learning (ML) dataset, partitioning the ML dataset into a first dataset, a second dataset, a third dataset, and a fourth dataset, training, validating, and testing the MLM using one or more of the first dataset, the second dataset, and the third dataset, after testing the MLM, automatically ranking an importance associated with each input of the MLM using the fourth dataset, and reducing a plurality of inputs of the MLM based on the automatic ranking.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: August 31, 2021
    Assignee: Capital One Services, LLC
    Inventors: Mark Louis Watson, Austin Grant Walters, Jeremy Edward Goodsitt, Anh Truong, Noriaki Tatsumi, Vincent Pham, Fardin Abdi Taghi Abad, Kate Key
  • Publication number: 20210208956
    Abstract: A method for microservices architecture optimization is disclosed. The method includes receiving a first application request message at a gateway service, and generating a first client request message, by the gateway service, based on the first application request message. The first client request message may have a customized header comprising an identification of the gateway service. The method may include forwarding the first client request message from the gateway service to a first service of a plurality of services, and the first service updating the customized header of the first client request message to add an identification of the first service. The method includes generating, at the first service, a first client response message comprising the customized header, and generating, at the gateway service, a deployment scheme for a subset of services of the plurality of services based on the customized header of the first client response message.
    Type: Application
    Filed: October 28, 2020
    Publication date: July 8, 2021
    Applicant: Capital One Services, LLC
    Inventor: Noriaki TATSUMI
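    The header-tracing mechanism in publication 20210208956 above (the gateway seeds a customized header with its own identification, each downstream service appends its identification, and the gateway derives a deployment scheme from the paths it observes) is sketched below. The header name, the service names, and the "co-locate the most common path" heuristic are hypothetical, not the claimed method.

    ```python
    from collections import Counter

    TRACE_HEADER = "X-Service-Path"  # hypothetical header name

    class Service:
        def __init__(self, name, downstream=None):
            self.name = name
            self.downstream = downstream
        def handle(self, request):
            # Each service appends its own identification to the customized header.
            request["headers"][TRACE_HEADER] += f",{self.name}"
            if self.downstream:
                return self.downstream.handle(request)
            return {"headers": dict(request["headers"]), "body": "ok"}

    class Gateway:
        def __init__(self, entry_service):
            self.entry_service = entry_service
            self.path_counts = Counter()
        def handle(self, application_request):
            # The gateway seeds the customized header with its own identification.
            client_request = {"headers": {TRACE_HEADER: "gateway"},
                              "body": application_request}
            response = self.entry_service.handle(client_request)
            self.path_counts[response["headers"][TRACE_HEADER]] += 1
            return response
        def deployment_scheme(self):
            # Services that always appear on the same observed path are
            # candidates to be deployed together.
            most_common_path, _ = self.path_counts.most_common(1)[0]
            return most_common_path.split(",")[1:]

    gateway = Gateway(Service("accounts", downstream=Service("ledger")))
    for _ in range(3):
        gateway.handle({"action": "get_balance"})
    print(gateway.deployment_scheme())  # ['accounts', 'ledger']
    ```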
  • Publication number: 20210042656
    Abstract: Various embodiments are generally directed to techniques to reduce inputs of a machine learning model (MLM) and increase path efficiency as a result. A method for reducing an MLM includes: receiving a machine learning (ML) dataset, partitioning the ML dataset into a first dataset, a second dataset, a third dataset, and a fourth dataset, training, validating, and testing the MLM using one or more of the first dataset, the second dataset, and the third dataset, after testing the MLM, automatically ranking an importance associated with each input of the MLM using the fourth dataset, and reducing a plurality of inputs of the MLM based on the automatic ranking.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 11, 2021
    Applicant: Capital One Services, LLC
    Inventors: Mark Louis Watson, Austin Grant Walters, Jeremy Edward Goodsitt, Anh Truong, Noriaki Tatsumi, Vincent Pham, Fardin Abdi Taghi Abad, Kate Key
  • Patent number: 10855812
    Abstract: A method for microservices architecture optimization is disclosed. The method includes receiving a first application request message at a gateway service, and generating a first client request message, by the gateway service, based on the first application request message. The first client request message may have a customized header comprising an identification of the gateway service. The method may include forwarding the first client request message from the gateway service to a first service of a plurality of services, and the first service updating the customized header of the first client request message to add an identification of the first service. The method includes generating, at the first service, a first client response message comprising the customized header, and generating, at the gateway service, a deployment scheme for a subset of services of the plurality of services based on the customized header of the first client response message.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: December 1, 2020
    Assignee: Capital One Services, LLC
    Inventor: Noriaki Tatsumi
  • Patent number: 10592386
    Abstract: Automated systems and methods for optimizing a model are disclosed. For example, in an embodiment, a method for optimizing a model may comprise receiving a data input that includes a desired outcome and an input dataset identifier. The method may include retrieving an input dataset based on the identifier and receiving an input model based on the desired outcome. The method may also comprise using a data synthesis model to create a synthetic dataset based on the input dataset and a similarity metric. The method may also comprise debugging the input model using the synthetic dataset to create a debugged model. The method may also comprise selecting an actual dataset based on the input dataset and the desired outcome. In some aspects, the method may comprise optimizing the debugged model using the actual dataset and storing the optimized model.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: March 17, 2020
    Assignee: Capital One Services, LLC
    Inventors: Austin Walters, Jeremy Goodsitt, Anh Truong, Fardin Abdi Taghi Abad, Mark Watson, Vincent Pham, Kate Key, Reza Farivar, Noriaki Tatsumi
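    Patent 10592386 above describes debugging an input model on a synthetic dataset before optimizing it on the actual dataset. A minimal sketch follows, assuming scikit-learn and treating independent column resampling as a stand-in for the data synthesis model; the real system derives the synthetic dataset from a learned synthesis model and a similarity metric.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def create_synthetic_dataset(input_dataset, rng):
        """Stand-in data synthesis model: resample each column independently so the
        synthetic rows roughly match the marginal distribution of the input data."""
        X, y = input_dataset
        X_syn = np.column_stack([rng.choice(col, size=len(col)) for col in X.T])
        y_syn = rng.choice(y, size=len(y))
        return X_syn, y_syn

    def debug_model(model, synthetic_dataset):
        """Exercise the model end to end on synthetic data to surface failures
        before any actual data is touched."""
        X_syn, y_syn = synthetic_dataset
        model.fit(X_syn, y_syn)
        assert model.predict(X_syn).shape == y_syn.shape
        return model

    rng = np.random.default_rng(0)
    X_real = rng.normal(size=(500, 4))
    y_real = (X_real[:, 0] > 0).astype(int)

    model = LogisticRegression()
    model = debug_model(model, create_synthetic_dataset((X_real, y_real), rng))

    # Optimize the debugged model on the actual dataset and "store" the result.
    model.fit(X_real, y_real)
    stored = {"model": model, "train_accuracy": model.score(X_real, y_real)}
    print(stored["train_accuracy"])
    ```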
  • Publication number: 20200012584
    Abstract: Automated systems and methods for optimizing a model are disclosed. For example, in an embodiment, a method for optimizing a model may comprise receiving a data input that includes a desired outcome and an input dataset identifier. The method may include retrieving an input dataset based on the identifier and receiving an input model based on the desired outcome. The method may also comprise using a data synthesis model to create a synthetic dataset based on the input dataset and a similarity metric. The method may also comprise debugging the input model using the synthetic dataset to create a debugged model. The method may also comprise selecting an actual dataset based on the input dataset and the desired outcome. In some aspects, the method may comprise optimizing the debugged model using the actual dataset and storing the optimized model.
    Type: Application
    Filed: October 26, 2018
    Publication date: January 9, 2020
    Applicant: Capital One Services, LLC
    Inventors: Austin WALTERS, Jeremy GOODSITT, Anh TRUONG, Fardin ABDI TAGHI ABAD, Mark WATSON, Vincent PHAM, Kate KEY, Reza FARIVAR, Noriaki TATSUMI
  • Publication number: 20200012890
    Abstract: A cloud computing system can be configured to generate a synthetic data stream that tracks a reference data stream. A model optimizer of the cloud computing system can receive, from an interface of the cloud computing system, a synthetic data stream request indicating a reference data stream. A dataset generator of the cloud computing system can generate a synthetic data stream that tracks the reference data stream by repeatedly swapping data models of the reference data stream. One such repeat can include retrieving, by the dataset generator from a model storage, a current data model of the reference data stream and generating a new data model of the reference data stream. The model optimizer can store the new data model in the model storage. The dataset generator can generate a synthetic data stream using the current data model of the reference data stream.
    Type: Application
    Filed: October 4, 2018
    Publication date: January 9, 2020
    Applicant: Capital One Services, LLC
    Inventors: Mark WATSON, Anh TRUONG, Fardin ABDI TAGHI ABAD, Jeremy GOODSITT, Austin WALTERS, Michael WALTERS, Noriaki TATSUMI, Kate KEY
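    The model-swapping loop in publication 20200012890 above (generate synthetic data with the current data model of the reference stream, fit and store a new model on the latest reference data, repeat) can be illustrated with a small NumPy sketch. The Gaussian data model and the 200-event window are assumptions; the actual system swaps richer data models held in a model storage service.

    ```python
    import numpy as np

    class ModelStorage:
        """Holds the current data model of the reference stream."""
        def __init__(self, initial_model):
            self.current = initial_model

    def fit_data_model(window):
        """New data model: the mean and standard deviation of recent reference data."""
        return {"mean": float(np.mean(window)), "std": float(np.std(window))}

    def generate_synthetic(model, n, rng):
        """Generate synthetic values from the stored data model."""
        return rng.normal(model["mean"], model["std"], size=n)

    rng = np.random.default_rng(0)
    storage = ModelStorage({"mean": 0.0, "std": 1.0})  # initial model

    reference_stream = rng.normal(loc=5.0, scale=2.0, size=1000)
    for start in range(0, len(reference_stream), 200):
        window = reference_stream[start:start + 200]
        # Generate synthetic data with the current model, then swap in a model
        # refit on the newest reference data so the synthetic stream keeps tracking it.
        synthetic = generate_synthetic(storage.current, n=len(window), rng=rng)
        storage.current = fit_data_model(window)

    print(storage.current)  # tracks mean ~5 and std ~2 after a few swaps
    ```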
  • Publication number: 20200012933
    Abstract: A cloud computing system can be configured to generate data models. A model optimizer of the cloud computing system can provision computing resources of the cloud computing system with a data model. A dataset generator of the cloud computing system can generate a synthetic dataset for training the data model. The computing resources can train the data model using the synthetic dataset. The model optimizer can store the data model and metadata of the data model in a model storage. The cloud computing system can receive production data from a data source by a production instance of the cloud computing system using a common file system. The production data can be processed using the data model by the production instance. The computing resources, the dataset generator, and the model optimizer can be hosted by separate virtual computing instances of the cloud computing system.
    Type: Application
    Filed: October 4, 2018
    Publication date: January 9, 2020
    Applicant: Capital One Services, LLC
    Inventors: Anh TRUONG, Fardin ABDI TAGHI ABAD, Jeremy GOODSITT, Austin WALTERS, Mark WATSON, Vincent PHAM, Noriaki TATSUMI, Michael WALTERS, Kate KEY, Reza FARIVAR, Kenneth TAYLOR