Patents by Inventor Travis Austin Wright

Travis Austin Wright has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240411609
    Abstract: Systems, methods, apparatuses, and computer program products are disclosed for auto-scaling of a deployment based on resource utilization data for a workload executing on the deployment. A resource availability is determined based on the resource utilization data and a current resource allocation of the deployment. A severity of resource throttling of the workload may be determined based on the resource utilization data, and a scaling factor is determined based at least on the severity of resource throttling. In response to at least the resource availability satisfying a predetermined condition with a predetermined threshold, the deployment is scaled based on the scaling factor. (See the auto-scaling sketch after this listing.)
    Type: Application
    Filed: September 22, 2023
    Publication date: December 12, 2024
    Inventors: Karla Jean Saur, Joyce Yu Cahoon, Yiwen Zhu, Anna Pavlenko, Jesus Camacho Rodriguez, Brian Paul Kroth, Travis Austin Wright, Michael Edward Nelson, David Liao, Andrew Sherman Carter
  • Patent number: 11030204
    Abstract: Performing a distributed query across a data pool includes receiving a database query at a master node or a compute pool within a database system. Based on receiving the database query, a data pool within the database system is identified. The data pool comprises a plurality of data nodes. Each data node includes a relational engine and relational storage. Each node in the data pool caches a different partition of data from an external data source in its relational storage. The database query is processed across the plurality of data nodes. Query processing includes requesting that each data node perform a filter operation against its cached partition of the external data source stored in its relational storage and return any data from the partition that matches the filter operation. (See the data pool query sketch after this listing.)
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: June 8, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stanislav A. Oks, Travis Austin Wright, Jasraj Uday Dange, Jarupat Jisarojito, Weiyun Huang, Stuart Padley, Umachandar Jayachandran, Sahaj Saini, William Maxwell Lerch
  • Publication number: 20190364109
    Abstract: Performing a distributed query across a storage pool includes receiving a database query at a master node or a compute pool within a database system. Based on receiving the database query, a storage pool within the database system is identified. The storage pool comprises a plurality of storage nodes. Each storage node includes a relational engine, a big data engine, and big data storage. The storage pool stores at least a portion of a data set using the plurality of storage nodes by storing a different partition of the data set within the big data storage at each storage node. The database query is processed across the plurality of storage nodes. Query processing includes requesting that each storage node perform a query operation against the partition of the data set stored in its big data storage and return any data from the partition that is produced by the query operation.
    Type: Application
    Filed: October 24, 2018
    Publication date: November 28, 2019
    Inventors: Stanislav A. Oks, Travis Austin Wright, Jasraj Uday Dange, Jarupat Jisarojito, Weiyun Huang, Stuart Padley, Umachandar Jayachandran, Sahaj Saini, William Maxwell Lerch
  • Publication number: 20190362004
    Abstract: Automatically provisioning resources within a database system includes receiving, at a master service of the database system, a declarative statement for performing a database operation. Based on receiving the declarative statement, a control plane is instructed that additional hardware resources are needed for performing the database operation. Based on instructing the control plane, a provisioning fabric provisions computer system hardware resources for one or more of (i) a storage pool that includes at least one storage node that comprises a first database engine, a big data engine, and big data storage; (ii) a data pool that includes at least one data node that comprises a second database engine and database storage; or (iii) a compute pool that includes a compute node that comprises a compute engine that processes queries at one or both of the storage pool or the data pool. (See the provisioning sketch after this listing.)
    Type: Application
    Filed: October 24, 2018
    Publication date: November 28, 2019
    Inventors: Stanislav A. Oks, Travis Austin Wright, Michael Edward Nelson, Pranjal Gupta, Scott Anthony Konersmann
  • Publication number: 20190361999
    Abstract: Processing a database query over a combination of relational and big data includes receiving the database query at a master node or a compute node within a database system. Based on receiving the database query, a storage pool within the database system is identified. The storage pool comprises a plurality of storage nodes, each storage node including a relational engine, a big data engine, and big data storage. The database query is processed over a combination of relational data stored within the database system and big data stored at the big data storage of at least one of the plurality of storage nodes. The relational data could be stored at the master node and/or at one or more data nodes. An artificial intelligence model and/or machine learning model might also be trained and/or scored using a combination of relational data and big data.
    Type: Application
    Filed: October 24, 2018
    Publication date: November 28, 2019
    Inventors: Stanislav A. Oks, Travis Austin Wright, Jasraj Uday Dange, Jarupat Jisarojito, Weiyun Huang, Stuart Padley, Umachandar Jayachandran
  • Publication number: 20190362011
    Abstract: Performing a distributed query across a data pool includes receiving a database query at a master node or a compute pool within a database system. Based on receiving the database query, a data pool within the database system is identified. The data pool comprises a plurality of data nodes. Each data node includes a relational engine and relational storage. Each node in the data pool caches a different partition of data from an external data source in its relational storage. The database query is processed across the plurality of data nodes. Query processing includes requesting that each data node perform a filter operation against its cached partition of the external data source stored in its relational storage and return any data from the partition that matches the filter operation.
    Type: Application
    Filed: October 24, 2018
    Publication date: November 28, 2019
    Inventors: Stanislav A. Oks, Travis Austin Wright, Jasraj Uday Dange, Jarupat Jisarojito, Weiyun Huang, Stuart Padley, Umachandar Jayachandran, Sahaj Saini, William Maxwell Lerch
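
Illustrative code sketches

The sketches below are editorial illustrations of ideas described in the abstracts above. They are minimal Python approximations written under stated assumptions; they are not the claimed implementations, and all class names, thresholds, and formulas in them are illustrative.

The first sketch follows publication 20240411609: compute resource availability from utilization data and the current allocation, derive a scaling factor from how severely the workload is being throttled, and scale the deployment only when availability violates a predetermined threshold. The threshold value and the throttling-severity buckets are assumptions.

# Minimal sketch of the auto-scaling idea in publication 20240411609.
# All names, thresholds, and the severity-to-factor mapping are assumptions.

from dataclasses import dataclass

@dataclass
class UtilizationSample:
    cpu_used: float           # cores actually consumed by the workload
    cpu_throttled_pct: float  # fraction of time the workload was throttled (0.0-1.0)

def scale_decision(sample: UtilizationSample,
                   allocated_cores: float,
                   availability_threshold: float = 0.10) -> float:
    """Return a new core allocation for the deployment.

    Availability is the headroom between the current allocation and actual use.
    When headroom falls below the threshold, scale up by a factor derived from
    how severely the workload is throttled (illustrative mapping).
    """
    availability = (allocated_cores - sample.cpu_used) / allocated_cores

    # Map throttling severity to a scaling factor (illustrative buckets).
    if sample.cpu_throttled_pct > 0.50:
        scaling_factor = 2.0   # severe throttling: double the allocation
    elif sample.cpu_throttled_pct > 0.20:
        scaling_factor = 1.5
    else:
        scaling_factor = 1.25

    # Scale only when availability violates the predetermined condition.
    if availability < availability_threshold:
        return allocated_cores * scaling_factor
    return allocated_cores

# Example: 3.8 of 4 cores used, throttled 60% of the time -> new allocation of 8 cores.
print(scale_decision(UtilizationSample(cpu_used=3.8, cpu_throttled_pct=0.6),
                     allocated_cores=4.0))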
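The second sketch follows patent 11030204 and the related publication 20190362011: a master node fans a filter out to a pool of data nodes, each node evaluates the filter only against the partition of the external data source cached in its own relational storage, and the matches are unioned. The storage pool entries above (20190364109, 20190361999) describe the same fan-out pattern over big data partitions. The hash partitioning scheme and class names here are assumptions.

# Minimal sketch of the distributed data-pool filter query in patent 11030204.
# Classes and the hash-partitioning scheme are illustrative assumptions.

from typing import Callable, Dict, List

Row = Dict[str, object]

class DataNode:
    """A data node whose relational storage caches one partition of an
    external data source."""
    def __init__(self) -> None:
        self.cached_partition: List[Row] = []

    def run_filter(self, predicate: Callable[[Row], bool]) -> List[Row]:
        # Each node evaluates the filter only against its own cached partition.
        return [row for row in self.cached_partition if predicate(row)]

class MasterNode:
    def __init__(self, data_nodes: List[DataNode]) -> None:
        self.data_nodes = data_nodes

    def cache_external_source(self, rows: List[Row], key: str) -> None:
        # Assumption: rows are hash-partitioned across the nodes by a key column.
        for row in rows:
            node = self.data_nodes[hash(row[key]) % len(self.data_nodes)]
            node.cached_partition.append(row)

    def query(self, predicate: Callable[[Row], bool]) -> List[Row]:
        # Fan the filter out to every data node and union the matches.
        results: List[Row] = []
        for node in self.data_nodes:
            results.extend(node.run_filter(predicate))
        return results

# Example: a three-node pool, filtering orders with amount greater than 100.
master = MasterNode([DataNode() for _ in range(3)])
master.cache_external_source(
    [{"order_id": i, "amount": i * 30} for i in range(10)], key="order_id")
print(master.query(lambda r: r["amount"] > 100))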
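The third sketch follows publication 20190362004: a master service receives a declarative statement, signals a control plane that additional hardware is needed, and a provisioning fabric allocates nodes for a storage, data, or compute pool. The toy statement syntax, the parsing, and the class names are hypothetical.

# Minimal sketch of the declarative provisioning flow in publication 20190362004.
# The statement syntax and all class names are hypothetical.

class ProvisioningFabric:
    def provision(self, pool: str, node_count: int) -> None:
        # Stand-in for allocating computer system hardware resources.
        print(f"provisioning {node_count} node(s) for the {pool} pool")

class ControlPlane:
    def __init__(self, fabric: ProvisioningFabric) -> None:
        self.fabric = fabric

    def request_resources(self, pool: str, node_count: int) -> None:
        # The control plane delegates the actual allocation to the fabric.
        self.fabric.provision(pool, node_count)

class MasterService:
    def __init__(self, control_plane: ControlPlane) -> None:
        self.control_plane = control_plane

    def execute(self, statement: str) -> None:
        # Toy parser for a declarative statement such as
        # "ADD 2 NODES TO STORAGE POOL" (hypothetical syntax).
        tokens = statement.upper().split()
        count = int(tokens[1])
        pool = tokens[4].lower()
        self.control_plane.request_resources(pool, count)

# Example: a declarative statement that grows the storage pool by two nodes.
MasterService(ControlPlane(ProvisioningFabric())).execute(
    "ADD 2 NODES TO STORAGE POOL")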