METHOD AND SYSTEM FOR ADAPTIVE PRE-FETCHING OF PAGES INTO A BUFFER POOL

The present disclosure provides a method for pre-fetching one or more pages from a database stored on a data storage device. The one or more pages are pre-fetched into a corresponding buffer pool of one or more buffer pools. The method includes initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools, enabling a decision for pre-fetching the one or more pages from the database into the buffer pool of the one or more buffer pools based on a calculated probability score, and fetching the one or more pages asynchronously from the database into the buffer pool of the one or more buffer pools based on the decision. The pre-fetching is done at any instant of time.

TECHNICAL FIELD

The present disclosure relates to the field of database management systems. More specifically, the disclosure relates to pre-fetching of one or more pages into one or more buffer pools of database management systems.

BACKGROUND

With the advent of technological advancements, huge amounts of digital data are generated every day from a variety of sources. These sources can be companies, firms, corporations, government bodies, banks, or retail chains involved in online and offline business that utilize technology as a part of their operations. These sources want to analyze the data on a regular basis in order to ensure continuous and smooth running of their systems as well as to gain in-depth insights. This kind of data is known as big data, which has become a major trend across many areas of business and technology.

In general, the data is processed by traditional processing systems for performing one or more tasks. Existing database systems keep track of the data, store the data, and continuously update the data at regular intervals of time. These database management systems handle millions of transactions and millions of requests on a day-to-day basis. These database management systems employ complex algorithms to look for repeatable patterns while handling the data along with an extended amount of metadata. Furthermore, these database management systems are employed in various sectors, including the banking sector, the e-commerce sector, the industrial sector and the like, which require continuous processing of the data in order to ensure smooth running of business.

Moreover, the database management systems manage the data efficiently and allow users to perform tasks with utmost ease. In addition, the database management systems increase the efficiency of business operations and reduce overall costs. Further, the database management systems are located in the non-volatile storage of various systems. Examples of database management systems include Oracle, Microsoft SQL Server, Sybase, Ingres, Informix and the like. The database managed by such a system is stored on a disk storage system such as a hard disk drive or a solid state drive.

In general, the database is an organized collection of schemas, tables, queries, reports, views and other objects. The database can be stored on a stable storage system, such as a hard disk, and placed at a remote or local site reachable over a network. The database can be accessed, edited and updated through a server that transfers the requests to the processor associated with the database. Moreover, the database management system handles requests from various client machines. In addition, the database management system makes one or more changes in response to the requests.

Going further, the database management systems store the data in memory for continuous use in the future. Moreover, as technology and computing evolve, more refined data, memory and process handling algorithms and techniques are required at the developer's end to bridge the gap caused by inefficient, delayed or failed transfers and commits of records in the database. A huge amount of data needs to be stored in the database and, at the same time, kept available for future use in order to increase the performance of applications. However, there is a limit to the amount of data that can be stored, depending on the memory space available, due to which some amount of data is consistently evicted from memory to take in new data. This problem is addressed by using a mechanism for caching the data into a volatile memory associated with the database.

The data is consistently cached into a random access memory for temporary storage. Moreover, the data is cached so that requests from the client machines can be served swiftly by reading the data pre-stored in the random access memory. This data corresponds to data recently accessed by the users. In addition, the random access memory includes one or more buffer pools for reading and storing the data. As known in the art, a buffer pool is a place in system memory or on a disk that is used for caching table and index data pages as they are modified or read from the disk. Further, the buffer pool caches disk blocks to optimize block I/O. Furthermore, the primary purpose of the buffer pool is to reduce database file input/output (I/O) and improve the response time for data retrieval. The database writes the data in the form of pages into the buffer pool.

Typically, only clean pages are written into the buffer pool to minimize the risk of data loss. In addition, the buffer pool may be associated with a single database and may be used by more than one table space. Moreover, the buffer pool space is allocated based on the requirements of the user. Further, an adequate buffer pool size is essential for good database performance, as it reduces disk I/O, which is the most time-consuming operation. Large buffer pools also have an effect on query optimization, as more of the work can be done in memory.

Going further, the buffer pool helps speed up the retrieval of data by pre-fetching one or more pages from the database on a regular basis. The one or more pages contain information related to data recently accessed by the users. The buffer pool follows a predictive approach for pre-fetching the one or more pages from the database. The buffer pool utilizes an adaptive algorithm for predicting random usage patterns for pre-fetching the one or more pages. The pre-fetching is done so that any request made to a database server from a client machine can be answered swiftly by searching the data in the buffer pool rather than the entire database. Moreover, the buffer pool flushes some pages from the memory to the database to make room for reading new pages.

In addition, the buffer pool performs the reading of the one or more pages from the database by utilizing one or more methods. One of the methods for pre-fetching the one or more pages from the database is asynchronous pre-fetching of pages. In asynchronous pre-fetching, the buffer pool pre-fetches one or more pages from the database even when the user has not sent any request to the database server to read data. The buffer pool performs a probabilistic task of pre-fetching the one or more pages for future use, for when the user will request some data. The buffer pool utilizes a predictive algorithm for pre-fetching the pages based on a history of the pages requested. The pages are pre-fetched to increase the performance of the database engine.

Moreover, the buffer pool may also read the one or more pages from the database by utilizing another method of the one or more methods: synchronous pre-fetching of pages. In synchronous pre-fetching, the buffer pool pre-fetches one or more pages from the database when the user requests the database server to read some data. The buffer pool utilizes a predictive algorithm for pre-fetching the pages based on the current page requested. The pages are pre-fetched to increase the performance of the database engine.

However, the current systems and methods for pre-fetching pages asynchronously are not accurate enough, and more often than not, the buffer pools pre-fetch pages which may not be required for future usage. This leads to wastage of memory and degrades the performance of the database engine. In addition, the current technique for asynchronous pre-fetching of pages decreases throughput. Moreover, the inaccuracy of the pre-fetching techniques increases the seek time and latency of the database engine. Further, a lack of adaptive pre-fetching and flushing methods limits the efficiency of the overall I/O activity, which in turn affects the overall performance of the system. Furthermore, the present systems and methods increase the response time of the application. Moreover, the probability of finding a random page in the memory is decreased.

In light of the above discussion, there is a need for a method and system that overcomes the above-stated disadvantages and enhances the performance of the buffer pool and thereby the overall performance of users' applications.

SUMMARY

In an aspect of the present disclosure, a method for pre-fetching one or more pages from a database stored on a data storage device is provided. The one or more pages are pre-fetched in a corresponding buffer pool of one or more buffer pools. The method includes initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools, enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a calculated probability score and fetching the one or more pages asynchronously from the database into the buffer pool of the one or more buffer pools based on the decision. The pre-fetching is done at any instant of time.

In an embodiment of the present disclosure, the pre-fetching includes checking a first plurality of parameters associated with an extent of one or more extents associated with the corresponding buffer pool of the one or more buffer pools, predicting the pre-fetching of the one or more pages from the database stored on the data storage device and calculating the probability score for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a second plurality of parameters. The first plurality of parameters includes a size of the extent of the one or more extents and an order associated with the extent of the one or more extents. The predicting is performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents. The second plurality of parameters includes a number of the page IDs in the preceding sequence of the size of the extent of the one or more extents and an actual size of the extent of the one or more extents.

In another embodiment of the present disclosure, the extent of the one or more extents is associated with the corresponding buffer pool of the one or more buffer pools. In yet another embodiment of the present disclosure, the order of the extent of the one or more extents is in a sorted increasing order.

In an embodiment of the present disclosure, the method further includes storing the one or more pages pre-fetched asynchronously from the database into the corresponding buffer pool of the one or more buffer pools.

In another embodiment of the present disclosure, the buffer pool of the one or more buffer pools is dynamically selected for storing the pre-fetched one or more pages.

In an embodiment of the present disclosure, the fetching further includes allowing the buffer pool of the one or more buffer pools to pre-fetch the one or more pages.

In an embodiment of the present disclosure, the method further includes analyzing an availability of a list. The availability is analyzed for unlocking a page list and an LRU list associated with the corresponding buffer pool of the one or more buffer pools. The page list and the LRU list are unlocked when the list is available. The analyzing is done repetitively.

In an embodiment of the present disclosure, the method further includes updating a read in progress list based on a non-availability of a free list.

In another aspect of the present disclosure, a method for pre-fetching one or more pages from a database stored on a data storage device is provided. The one or more pages are pre-fetched in a corresponding buffer pool of one or more buffer pools. The method includes receiving a request from a user in form of a query, initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools based on the received request, enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools and fetching the one or more pages from the database into the buffer pool of the one or more buffer pools based on the decision. The request is received for reading data from the database stored on the data storage device. The request is received in real time. The pre-fetching is done by reading the one or more pages from the database to the buffer pool of the one or more buffer pools. The decision for pre-fetching is taken based on the calculated probability score.

In an embodiment of the present disclosure, the pre-fetching includes checking a plurality of parameters associated with an extent of one or more extents associated with the corresponding buffer pool of the one or more buffer pools, predicting the pre-fetching of the one or more pages from the database stored on the data storage device and calculating a probability score for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools. The plurality of parameters includes a size of the extent of the one or more extents and an order associated with the extent of the one or more extents. The predicting is performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents.

In another embodiment of the present disclosure, the extent of the one or more extents is associated with the corresponding buffer pool of the one or more buffer pools. In yet another embodiment of the present disclosure, the order of the extent of the one or more extents is in a sorted increasing order.

In an embodiment of the present disclosure, the method further includes reading the data requested by the user from the corresponding buffer pool of the one or more buffer pools currently storing the data. The data storage device includes at least one of a hard disk drive and a solid state drive.

In an embodiment of the present disclosure, the method further includes storing the one or more pages pre-fetched from the database into the corresponding buffer pool of the one or more buffer pools.

In an embodiment of the present disclosure, the method further includes updating a read in progress list based on a pre-determined criterion. The pre-determined criterion includes the page list not being null.

In an embodiment of the present disclosure, the method further includes analyzing whether the database wants to read a single data item or not. The analyzing is done at regular intervals of time. The analyzing is done when the prediction for the pre-fetching is correct.

In yet another aspect of the present disclosure, a computer-program product for pre-fetching one or more pages from a database stored on a data storage device is provided. The one or more pages are pre-fetched in a corresponding buffer pool of one or more buffer pools. The computer-program product includes a computer readable storage medium having a computer program stored thereon for performing the steps of initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools, enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a calculated probability score and fetching the one or more pages asynchronously from the database into the buffer pool of the one or more buffer pools based on the decision. The pre-fetching is done at any instant of time.

In an embodiment of the present disclosure, the pre-fetching includes checking a first plurality of parameters associated with an extent of one or more extents associated with the corresponding buffer pool of the one or more buffer pools, predicting the pre-fetching of the one or more pages from the database stored on the data storage device and calculating the probability score for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a second plurality of parameters. The first plurality of parameters includes a size of the extent of the one or more extents and an order associated with the extent of the one or more extents. The predicting is performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents. The second plurality of parameters includes a number of the page IDs in the preceding sequence of the size of the extent of the one or more extents and an actual size of the extent of the one or more extents.

In an embodiment of the present disclosure, the computer-program product further includes storing the one or more pages pre-fetched asynchronously from the database into the corresponding buffer pool of the one or more buffer pools.

BRIEF DESCRIPTION OF THE FIGURES

For a more complete understanding of example embodiments of the present technology, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

FIG. 1 illustrates an overview of a database management system for performing management of buffer memory associated with a database, in accordance with various embodiments of the present disclosure.

FIG. 2 illustrates a flowchart for performing an asynchronous pre-fetching of the one or more pages from the database, in accordance with various embodiments of the present disclosure.

FIG. 3 illustrates another flowchart for performing the asynchronous pre-fetching of the one or more pages from the database.

FIG. 4 illustrates a flowchart for performing synchronous pre-fetching of the one or more pages from the database, in accordance with various embodiments of the present disclosure.

FIG. 5 illustrates another flowchart for performing the synchronous pre-fetching of the one or more pages from the database.

FIG. 6 depicts a block diagram of a computing device for practicing various embodiments of the present disclosure.

DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present technology. It will be apparent, however, to one skilled in the art that the present technology can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form only in order to avoid obscuring the present technology.

Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present technology. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.

Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present technology. Similarly, although many of the features of the present technology are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the present technology is set forth without any loss of generality to, and without imposing limitations upon, the present technology.

FIG. 1 illustrates an overview of a database management system 100 for performing management of buffer memory associated with a database, in accordance with various embodiments of the present disclosure. In addition, the database management system 100 is configured for pre-fetching one or more pages from a database stored on a data storage device. The database management system 100 includes a query 102, a random access memory 104 and a primary storage system 106. Moreover, the random access memory 104 includes one or more buffer pools 104a. In addition, the primary storage system 106 includes a database 108. In an embodiment of the present disclosure, the database management system 100 is configured for managing operations of the buffer memory for increasing performance. In addition, the one or more buffer pools 104a are associated with a database management system (DBMS), a database engine, information retrieval systems and the like.

Further, in an embodiment of the present disclosure, the database management system 100 handles pre-fetching of one or more pages from the database 108 of the primary storage system 106 by performing one or more operations. In another embodiment of the present disclosure, the database management system 100 provides an improved method for pre-fetching the one or more pages for increasing the performance. In yet another embodiment of the present disclosure, the database management system 100 enables faster access to the one or more pages requested by a user in real time. In yet another embodiment of the present disclosure, the database management system 100 increases a probability of finding a random page of the one or more pages (as described below in the patent application). In an embodiment of the present disclosure, the database management system 100 performs asynchronous pre-fetching of the one or more pages. In another embodiment of the present disclosure, the database management system 100 performs synchronous pre-fetching of the one or more pages.

In an embodiment of the present disclosure, the user present at a client's end provides one or more inputs. In another embodiment of the present disclosure, the one or more inputs may be any type of input for carrying out one or more processes at both ends of an application. Moreover, the one or more processes may be any process including requesting a page, carrying out a transaction in real time and the like. Further, the user maintains a plurality of records in the database 108. In an embodiment of the present disclosure, the plurality of records corresponds to the one or more pages accessed by the user over a period of time. In addition, the one or more pages are updated each time the user sends a request for accessing the one or more pages.

Moreover, the one or more inputs or requests provided by the user are provided to the database management system 100 in the form of the query 102. In an embodiment of the present disclosure, the query 102 is generated by a query processing system. In addition, the query processing system utilizes the one or more inputs from the user for generating the query 102. Further, the query 102 is generated for carrying out any type of updating in the database 108. In an embodiment of the present disclosure, the query processing system may be any system for transferring the query 102 from the client's end. In another embodiment of the present disclosure, the query processing system formats the one or more inputs into the query 102 for responding to user requests. Examples of the query processing system include but may not be limited to SQL servers, NoSQL servers and Apache servers.

Further, the query 102 may be provided for performing one or more operations on the database 108. In an embodiment of the present disclosure, the one or more operations are performed by executing one or more changes in the database 108. The one or more operations include update, delete, write and the like. Going further, the query processing system is connected to the random access memory 104. In an embodiment of the present disclosure, the query processing system is connected to the random access memory 104 through a bus. Moreover, the random access memory 104 is configured for storing the one or more pages randomly in memory. The random access memory 104 enables accessing of data in a random way. In an embodiment of the present disclosure, the random access memory 104 enables a faster access to a specific set of data stored.

In an embodiment of the present disclosure, the random access memory 104 may have any amount of memory for performing one or more operations and storing the one or more pages in the memory. The random access memory 104 may be of 1 GB, 2 GB, 4 GB, 8 GB and the like. In an embodiment of the present disclosure, the one or more pages are temporarily stored in the random access memory 104. In an embodiment of the present disclosure, the random access memory 104 is a volatile memory which stores the one or more pages only as long as the system remains in a powered-on state.

Moreover, the random access memory 104 includes one or more buffer pools 104a. In addition, a portion of the random access memory is occupied by the one or more buffer pools 104a. In an embodiment of the present disclosure, the random access memory 104 includes a buffer cache. In another embodiment of the present disclosure, the buffer cache includes the one or more buffer pools 104a. In an embodiment of the present disclosure, the one or more queries generated by the query processing system are transferred to the one or more buffer pools 104a. In an embodiment of the present disclosure, the one or more buffer pools 104a perform one or more operations based on the query 102 received from the query processing system.

Moreover, the random access memory 104 is connected to the primary storage system 106. In an embodiment of the present disclosure, the random access memory 104 is connected to the primary storage system 106 through a bus. Further, the primary storage system 106 includes the database 108 for storing one or more pieces of information associated with one or more processes. In an embodiment of the present disclosure, the primary storage system 106 is a hard disk. In another embodiment of the present disclosure, the primary storage system 106 is a solid state drive (SSD). In an embodiment of the present disclosure, the primary storage system 106 stores the one or more pages permanently. Moreover, the one or more pages are stored after being updated in the one or more buffer pools 104a.

Further, the primary storage system 106 stores the one or more pages in the one or more buffer pools 104a and requests the one or more pages stored in the one or more buffer pools 104a for faster application performance. In an embodiment of the present disclosure, the one or more pages are retrieved from the one or more buffer pools 104a when the user sends a request for access to any information or data which has been accessed earlier. In an embodiment of the present disclosure, the primary storage system 106 transfers the data in the form of the one or more pages to the one or more buffer pools 104a. In an embodiment of the present disclosure, the primary storage system 106 transfers the data on a regular basis. The data is sent from the database 108 to the one or more buffer pools 104a. In an embodiment of the present disclosure, the primary storage system 106 may have any amount of memory depending on a user requirement.

Further, the one or more buffer pools 104a may be of any size. The size of the one or more buffer pools 104a includes but may not be limited to 32 MB, 64 MB, 128 MB and 256 MB. In an embodiment of the present disclosure, the database management system 100 allocates memory for the one or more buffer pools 104a in a dynamic way. Going further, the one or more buffer pools 104a are a place or location in a system memory or disk. Moreover, the one or more buffer pools 104a are utilized for caching of table and index pages. In an embodiment of the present disclosure, the one or more buffer pools 104a cache the table and index data pages after reading them from the database 108.

In an embodiment of the present disclosure, the one or more buffer pools 104a are configured for improving a response time for retrieval of data stored in the one or more buffer pools 104a. Moreover, the data is stored in the form of tables in the one or more buffer pools 104a. In an embodiment of the present disclosure, the database 108 writes the one or more pages into the one or more buffer pools 104a. In an embodiment of the present disclosure, the one or more pages are written for allowing faster access to the one or more pages when requested by a database server. In an embodiment of the present disclosure, the one or more pages are clean pages. Moreover, the size of the one or more buffer pools 104a is kept large in order to provide good performance of the database 108.

Going further, each of the one or more buffer pools 104a contains a page cache for caching one or more pages of files. In an embodiment of the present disclosure, the database management system 100 includes unified buffer pools and page caches. In an embodiment of the present disclosure, I/O is performed in the page cache. In an embodiment of the present disclosure, the cached data is represented as a file as well as a block. In an embodiment of the present disclosure, the one or more buffer pools 104a store a single instance of the data.

Moreover, the database 108 is configured for reading and writing data stored in the one or more pages from the one or more buffer pools 104a. In an embodiment of the present disclosure, the one or more buffer pools 104a are maintained based on least recently used basis. In another embodiment of the present disclosure, the one or more buffer pools 104a perform eviction of pages of the one or more pages stored based on the least recently used basis. In addition, the one or more buffer pools 104a enable control over deciding a budget over the memory to be utilized for performing the one or more operations. Further, each of the one or more buffer pools 104a maintains a data structure and one or more background workers.

In an embodiment of the present disclosure, each of the one or more buffer pools 104a maintains a hash table for gaining faster access to the data. In an embodiment of the present disclosure, each of the one or more buffer pools 104a maintains a least recently used (LRU) list and a dirty page list for implementing temporal locality and enabling flushing of the right set of pages at any given time. Moreover, each of the one or more buffer pools 104a is managed on a least recently used (LRU) basis for storing a data block in the most recently used buffer available for subsequent access by an application. The least recently used mechanism helps in flushing the pages which have been used least recently.

In another embodiment of the present disclosure, the pages are flushed when the one or more buffer pools 104a are full and require free space for storing more pages from the database 108. In an embodiment of the present disclosure, the dirty page list in each of the one or more buffer pools 104a stores pages which have been updated with data but have not yet been written to the database 108 on the disk. In an embodiment of the present disclosure, the one or more buffer pools 104a request new data after the updated data pages are written to the database 108.
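
By way of illustration only, the bookkeeping described above can be pictured as follows. This is a minimal sketch assuming Python and illustrative names (BufferPool, write_to_database); the disclosure names the structures (hash table, LRU list, dirty page list) but not any particular implementation.

```python
# Illustrative sketch: a hash table for page lookup, an LRU ordering for
# temporal locality, and a dirty page list of pages not yet written to disk.
# Python's OrderedDict doubles as the hash table and the LRU list.
from collections import OrderedDict

def write_to_database(page_id, data):
    """Stub standing in for the disk write of a dirty page."""
    pass

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> data; order tracks recency
        self.dirty = set()           # updated in memory, not yet on disk

    def touch(self, page_id):
        # Every access moves the page to the most-recently-used end.
        self.pages.move_to_end(page_id)

    def put(self, page_id, data, dirty=False):
        if len(self.pages) >= self.capacity:
            self.evict_lru()
        self.pages[page_id] = data
        if dirty:
            self.dirty.add(page_id)

    def evict_lru(self):
        # The least recently used page is evicted first, flushed if dirty.
        page_id, data = self.pages.popitem(last=False)
        if page_id in self.dirty:
            write_to_database(page_id, data)
            self.dirty.discard(page_id)
```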

In an embodiment of the present disclosure, each of the one or more buffer pools 104a contains a free page list for use as required. Moreover, the background workers are configured for checking the health of each of the one or more buffer pools 104a on a frequent basis. In addition, the background workers are configured to check whether the pages are flushed or reclaimed accordingly. In an embodiment of the present disclosure, the one or more buffer pools 104a implement a quasi-adaptive algorithm for the flushing of the pages. In another embodiment of the present disclosure, the quasi-adaptive algorithm ensures that the database 108 is not in a paused state for a long time during a shortage of free pages in the one or more buffer pools 104a.

In an embodiment of the present disclosure, the background workers perform the reading and the writing of the data for avoiding separate page writes. Going further, the one or more buffer pools 104a handle the one or more pages requested by the operating system or the database. In an embodiment of the present disclosure, each of the one or more buffer pools 104a caches the one or more pages in 8 KB blocks. The one or more pages contain data recently accessed by the user and predicted data that would be required in the future by the user.
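
The quasi-adaptive flushing and the background workers described above can be sketched as a single worker loop. This is a sketch under stated assumptions: the health check is simply the fraction of free pages, the target fraction and wake interval are illustrative tunables, and the pool methods used (free_fraction, flush_one_lru_dirty_page) are hypothetical; none of these are values or names from the disclosure.

```python
# Sketch of a background flushing worker: wake periodically, check pool
# health, and flush least-recently-used dirty pages so that the database
# is not paused for long during a shortage of free pages.
import threading
import time

def background_flusher(pool, stop_event, free_target=0.25, interval_s=0.1):
    while not stop_event.is_set():
        while pool.free_fraction() < free_target:   # hypothetical health check
            pool.flush_one_lru_dirty_page()         # hypothetical flush step
        time.sleep(interval_s)

# Usage, given a pool object providing the two methods above:
# stop = threading.Event()
# threading.Thread(target=background_flusher, args=(pool, stop), daemon=True).start()
```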

Moreover, the user requests a page of the one or more pages from the server of the database 108. In addition, the server of the database 108 accesses a buffer pool of the one or more buffer pools 104a for accessing the page of the one or more pages stored in the buffer pool. Further, the buffer pool of the one or more buffer pools 104a checks for the requested page of the one or more pages in the memory. In an embodiment of the present disclosure, the buffer pool of the one or more buffer pools 104a returns the page to the user if the page is in the memory. In an embodiment of the present disclosure, the buffer pool performs a logical read. In an embodiment of the present disclosure, latency time during the logical read is in nanoseconds.

In an embodiment of the present disclosure, the buffer pool of the one or more buffer pools 104a performs a physical read when the page requested is not found in the buffer memory. The buffer pool of the one or more buffer pools 104a accesses the database 108 for reading the page of the one or more pages requested by the user. Moreover, the buffer pool of the one or more buffer pools 104a reads the page from the database 108 into the memory of the buffer pool of the one or more buffer pools 104a. In addition, the buffer pool of the one or more buffer pools 104a returns the page to the user. In an embodiment of the present disclosure, the latency time during the physical read is in milliseconds. In an embodiment of the present disclosure, one or more changes are made in the one or more buffer pools 104a during the reading and the writing of the one or more pages.
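
The logical-read/physical-read flow just described reduces to a short routine. In this minimal sketch, plain dictionaries stand in for the buffer pool memory and for the database 108 on disk; only the hit/miss logic comes from the description, the rest is illustrative.

```python
def get_page(page_id, pool, disk):
    if page_id in pool:
        return pool[page_id]    # logical read: served from memory (nanoseconds)
    page = disk[page_id]        # physical read: fetched from disk (milliseconds)
    pool[page_id] = page        # cached for subsequent requests
    return page

# Usage: a miss followed by a hit on the same page.
disk = {7: "page-7-bytes"}
pool = {}
get_page(7, pool, disk)  # physical read; page 7 is now cached
get_page(7, pool, disk)  # logical read from the buffer pool
```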

Going further, the one or more buffer pools 104a perform pre-fetching of the one or more pages in the memory from the database 108 in two ways. Moreover, the two ways include synchronous pre-fetching of the one or more pages and the asynchronous pre-fetching of the one or more pages. In an embodiment of the present disclosure, the synchronous pre-fetching and the asynchronous pre-fetching are performed by the database 108. In another embodiment of the present disclosure, the synchronous pre-fetching and the asynchronous pre-fetching are performed in conjunction by the database 108 and the one or more buffer pools 104a. In an embodiment of the present disclosure, the user input is changed into a key for reading the requested data. In an embodiment of the present disclosure, the page is searched based on a page id provided by the database server after receiving the request.

Going further, the one or more buffer pools 104a perform pre-fetching of pages based on a pre-determined criterion. In addition, the pre-determined criterion includes anticipating a need for the one or more pages which would be required in the future for reading and writing. Moreover, the pre-fetching of pages is done for allowing faster access to the one or more pages. In an embodiment of the present disclosure, the one or more buffer pools 104a predict a set of pages which would be required for use and fetch the set of pages from the database 108 at any time. In an embodiment of the present disclosure, the pre-fetching of the one or more pages is done based on an adaptive algorithm.

In an embodiment of the present disclosure, the one or more pages are pre-fetched into the one or more buffer pools 104a when the data is read from the database 108 in an asynchronous method. In an embodiment of the present disclosure, the pre-fetching of the one or more pages assists in storing the one or more pages in the one or more buffer pools 104a based on a prediction for requirement of the one or more pages in the future. In an embodiment of the present disclosure, each of the one or more buffer pools 104a includes a pre-fetcher. Moreover, the pre-fetcher is configured for carrying out the pre-fetching of the one or more pages from the database 108.

In an embodiment of the present disclosure, the pre-fetching of the one or more pages is performed based on past access patterns. In an embodiment of the present disclosure, the pre-fetching is done for storing recently accessed data. In another embodiment of the present disclosure, the pre-fetching of the one or more pages is done for speeding up the application performance. In yet another embodiment of the present disclosure, the pre-fetching is done for optimizing temporal locality and spatial locality. In an embodiment of the present disclosure, the pre-fetching of the one or more pages is performed based on a probabilistic calculation of the one or more pages which would be required at a later stage. In an embodiment of the present disclosure, the pre-fetching is performed for decreasing a seek time.

Moreover, in an embodiment of the present disclosure, each of the one or more buffer pools 104a includes one or more background workers. In addition, the one or more buffer pools 104a pre-fetch a set of pages from the database 108 even when the user does not request the page from the database server. In an embodiment of the present disclosure, the one or more background workers perform the task of fetching the set of pages when no request is received from the user at the client's end. In an embodiment of the present disclosure, the pre-fetching is done for predicting the set of pages that would be required for future usage.

In an embodiment of the present disclosure, the one or more background workers fetch the set of pages based on a computation (as described below in the patent application). In an embodiment of the present disclosure, the set of pages are randomly accessed pages. Moreover, the set of pages is read from the database from time to time. In an embodiment of the present disclosure, the set of pages is accessed based on a history of access. The set of pages includes an additional amount of data required for future usage. In an embodiment of the present disclosure, the one or more buffer pools 104a decide to keep the set of pages ready in memory for reading.

In an embodiment of the present disclosure, the additional amount of data corresponds to a probabilistic amount of data read by the database 108. In an embodiment of the present disclosure, the additional amount of data is read in real time by the database 108. In an embodiment of the present disclosure, the additional data or additional one or more pages are retrieved on a most recently used basis. In an embodiment of the present disclosure, the additional data or additional one or more pages are retrieved based on a history of the data requested by the user.

Going further, the additional one or more pages are retrieved from the database 108 and cached in the one or more buffer pools 104a. In an embodiment of the present disclosure, the one or more buffer pools 104a read the additional one or more pages or data from the database 108 and store them in the memory. Moreover, in an embodiment of the present disclosure, the one or more pages are fetched based on an algorithm. In an embodiment of the present disclosure, the algorithm is based on a probabilistic calculation for selecting the one or more pages that may be required in future.

In addition, the database 108 returns the data requested by the user after fetching the data from the one or more buffer pools 104a. In an embodiment of the present disclosure, the data or the one or more pages containing the data are read from the one or more buffer pools 104a. In an embodiment of the present disclosure, the database 108 stores or keeps the additional data in the memory. In another embodiment of the present disclosure, the database 108 stores or keeps the additional one or more pages in the memory.

Further, in an embodiment of the present disclosure, the one or more buffer pools 104a include a pre-fetcher for pre-fetching the one or more pages in an asynchronous way. The pre-fetcher is configured for initiating the pre-fetching of the one or more pages from the database 108 stored on the data storage device into the corresponding buffer pool of the one or more buffer pools 104a. The pre-fetching is done at any instant of time. Furthermore, the pre-fetcher is configured for pre-fetching the set of pages from the database server based on a pre-determined criterion. Moreover, the pre-determined criterion includes computing a probability score for determining whether the set of pages should be pre-fetched or not (as explained below in the patent application).

Going further, the pre-fetching of the right set of pages is initiated when no request is received from the user for accessing any page. In addition, the pre-fetcher associated with the one or more buffer pools 104a pre-fetches the one or more pages based on a separate page pre-fetch algorithm.

Going further, the pre-fetcher associated with each of the one or more buffer pools 104a is configured for performing one or more operations for predicting a right set of pages from the database 108 for increasing efficiency of the application. In an embodiment of the present disclosure, the one or more operations are performed for increasing the speed of access to the one or more pages requested by the user. In another embodiment of the present disclosure, the speed of access is increased by fetching the pages which are relevant and required for the future usage. In addition, each of the one or more buffer pools 104a is associated with one or more extents. In an embodiment of the present disclosure, each of the one or more extents includes eight pages. In another embodiment of the present disclosure, each page in each of the one or more extents has a size of 8 KB. In an embodiment of the present disclosure, each of the one or more extents has a size of 64 KB. In an embodiment of the present disclosure, the one or more extents include one or more uniform extents and one or more mixed extents.
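
The extent geometry stated above (eight pages of 8 KB each, giving a 64 KB extent) can be captured directly. The page-ID-to-extent mapping in the sketch below assumes sequentially assigned page IDs, which the description implies but does not state.

```python
# Extent geometry from the description: 8 pages x 8 KB = 64 KB per extent.
PAGE_SIZE_KB = 8
PAGES_PER_EXTENT = 8
EXTENT_SIZE_KB = PAGE_SIZE_KB * PAGES_PER_EXTENT  # 64

def extent_of(page_id: int) -> int:
    # Assumes page IDs are assigned sequentially.
    return page_id // PAGES_PER_EXTENT

assert EXTENT_SIZE_KB == 64
assert extent_of(17) == 2  # pages 16..23 belong to extent 2
```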

In an embodiment of the present disclosure, the pre-fetching includes checking a first plurality of parameters associated with an extent of the one or more extents associated with the corresponding buffer pool of the one or more buffer pools 104a. In addition, the first plurality of parameters includes a size of the extent of the one or more extents and an order associated with the extent of the one or more extents. Furthermore, the size of the extent is checked for ensuring the availability of page IDs which have not been used and are packed. In an embodiment of the present disclosure, the checking is performed for assuring that the page IDs are not too scattered for performing reading operations from the extent of the one or more extents. Moreover, in an embodiment of the present disclosure, the checking is performed for ensuring that the page IDs are in a sorted order. In another embodiment of the present disclosure, the sorted order is an increasing order.

In addition, the pre-fetcher is configured for predicting the pre-fetching of the one or more pages from the database 108 stored on the data storage device. Moreover, the predicting is performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents. Further, the pre-fetcher is configured for calculating the probability score for pre-fetching the one or more pages from the database 108 to the buffer pool of the one or more buffer pools 104a based on a second plurality of parameters. Furthermore, the second plurality of parameters includes a number of the page IDs in the preceding sequence of the size of the extent of the one or more extents and an actual size of the extent of the one or more extents.
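
The disclosure names the inputs to the score (the number of page IDs in the preceding sequence and the actual size of the extent) but not the formula, so the sketch below takes the simplest reading: the score is the length of the preceding sorted run of page IDs divided by the extent size, compared against a tunable threshold. Both the formula and the threshold value are assumptions for illustration, not the claimed computation.

```python
def preceding_sequence(page_ids):
    """Length of the strictly increasing consecutive run ending the list."""
    run = 1
    for prev, cur in zip(page_ids, page_ids[1:]):
        run = run + 1 if cur == prev + 1 else 1
    return run

def should_prefetch(page_ids, extent_size=8, threshold=0.5):
    if not page_ids:
        return False
    # Score: fraction of the extent covered by the preceding sequence.
    score = preceding_sequence(page_ids) / extent_size
    return score >= threshold

# Usage: a strong sequential run within an 8-page extent enables the decision.
print(should_prefetch([16, 17, 18, 19, 20]))  # 5/8 >= 0.5 -> True
print(should_prefetch([16, 3, 42]))           # run of 1  -> False
```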

Going further, the database management system 100 is configured for enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on the calculated probability score. Moreover, the pre-fetcher is configured for fetching the one or more pages asynchronously from the database into the buffer pool of the one or more buffer pools based on the decision.

In an embodiment of the present disclosure, the page is pre-fetched if the preceding sequence of the page IDs is good enough. In an embodiment of the present disclosure, the page is pre-fetched if a pattern of reads is detected by the pre-fetcher. In another embodiment of the present disclosure, the pattern of reads must be strong enough for allowing the pre-fetching of the page.

In an embodiment of the present disclosure, the method further includes storing the one or more pages pre-fetched asynchronously from the database into the corresponding buffer pool of the one or more buffer pools 104a. In another embodiment of the present disclosure, the buffer pool of the one or more buffer pools 104a is dynamically selected for storing the pre-fetched one or more pages. In an embodiment of the present disclosure, the fetching further includes allowing the buffer pool of the one or more buffer pools 104a to pre-fetch the one or more pages.

In an embodiment of the present disclosure, the method further includes analyzing an availability of a list. The availability is analyzed for unlocking a page list and an LRU list associated with the corresponding buffer pool of the one or more buffer pools 104a. In addition, the page list and the LRU list are unlocked when the list is available and the analyzing is done repetitively. In an embodiment of the present disclosure, the method further includes updating a read in progress list based on a non-availability of a free list.

In an embodiment of the present disclosure, the one or more pages are pre-fetched into the one or more buffer pools 104a by a synchronous method. In an embodiment of the present disclosure, the pre-fetching of the one or more pages assists in storing the one or more pages in the one or more buffer pools 104a based on a prediction for requirement of the one or more pages in the future. In an embodiment of the present disclosure, the pre-fetcher performs the synchronous pre-fetching of the one or more pages from the database 108.

In an embodiment of the present disclosure, the synchronous pre-fetching of the one or more pages is performed based on past access patterns. In an embodiment of the present disclosure, the synchronous pre-fetching is done for storing a recently accessed data. In another embodiment of the present disclosure, the synchronous pre-fetching of the one or more pages is done for speeding up the application performance. In yet another embodiment of the present disclosure, the synchronous pre-fetching is done for optimizing temporal locality and spatial locality. In an embodiment of the present disclosure, the synchronous pre-fetching of the one or more pages is performed based on a probabilistic calculation of the one or more pages which would be required at a later stage. In an embodiment of the present disclosure, the synchronous pre-fetching is performed for decreasing a seek time.

Moreover, in an embodiment of the present disclosure, the user at the client's end requests the database 108 for reading data. The database 108 searches for the data in the buffer pool of the one or more buffer pools 104a. In addition, the database 108 reads the data from the buffer pool of the one or more buffer pools 104a corresponding to a request of the user. In an embodiment of the present disclosure, the database 108 performs a dynamic task of reading additional data along with the data requested by the user. Further, the additional data is read based on the data requested by the user.

In an embodiment of the present disclosure, the additional data corresponds to a probabilistic amount of data read by the database 108 along with the data requested by the user. In an embodiment of the present disclosure, the additional data is read in real time by the database 108. In an embodiment of the present disclosure, the additional data or additional one or more pages are retrieved on a most recently used basis. In an embodiment of the present disclosure, the additional data or additional one or more pages are retrieved based on a relation between the page requested by the user and the one or more pages read additionally.

Going further, the additional pages are retrieved from the database 108 and cached in the one or more buffer pools 104a. In an embodiment of the present disclosure, the one or more buffer pools 104a read the additional pages or data from the database 108 and store them in the memory. Moreover, in an embodiment of the present disclosure, the one or more pages are fetched based on an algorithm. In an embodiment of the present disclosure, the algorithm is based on a probabilistic calculation for selecting the one or more pages that may be required in future.

Further, in an embodiment of the present disclosure, the one or more buffer pools 104a include the pre-fetcher for pre-fetching the one or more pages in a synchronous way. The database 108 is configured for receiving the request from the user in the form of the query 102. In addition, the request is received for reading data from the database 108 stored on the data storage device. Moreover, the request is received in real time. Going further, the pre-fetcher is configured for initiating the pre-fetching of the one or more pages from the database 108 stored on the data storage device into the corresponding buffer pool of the one or more buffer pools 104a based on the received request. In addition, the pre-fetching is done by reading the one or more pages from the database 108 into the buffer pool of the one or more buffer pools 104a.

In addition, each of the one or more buffer pools 104a is associated with one or more extents. In an embodiment of the present disclosure, each of the one or more extents includes eight pages. In another embodiment of the present disclosure, each page in each of the one or more extents has a size of 8 KB. In an embodiment of the present disclosure, each of the one or more extents has a size of 64 KB. In an embodiment of the present disclosure, the one or more extents include one or more uniform extents and one or more mixed extents.

Going further, the pre-fetcher associated with each of the one or more buffer pools 104a is configured for performing one or more operations for predicting a right set of pages from the database 108 for increasing efficiency of the application (as described below in the patent application). In an embodiment of the present disclosure, the one or more operations are performed for increasing speed of access to the one or more pages requested by the user. In another embodiment of the present disclosure, the speed of access is increased by fetching the pages which are relevant and required for future usage.

Moreover, the pre-fetching includes checking a plurality of parameters associated with an extent of the one or more extents associated with the corresponding buffer pool of the one or more buffer pools 104a. In addition, the plurality of parameters includes the size of the extent of the one or more extents and the order associated with the extent of the one or more extents. Furthermore, the size of the extent is checked for ensuring the availability of the page IDs which have not been used and are packed. In an embodiment of the present disclosure, the checking is performed for assuring that the page IDs are not too scattered for performing reading operations from the extent of the one or more extents. Moreover, in an embodiment of the present disclosure, the checking is performed for ensuring that the page IDs are in a sorted order. In another embodiment of the present disclosure, the sorted order is an increasing order.

In addition, the pre-fetcher is configured for predicting the pre-fetching of the one or more pages from the database 108 stored on the data storage device. Moreover, the predicting is performed by checking the preceding sequence of the page IDs in the corresponding extent of the one or more extents. Further, the pre-fetcher is configured for calculating the probability score for pre-fetching the one or more pages from the database 108 to the buffer pool of the one or more buffer pools 104a. In an embodiment of the present disclosure, the probability score is calculated for checking whether the preceding sequence of the page ids is good enough for allowing pre-fetching of the page.

Going further, the database management system 100 is configured for enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools 104a based on the calculated probability score. Moreover, the pre-fetcher is configured for fetching the one or more pages synchronously from the database into the buffer pool of the one or more buffer pools 104a based on the decision. In an embodiment of the present disclosure, the page is pre-fetched if the preceding sequence of the page IDs is good enough. In an embodiment of the present disclosure, the page is pre-fetched if a pattern of reads is detected by the pre-fetcher. In another embodiment of the present disclosure, the pattern of reads must be strong enough for allowing the pre-fetching of the page.
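
One way to picture the synchronous path is the following sketch: the requested page is read, and when the decision function (standing in for the probability-score check above) fires, the remaining pages of the same extent are pulled in during the same request. All names here are illustrative assumptions, and the sketch assumes every page of the extent exists on disk.

```python
PAGES_PER_EXTENT = 8  # per the extent geometry described earlier

def extent_pages(page_id):
    start = (page_id // PAGES_PER_EXTENT) * PAGES_PER_EXTENT
    return range(start, start + PAGES_PER_EXTENT)

def read_with_prefetch(page_id, history, pool, disk, decide):
    history.append(page_id)
    wanted = {page_id}
    if decide(history):
        # Pull the sibling pages of the extent along with the requested page.
        wanted.update(p for p in extent_pages(page_id) if p in disk)
    for p in wanted:
        pool.setdefault(p, disk[p])
    return pool[page_id]

# Usage: after a run of sequential requests, the whole extent becomes resident.
disk = {p: f"bytes-{p}" for p in range(32)}
pool, history = {}, [16, 17, 18]
read_with_prefetch(19, history, pool, disk, lambda h: len(h) >= 4)
print(sorted(pool))  # pages 16..23 (extent 2)
```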

In an embodiment of the present disclosure, the present method improves the efficiency of the overall I/O which in turn increases the performance of the system.

In an embodiment of the present disclosure, the algorithm for pre-fetching of the pages aligns with the usage pattern. In another embodiment of the present disclosure, the usage pattern is largely random.

In another embodiment of the present disclosure, the algorithm utilized for the pre-fetching is an adaptive algorithm.

In yet another embodiment of the present disclosure, adaptability is achieved by ensuring temporal read and spatial write locality by using the separate page pre-fetch algorithm and cache.

In an embodiment of the present disclosure, the algorithm is a self-learning algorithm at run time.

In an embodiment of the present disclosure, the technique adopted to achieve the self-learning algorithm includes tuning of one or more parameters.

In an embodiment of the present disclosure, the algorithm helps in increasing the probability of finding a random page in the memory.

In an embodiment of the present disclosure, the algorithm helps in improving a response time of the database server.

The present disclosure allows users to build high performance applications for huge data or big data handling. Moreover, the present disclosure helps in achieving high throughput with limited resources and constraints. In addition, the high throughput results in lower utilization of resources. Further, the present disclosure provides high database performance. Furthermore, the high-performing database enables a business to sustain a higher number of clients/users and maintain SLAs during high load scenarios, resulting in uninterrupted business and thereby saving loss of business and money.

FIG. 2 illustrates a flowchart 200 for performing the asynchronous pre-fetching of the one or more pages from the database 108, in accordance with various embodiments of the present disclosure. It may be noted that to explain the process steps of flowchart 200, references will be made to the system elements of FIG. 1. It may also be noted that the flowchart 200 may have a lesser or greater number of steps for performing the asynchronous pre-fetching of the one or more pages from the database 108.

The flowchart 200 initiates at step 202. At step 204, the pre-fetcher initiates the pre-fetching of the one or more pages from the database 108 stored on the data storage device into the corresponding buffer pool of the one or more buffer pools 104a. The pre-fetching is done at any instant of time. The pre-fetching is performed by checking the first plurality of parameters associated with the extent of the one or more extents associated with the corresponding buffer pool of the one or more buffer pools 104a, predicting the pre-fetching of the one or more pages from the database 108 stored on the data storage device and calculating the probability score for pre-fetching the one or more pages from the database 108 to the buffer pool of the one or more buffer pools 104a based on the second plurality of parameters. The second plurality of parameters includes the number of the page IDs in the preceding sequence of the size of the extent of the one or more extents and the actual size of the extent of the one or more extents. At step 206, the pre-fetcher enables the decision for pre-fetching the one or more pages from the database 108 to the buffer pool of the one or more buffer pools 104a based on the calculated probability score. At step 208, the pre-fetcher fetches the one or more pages asynchronously from the database 108 into the buffer pool of the one or more buffer pools 104a based on the decision. The flowchart 200 terminates at step 210.
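
Expressed with the hypothetical sketches given earlier, the step sequence of flowchart 200 could be driven as follows; this again is only an illustrative assumption, reusing the computeScore and maybePrefetch names introduced above.

    // Sketch only: the step sequence of flowchart 200, reusing the
    // illustrative computeScore and maybePrefetch sketches above.
    std::future<std::vector<Page>> runFlowchart200(
            const std::vector<long>& precedingPageIds,
            std::size_t extentActualSize,
            const std::vector<long>& candidateIds) {
        // Step 204: check parameters, predict, and calculate the score.
        double score = computeScore(precedingPageIds, extentActualSize);
        // Steps 206-208: enable the decision and fetch asynchronously.
        return maybePrefetch(score, candidateIds);
    }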

FIG. 3 illustrates a flowchart 300 for performing the asynchronous pre-fetching of the one or more pages from the database 108, in accordance with an embodiment of the present disclosure. It may be noted that to explain the process steps of flowchart 300, references will be made to the system elements of FIG. 1. It may also be noted that the flowchart 300 may have fewer or more steps for performing the asynchronous pre-fetching of the one or more pages from the database 108.

The flowchart 300 initiates at step 302. Following step 302, at step 304, the I/O file lock is taken. At step 306, the LRU lock is taken. Further, at step 308, the page list lock is taken. At step 310, a list size for the pre-fetching is computed. Following step 310, at step 312, it is checked whether the database 108 should pre-fetch the list or not. If the database 108 does not pre-fetch, the process is transferred to step 334. At step 334, the page list is unlocked. Further, at step 336, the LRU list is unlocked.

If the database 108 pre-fetches the list, the process is continued at step 314. At step 314, it is checked whether the list is available or not. If the list is available, the process is transferred to step 334. If the list is not available, the process continues at step 316. Moreover, at step 316, the first list is obtained from the repository. At step 318, it is checked whether the database 108 has enough free list or not. If the database 108 does not have enough free list, the process is transferred to step 338. At step 338, the read in progress list is updated. Following step 338, at step 340, the PC lock state is set to IN_PROG. The process is transferred to step 320 after completion of step 340.

If the database 108 has enough free list, the process continues at step 320. At step 320, the page list is unlocked. At step 322, the LRU list is unlocked. At step 324, it is checked whether the list is null or not. If the list is not null, the process continues at step 326. At step 326, the page list is read. At step 328, the page list is locked. At step 330, an addition is done in the page list. At step 332, the read in progress list is updated. If the list is null, the process is transferred to step 342. At step 342, the file I/O is unlocked. Following step 342, at step 344, the number of pages read is returned.

At step 346, the page list state is set to NOT_IN_PROG. At step 348, the page list is broadcasted. Further, at step 350, the page list is unlocked. At step 352, the available list size is computed. After step 352, the process is transferred back to step 342.
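
The lock discipline of flowchart 300, in which the file I/O lock is taken first, then the LRU lock, then the page list lock, and the inner locks are released in reverse order, may be sketched as follows. The structure, the names and the omitted branch bodies are assumptions made for illustration.

    #include <cstddef>
    #include <mutex>
    #include <vector>

    struct BufferPoolLocks {
        std::mutex fileIo;    // taken at step 304
        std::mutex lru;       // taken at step 306
        std::mutex pageList;  // taken at step 308
    };

    // Sketch only: the nested lock ordering of flowchart 300, with the inner
    // locks released in reverse order before the file I/O lock is released.
    std::size_t prefetchList(BufferPoolLocks& locks,
                             std::vector<long>& pageList) {
        std::lock_guard<std::mutex> ioLock(locks.fileIo);        // step 304
        std::size_t listSize = 0;
        {
            std::lock_guard<std::mutex> lruLock(locks.lru);      // step 306
            std::lock_guard<std::mutex> plLock(locks.pageList);  // step 308
            listSize = pageList.size();                          // step 310
            // Steps 312-340 (pre-fetch decision, free-list check and lock
            // state changes) are omitted from this sketch.
        }  // page list and LRU unlocked in reverse order (steps 320/334, 322/336)
        return listSize;  // step 344: the number of pages read is returned
    }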

FIG. 4 illustrates a flowchart 400 for performing the synchronous pre-fetching of the one or more pages from the database 108, in accordance with various embodiments of the present disclosure. It may be noted that to explain the process steps of flowchart 400, references will be made to the system elements of FIG. 1. It may also be noted that the flowchart 400 may have fewer or more steps for performing the synchronous pre-fetching of the one or more pages from the database.

The flowchart 400 initiates at step 402. At step 404, the database 108 receives the request from the user in the form of the query 102. The request is received for reading data from the database 108 stored on the data storage device. The request is received in real time. At step 406, the pre-fetcher initiates the pre-fetching of the one or more pages from the database 108 stored on the data storage device into the corresponding buffer pool of the one or more buffer pools 104a based on the received request. The pre-fetching is performed by checking the plurality of parameters associated with the extent of the one or more extents associated with the corresponding buffer pool of the one or more buffer pools 104a, predicting the pre-fetching of the one or more pages from the database 108 stored on the data storage device and calculating the probability score for pre-fetching the one or more pages from the database 108 to the buffer pool of the one or more buffer pools 104a. At step 408, the pre-fetcher enables the decision for pre-fetching the one or more pages from the database 108 to the buffer pool of the one or more buffer pools 104a. The decision for pre-fetching is taken based on the calculated probability score. At step 410, the pre-fetcher fetches the one or more pages from the database 108 into the buffer pool of the one or more buffer pools 104a based on the decision. The flowchart terminates at step 412.
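
Building on the hypothetical computeScore and fetchPages sketches above, the request-driven synchronous path of flowchart 400 might reduce to the following; the threshold value and the call shape are assumptions only.

    // Sketch only: the synchronous path of flowchart 400, reusing the
    // illustrative computeScore and fetchPages sketches above.
    std::vector<Page> onQuery(const std::vector<long>& precedingPageIds,
                              std::size_t extentActualSize,
                              const std::vector<long>& requestedIds) {
        // Step 406: check parameters, predict, and calculate the score.
        double score = computeScore(precedingPageIds, extentActualSize);
        // Step 408: enable the decision based on the calculated score.
        if (score >= 0.5) {  // assumed threshold, not given in the disclosure
            // Step 410: fetch synchronously, before the call returns.
            return fetchPages(requestedIds);
        }
        return {};  // no pre-fetch; the caller reads only what it asked for
    }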

FIG. 5 illustrates another flowchart 500 for performing the synchronous pre-fetching of the one or more pages from the database 108, in accordance with an embodiment of the present disclosure. It may be noted that to explain the process steps of flowchart 500, references will be made to the system elements of FIG. 1. It may also be noted that the flowchart 500 may have fewer or more steps for performing the synchronous pre-fetching of the one or more pages from the database.

The flowchart 500 initiates at step 502. Following step 502, at step 504, the I/O file is locked. At step 506, it is checked whether the pre-fetching is enabled or not. If the pre-fetching is not enabled, then at step 530, a single read flag is set. If the pre-fetching is enabled, then at step 508, an LRU lock is taken. Following step 508, at step 510, a page list lock is taken. Further, at step 512, it is checked whether the prediction is correct or not. If the prediction is not correct, then at step 534, the single read flag is set. Following step 534, the process is transferred to step 524. If the prediction is correct, the process continues to step 514.

At step 514, it is checked if the database 108 wants to read the single data or not. If the database 108 does not want to read the single data, the process is transferred to step 536. At step 536, the size of a list which has to be read is computed. Further, at step 538, it is checked whether the size of the list is less than a current index or not. If the size of the list is less than the current index, the process is transferred to step 516; otherwise, the process is transferred to step 534. If the database 108 wants to read the single data, the process continues to step 516. At step 516, it is again checked if the database 108 wants to read the single data or not. If the database 108 does not want to read the single data, the process is transferred to step 540. At step 540, a decision is made whether the database 108 should pre-fetch the list or not. If the database 108 decides to pre-fetch the list, the process is transferred to step 534. If the database 108 does not decide to pre-fetch the list, the process is transferred to step 518. If the database 108 wants to read the single data, the process continues to step 518. At step 518, it is again checked if the database 108 wants to read the single data or not. If the database 108 does not want to read the single data, the process is transferred to step 542.

Further, at step 542, an available list size is set. Further, at step 544, it is checked whether a list header is null or not. If the list header is null, the process is transferred to step 534. If the list header is not null, the process continues at step 546. At step 546, a read in progress list is updated. Further, following step 546, the process is transferred to step 520. If the database 108 wants to read the single data, then at step 520, the page list is unlocked. Following step 520, at step 522, the LRU list is unlocked.

At step 524, it is checked whether the database 108 or the user wants to read the single data or not. If the database 108 or the user wants to read the single data, the process is continued at step 526. At step 526, the single data is read by the database 108 or the user. At step 528, the number of pages to be read is set to −1. Following step 528, at step 530, the I/O file is unlocked. At step 532, the number of pages which have been read is returned.

If the database 108 or the user does not want to read the single data, the process is transferred to step 548. Further, at step 548, a list of pages is read by the database 108 or the user. Following step 548, at step 550, it is checked whether the list is null or not. If the list is null, the process continues at step 552. At step 552, pre-allocated buffers are put back into the free list. After step 552, the process is transferred back to step 528. If the list is not null, the process is transferred to step 554. At step 554, the lock on the page list is taken. At step 556, the read list is added into the page list. Further, at step 558, the read in progress list is updated. At step 560, the page list lock state is set to IN_PROG. At step 562, the page lock information is broadcasted. Further, at step 564, the page list is unlocked. Following step 564, at step 566, the number of pages read is set to the list size plus the index. After performing step 566, the process is transferred to step 530.
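
The top-level branch of flowchart 500, which falls back to a single read whenever pre-fetching is disabled or the prediction is not correct, could be sketched as follows; the function name, the parameters and the return convention are assumptions made for illustration.

    // Sketch only: the single-read fallback of flowchart 500. When
    // pre-fetching is disabled (step 506) or the prediction is not correct
    // (step 512), the single read flag is set and one page is read.
    long readWithOptionalPrefetch(bool prefetchEnabled, bool predictionCorrect,
                                  long listSize, long index) {
        bool singleReadFlag = !prefetchEnabled || !predictionCorrect;
        if (singleReadFlag) {
            return 1;  // steps 526-532: the single datum is read and reported
        }
        // Steps 548-566: the list of pages is read; the count returned is the
        // list size plus the index (step 566). List handling is omitted here.
        return listSize + index;
    }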

FIG. 6 depicts a block diagram of a computing device 602 for practicing various embodiments of the present disclosure. The computing device 602 includes a control circuitry 604, storage 606, an input/output (“I/O”) circuitry 608 and a communications circuitry 610.

Those skilled in the art would appreciate that the computing device 602 of FIG. 6 may include one or more components which may not be shown here. The computing device 602 includes any suitable type of electronic device. Examples of the computing device 602 include, but are not limited to, a digital media player (e.g., an iPod™ made available by Apple Inc. of Cupertino, Calif.), a personal e-mail device (e.g., a Blackberry™ made available by Research in Motion of Waterloo, Ontario), a personal data assistant (“PDA”), a cellular telephone, a Smartphone, a handheld gaming device, a digital camera, a laptop computer, and a tablet computer. In another embodiment of the present disclosure, the computing device 602 can be a desktop computer.

From the perspective of this invention, the control circuitry 604 includes any processing circuitry or processor operative to control the operations and performance of the computing device 602. For example, the control circuitry 604 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, or any other application. In an embodiment, the control circuitry 604 drives a display and processes inputs received from a user interface.

From the perspective of this invention, the storage 606 includes one or more storage mediums including a hard drive, a solid state drive, flash memory, permanent memory such as ROM, any other suitable type of storage component, or any combination thereof. The storage 606 may store, for example, media data (e.g., music and video files) and application data (e.g., for implementing functions on the computing device 602).

From the perspective of this invention, the I/O circuitry 608 may be operative to convert (and encode/decode, if necessary) analog signals and other signals into digital data. In an embodiment, the I/O circuitry 608 may also convert digital data into any other type of signal, and vice-versa. For example, the I/O circuitry 608 may receive and convert physical contact inputs (e.g., from a multi-touch screen), physical movements (e.g., from a mouse or sensor), analog audio signals (e.g., from a microphone), or any other input. The digital data may be provided to and received from the control circuitry 604, the storage 606, or any other component of the computing device 602.

It may be noted that the I/O circuitry 608 is illustrated in FIG. 6 as a single component of the computing device 602; however, those skilled in the art would appreciate that several instances of the I/O circuitry 608 may be included in the computing device 602.

The computing device 602 may include any suitable interface or component for allowing a user to provide inputs to the I/O circuitry 608. The computing device 602 may include any suitable input mechanism. Examples of the input mechanism include, but are not limited to, a button, a keypad, a dial, a click wheel, and a touch screen. In an embodiment, the computing device 602 may include a capacitive sensing mechanism, or a multi-touch capacitive sensing mechanism.

In an embodiment, the computing device 602 may include specialized output circuitry associated with output devices such as, for example, one or more audio outputs. The audio output may include one or more speakers built into the computing device 602, or an audio component that may be remotely coupled to the computing device 602.

The one or more speakers can be mono speakers, stereo speakers, or a combination of both. The audio component can be a headset, headphones or ear buds that may be coupled to the computing device 602 with a wire or wirelessly.

In an embodiment, the I/O circuitry 608 may include display circuitry for providing a display visible to the user. For example, the display circuitry may include a screen (e.g., an LCD screen) that is incorporated in the computing device 602.

The display circuitry may include a movable display or a projecting system for providing a display of content on a surface remote from the computing device 602 (e.g., a video projector). In an embodiment, the display circuitry may include a coder/decoder to convert digital media data into analog signals. For example, the display circuitry may include video Codecs, audio Codecs, or any other suitable type of Codec.

The display circuitry may include display driver circuitry, circuitry for driving display drivers, or both. The display circuitry may be operative to display content. The display content can include media playback information, application screens for applications implemented on the electronic device, information regarding ongoing communications operations, information regarding incoming communications requests, or device operation screens under the direction of the control circuitry 604. Alternatively, the display circuitry may be operative to provide instructions to a remote display.

From the perspective of this invention, the communications circuitry 610 may include any suitable communications circuitry operative to connect to a communications network and to transmit communications (e.g., voice or data) from the computing device 602 to other devices within the communications network. The communications circuitry 610 may be operative to interface with the communications network using any suitable communications protocol. Examples of the communications protocol include, but are not limited to, Wi-Fi, Bluetooth®, radio frequency systems, infrared, LTE, GSM, GSM plus EDGE, CDMA, and quadband.

In an embodiment, the communications circuitry 610 may be operative to create a communications network using any suitable communications protocol. For example, the communications circuitry 610 may create a short-range communications network using a short-range communications protocol to connect to other devices. For example, the communications circuitry 610 may be operative to create a local communications network using the Bluetooth® protocol to couple the computing device 602 with a Bluetooth® headset.

It may be noted that the computing device 602 is shown to have only one instance of the communications circuitry 610; however, those skilled in the art would appreciate that the computing device 602 may include one or more additional instances of the communications circuitry 610 for simultaneously performing several communications operations using different communications networks. For example, the computing device 602 may include a first instance of the communications circuitry 610 for communicating over a cellular network, and a second instance of the communications circuitry 610 for communicating over Wi-Fi or using Bluetooth®.

In an embodiment, the same instance of the communications circuitry 610 may be operative to provide for communications over several communications networks. In an embodiment, the computing device 602 may be coupled to a host device for data transfers, synching the communications device, software or firmware updates, providing performance information to a remote source (e.g., providing riding characteristics to a remote server) or performing any other suitable operation that may require the computing device 602 to be coupled to a host device. Several computing devices may be coupled to a single host device using the host device as a server. Alternatively or additionally, the computing device 602 may be coupled to several host devices (e.g., for each of the plurality of the host devices to serve as a backup for data stored in the computing device 602).

While several possible embodiments of the invention have been described above and illustrated in some cases, they should be interpreted and understood as having been presented only by way of illustration and example, and not by way of limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.

Common forms of non-transitory computer-readable storage medium include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer may read.

The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present technology to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present technology.

Claims

1. A method for pre-fetching one or more pages from a database stored on a data storage device, the one or more pages being pre-fetched in a corresponding buffer pool of one or more buffer pools, the method comprising:

initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools, wherein the pre-fetching being done at any instant of time;
enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a calculated probability score; and
fetching the one or more pages asynchronously from the database into the buffer pool of the one or more buffer pools based on the decision.

2. The method as recited in claim 1, wherein the pre-fetching comprises:

checking a first plurality of parameters associated with an extent of one or more extents associated with the corresponding buffer pool of the one or more buffer pools, wherein the first plurality of parameters comprises a size of the extent of the one or more extents and an order associated with the extent of the one or more extents;
predicting the pre-fetching of the one or more pages from the database stored on the data storage device, wherein the predicting being performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents; and
calculating the probability score for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a second plurality of parameters, wherein the second plurality of parameters comprises a number of the page IDs in the preceding sequence of the size of the extent of the one or more extents and an actual size of the extent of the one or more extents.

3. The method as recited in claim 2, wherein the extent of the one or more extents being associated with the corresponding buffer pool of the one or more buffer pools.

4. The method as recited in claim 2, wherein the order of the extent of the one or more extents being in a sorted increasing order.

5. The method as recited in claim 1, further comprising storing the one or more pages pre-fetched asynchronously from the database into the corresponding buffer pool of the one or more buffer pools.

6. The method as recited in claim 5, wherein the buffer pool of the one or more buffer pools being dynamically selected for storing the pre-fetched one or more pages.

7. The method as recited in claim 1, wherein the fetching further comprises allowing the buffer pool of the one or more buffer pools to pre-fetch the one or more pages.

8. The method as recited in claim 1, further comprising analyzing an availability of a list, wherein the availability being analyzed for unlocking a page list and a LRU list associated with the corresponding buffer pool of the one or more buffer pools, wherein the page list and the LRU list being unlocked when the list being available and wherein the analyzing being done repetitively.

9. The method as recited in claim 1, further comprising updating a read in progress list based on a non-availability of a free list.

10. A method for pre-fetching one or more pages from a database stored on a data storage device, the one or more pages being pre-fetched in a corresponding buffer pool of one or more buffer pools, the method comprising:

receiving a request from a user in form of a query, wherein the request being received for reading data from the database stored on the data storage device and wherein the request being received in real time;
initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools based on the received request, wherein the pre-fetching being done by reading the one or more pages from the database to the buffer pool of the one or more buffer pools;
enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools, wherein the decision for pre-fetching being taken based on a calculated probability score; and
fetching the one or more pages synchronously from the database into the buffer pool of the one or more buffer pools based on the decision.

11. The method as recited in claim 10, wherein the pre-fetching comprises:

checking a plurality of parameters associated with an extent of one or more extents associated with the corresponding buffer pool of the one or more buffer pools, wherein the plurality of parameters comprises a size of the extent of the one or more extents and an order associated with the extent of the one or more extents;
predicting the pre-fetching of the one or more pages from the database stored on the data storage device, wherein the predicting being performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents; and
calculating a probability score for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools.

12. The method as recited in claim 11, wherein the extent of the one or more extents being associated with the corresponding buffer pool of the one or more buffer pools.

13. The method as recited in claim 11, wherein the order of the extent of the one or more extents being in a sorted increasing order.

14. The method as recited in claim 10, further comprising reading the data requested by the user from the corresponding buffer pool of the one or more buffer pools currently storing the data and wherein the data storage device comprises at least one of a hard disk drive and a solid state drive.

15. The method as recited in claim 10, further comprising storing the one or more pages pre-fetched from the database into the corresponding buffer pool of the one or more buffer pools.

16. The method as recited in claim 10, further comprising updating a read in progress list based on a pre-determined criterion, wherein the pre-determined criterion comprises a page list not being null.

17. The method as recited in claim 10, further comprising analyzing whether the database wants to read single data or not, wherein the analyzing being done at regular intervals of time, wherein the analyzing being done when the prediction for the pre-fetching being correct.

18. A computer-program product for pre-fetching one or more pages from a database stored on a data storage device, the one or more pages being pre-fetched in a corresponding buffer pool of one or more buffer pools, comprising:

a computer readable storage medium having a computer program stored thereon for performing the steps of:
initiating the pre-fetching of the one or more pages from the database stored on the data storage device into the corresponding buffer pool of the one or more buffer pools, wherein the pre-fetching being done at any instant of time;
enabling a decision for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a calculated probability score; and
fetching the one or more pages asynchronously from the database into the buffer pool of the one or more buffer pools based on the decision.

19. The computer-program product as recited in claim 18, wherein the pre-fetching comprises:

checking a first plurality of parameters associated with an extent of one or more extents associated with the corresponding buffer pool of the one or more buffer pools, wherein the first plurality of parameters comprises a size of the extent of the one or more extents and an order associated with the extent of the one or more extents;
predicting the pre-fetching of the one or more pages from the database stored on the data storage device, wherein the predicting being performed by checking a preceding sequence of page IDs in the corresponding extent of the one or more extents; and
calculating the probability score for pre-fetching the one or more pages from the database to the buffer pool of the one or more buffer pools based on a second plurality of parameters, wherein the second plurality of parameters comprises a number of the page IDs in the preceding sequence of the size of the extent of the one or more extents and an actual size of the extent of the one or more extents.

20. The computer-program product as recited in claim 18, further comprising storing the one or more pages pre-fetched asynchronously from the database into the corresponding buffer pool of the one or more buffer pools.

Patent History
Publication number: 20160055257
Type: Application
Filed: Aug 18, 2015
Publication Date: Feb 25, 2016
Inventor: Sachin SINHA (Bangalore)
Application Number: 14/829,504
Classifications
International Classification: G06F 17/30 (20060101);