Resource Allocation Patents (Class 718/104)
-
Patent number: 11138038
Abstract: The subject technology determines usage history metadata. The subject technology predicts a size value indicating an amount of computing resources to request for executing a set of queries based at least in part on the usage history metadata. The subject technology determines, during a prefetch window of time within a first period of time, a current size of a freepool of computing resources. The subject technology, in response to the current size of the freepool of computing resources being smaller than the predicted size value, sends a request for additional computing resources to include in the freepool of computing resources. The subject technology receives an indication that the request for additional computing resources was granted. The subject technology performs an operation to include the additional computing resources in the freepool of computing resources.
Type: Grant
Filed: February 11, 2021
Date of Patent: October 5, 2021
Assignee: Snowflake Inc.
Inventors: Qiming Jiang, Orestis Kostakis, Abdul Munir, Prayag Chandran Nirmala, Jeffrey Rosen
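A minimal sketch of the freepool-prefetch idea described above: predict a size from usage history, compare it against the current freepool, and top the pool up if it is short. All names (`predict_size`, `prefetch`, `request_resources`) and the peak-usage predictor are hypothetical illustrations, not Snowflake's actual API.

```python
def predict_size(usage_history):
    """Hypothetical predictor: provision for the peak usage seen in history."""
    return max(usage_history) if usage_history else 0

def prefetch(freepool, usage_history, request_resources):
    """During the prefetch window, top the freepool up to the predicted size."""
    predicted = predict_size(usage_history)
    if len(freepool) < predicted:
        granted = request_resources(predicted - len(freepool))  # request extras
        freepool.extend(granted)  # include granted resources in the freepool
    return freepool
```

Here `request_resources` stands in for the grant/acknowledge exchange in the abstract; a real system would handle partial or denied grants.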
-
Patent number: 11140031
Abstract: An example controller device that manages a plurality of network devices includes one or more processing units implemented in circuitry and configured to: obtain device-level configuration information from a network device of the plurality of network devices at a first time; determine one or more out-of-band (OOB) configuration changes between the device-level configuration information from the network device and previous device-level intent configuration information compiled from one or more intents maintained by the controller device to manage the plurality of network devices; and store the one or more OOB configuration changes associated with the network device in incremental deltas.
Type: Grant
Filed: July 26, 2019
Date of Patent: October 5, 2021
Assignee: Juniper Networks, Inc.
Inventors: Jayanthi R, Rahamath Sharif, Chandrasekhar A
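The out-of-band change detection above amounts to diffing the device's running configuration against the configuration compiled from controller intents. A sketch under the simplifying assumption that both configurations are flat key-value maps (real device configs are hierarchical):

```python
def oob_deltas(device_config, intent_config):
    """Return out-of-band changes: keys where the device's running config
    diverges from the config compiled from controller intents.
    Keys removed on the device are reported with value None."""
    delta = {k: v for k, v in device_config.items()
             if intent_config.get(k) != v}
    delta.update({k: None for k in intent_config if k not in device_config})
    return delta
```

Each call's result could be appended to a per-device history list to realize the "incremental deltas" storage the abstract describes.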
-
Patent number: 11138199
Abstract: Methods, systems, and computer-readable storage media for receiving workload data, the workload data including queries executed within a distributed database system over a period of time, defining windows, each window including a time slice within the period of time, generating a hypergraph for each window, each hypergraph including vertices and hyperedges and being generated based on a sub-set of queries and weight functions, partitioning each hypergraph into blocks, for each shard in a set of shards, determining a set of ratings, each rating in the set of ratings being based on a weight of a respective shard with respect to a respective block, and assigning each shard in the set of shards to a block in the set of blocks based on the set of ratings for the respective shard, the shard being assigned to the block for which a maximum rating is provided in the set of ratings.
Type: Grant
Filed: September 30, 2019
Date of Patent: October 5, 2021
Assignee: SAP SE
Inventor: Patrick Firnkes
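The final assignment step above is a simple argmax over per-block ratings. A sketch, with `weight` standing in for the rating function (the hypergraph construction and partitioning that produce the blocks are out of scope here):

```python
def assign_shards(shards, blocks, weight):
    """Assign each shard to the block for which its rating is maximal.
    weight(shard, block) plays the role of the claimed rating function."""
    assignment = {}
    for shard in shards:
        ratings = {block: weight(shard, block) for block in blocks}
        assignment[shard] = max(ratings, key=ratings.get)  # max-rating block
    return assignment
```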
-
Patent number: 11132366
Abstract: A query is received at a database execution engine. A query plan including a sub plan structured as a directed acyclic graph is determined by the database execution engine. A set of trees characterizing the sub plan is generated by the database execution engine and using the directed acyclic graph. The set of trees include a first tree and a second tree, the first tree including at least one leaf characterizing a memory store operation and the second tree including a root characterizing a memory access operation. The set of trees are stored for use in execution of the query at run time. Related systems, methods, and articles of manufacture are also described.
Type: Grant
Filed: April 1, 2019
Date of Patent: September 28, 2021
Assignee: SAP SE
Inventors: Manuel Mayr, Till Merker
-
Patent number: 11133989
Abstract: Described are methods and systems for automated remediation for networked environments. An example method includes receiving a definition of actions associated with each remediation of issues for a network fleet. The method can further include automatically converting each definition into automated flows in a pipeline language for execution across the network; automatically determining that there is variance between an observed state of the network fleet and a desired state of the network fleet that is causing at least one issue in the network; automatically executing the automated flows to take actions for automatically remediating the variance across the entire network fleet; and receiving feedback after the automatic execution. The automated flows may include flows for alerting and for taking one or more actions across the entire network fleet.
Type: Grant
Filed: December 20, 2019
Date of Patent: September 28, 2021
Assignee: Shoreline Software, Inc.
Inventors: Anurag Gupta, Charles Ian Ormsby Cary
-
Patent number: 11132380
Abstract: Example resource management systems and methods are described. In one implementation, a resource manager is configured to manage data processing tasks associated with multiple data elements. An execution platform is coupled to the resource manager and includes multiple execution nodes configured to store data retrieved from multiple remote storage devices. Each execution node includes a cache and a processor, where the cache and processor are independent of the remote storage devices. A metadata manager is configured to access metadata associated with at least a portion of the multiple data elements.
Type: Grant
Filed: December 4, 2020
Date of Patent: September 28, 2021
Assignee: SNOWFLAKE INC.
Inventors: Benoit Dageville, Thierry Cruanes, Marcin Zukowski
-
Patent number: 11126541
Abstract: Managing resources used during a development pipeline. A method of the disclosure includes analyzing historical resource usage of an application development system during different stages of a development pipeline for an application. The application development system includes a set of computing resources. The method also includes determining a current resource usage for a current stage of the development pipeline for the application. The method further includes determining an estimated resource usage for a later stage of the development pipeline for the application based on one or more of the current resource usage or the historical resource usage. The method further includes configuring the set of computing resources of the application development system for the later stage of the development pipeline based on the estimated resource usage.
Type: Grant
Filed: May 24, 2018
Date of Patent: September 21, 2021
Assignee: Red Hat, Inc.
Inventors: Benjamin Michael Parees, Clayton Palmer Coleman, Derek Wayne Carr
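One way to combine "current resource usage or historical resource usage" for a later pipeline stage is a simple blend. A sketch; the blending weight `alpha` and the per-stage history dict are assumptions for illustration, not from the patent:

```python
def estimate_usage(historical, current, stage, alpha=0.5):
    """Estimate resource usage for a later pipeline stage by blending
    the current stage's usage with the historical usage recorded for
    that later stage. Falls back to current usage with no history."""
    hist = historical.get(stage)
    if hist is None:
        return current
    return alpha * current + (1 - alpha) * hist  # weighted blend
```

The estimate would then drive how the development system's computing resources are configured before the later stage starts.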
-
Patent number: 11120432
Abstract: A system includes a database, a memory, and a processor. The database stores an account that includes a first, second, and third subaccount. The memory stores a profile specifying a level of anonymization and a level of account access. The processor receives a request for a transaction. The request is associated with the profile. In response, the processor determines a set of subaccounts for the transaction including the first subaccount and the second subaccount. Determining the set of subaccounts for the first transaction includes determining that the profile permits access to the first, second, and third subaccounts, and that the transaction costs associated with the transaction are minimized by using the first and second subaccounts to perform the transaction. The processor additionally generates a virtual account from the set of subaccounts, anonymizes, based on the level of anonymization, the virtual account, and performs the transaction using the anonymized virtual account.
Type: Grant
Filed: September 30, 2019
Date of Patent: September 14, 2021
Assignee: Bank of America Corporation
Inventors: Manu Jacob Kurian, Shiumui Lau Cheng
-
Patent number: 11113171
Abstract: Techniques are provided for adaptive resource allocation for workloads with early-convergence detection. One method comprises obtaining a dynamic system model based on a relation between an amount of a resource for multiple workloads and a predefined service metric; obtaining an instantaneous value of the predefined service metric; obtaining an adjustment to the amount of the resource for a given workload based on a difference between the instantaneous value and a target value of the predefined service metric; determining whether the given workload has converged based on an evaluation of one or more predefined convergence criteria; and removing the given workload from a controlled workload list when the given workload has converged. The given workload can be reinserted in the controlled workload list when it fails to satisfy a predefined divergence threshold.
Type: Grant
Filed: August 29, 2019
Date of Patent: September 7, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Tiago Salviano Calmon, Eduardo Vera Sousa, Vinícius Michel Gottin
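The adjust-then-drop loop above resembles a proportional controller that stops controlling workloads once they converge. A sketch; the proportional `gain`, the tolerance `eps`, and the workload representation are illustrative assumptions:

```python
def control_step(workloads, target, gain=0.1, eps=0.01):
    """One control iteration: adjust each workload's resource amount in
    proportion to its error from the target metric, and drop workloads
    whose error is within eps (converged) from the controlled list.
    Each workload is a dict with an 'amount' and a 'metric' callable."""
    still_controlled = []
    for w in workloads:
        error = target - w['metric'](w['amount'])
        if abs(error) <= eps:        # converged: stop controlling it
            continue
        w['amount'] += gain * error  # proportional adjustment
        still_controlled.append(w)
    return still_controlled
```

A diverging workload would be reinserted into the list by the complementary divergence check the abstract mentions.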
-
Patent number: 11113111
Abstract: Systems, methods, and apparatus for machine learning task compartmentalization and classification are disclosed. An example method comprises receiving, from a first computing device, user defined parameters associated with at least one user, receiving, from a second computing device different from the first computing device, auxiliary data associated with the at least one user, generating, by a third computing device, at least one work profile based on the received user defined parameters and auxiliary data, and determining, by the third computing device, an affinity between a task component and the generated at least one work profile.
Type: Grant
Filed: July 8, 2019
Date of Patent: September 7, 2021
Assignee: Bank of America Corporation
Inventor: Manu Kurian
-
Patent number: 11106540
Abstract: A proxy server receives requests from a client computer system and generates corresponding sets of database commands that are capable of fulfilling the requests when submitted to a database server. The proxy server may repeat processing associated with a particular request more than once under different operational conditions in order to improve future performance. In some examples, the proxy server submits a particular database command sequence to the database server using various operational parameters, and measures the performance of each submission to identify a particular set of operational parameters to be applied to the database server with future submissions. In another example, the proxy server determines a number of alternative command sequences that fulfill a particular request, and measures the performance of each of the alternative command sequences to determine how command sequences are generated for future requests.
Type: Grant
Filed: April 3, 2017
Date of Patent: August 31, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Brian Welcker, Dennis Tighe, Matthew Walters
-
Patent number: 11102228
Abstract: One or more computing devices, systems, and/or methods for determining thresholds are provided. For example, first activity associated with a plurality of client devices may be detected. A first activity distribution associated with the plurality of client devices may be determined based upon the first activity. A plurality of peaks of the first activity distribution may be identified. A plurality of gradients associated with pairs of peaks of the plurality of peaks may be determined. A target peak of the plurality of peaks may be determined based upon the plurality of gradients. A threshold amount of activity associated with the first activity may be determined based upon the target peak. A first set of activity associated with a first client device may be detected. A fraudulence label associated with the first client device may be determined based upon the first set of activity and/or the threshold amount of activity.
Type: Grant
Filed: May 30, 2019
Date of Patent: August 24, 2021
Assignee: Verizon Media Inc.
Inventors: Robert Jason Harris, Ruichen Wang, Helen W. Xie
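A rough sketch of the peaks-and-gradients pipeline above. The abstract does not specify how the target peak is chosen from the gradients, so the rule here (pick the peak pair with the steepest gradient and threshold at the later peak) is a guess for illustration only:

```python
def find_peaks(dist):
    """Indices whose value is strictly greater than both neighbors."""
    return [i for i in range(1, len(dist) - 1)
            if dist[i] > dist[i - 1] and dist[i] > dist[i + 1]]

def activity_threshold(dist):
    """Compute gradients between adjacent peaks of the activity
    distribution and return the activity level (index) of the peak
    ending the steepest pair, used as the fraud threshold."""
    peaks = find_peaks(dist)
    if len(peaks) < 2:
        return peaks[0] if peaks else None
    grads = [(abs(dist[b] - dist[a]) / (b - a), b)
             for a, b in zip(peaks, peaks[1:])]
    return max(grads)[1]  # target peak's activity level
```

Devices whose observed activity exceeds this threshold could then be labeled for fraud review.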
-
Patent number: 11099877
Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: predictively provisioning, by one or more processor, cloud computing resources of a cloud computing environment for at least one virtual machine; and initializing, by the one or more processor, the at least one virtual machine with the provisioned cloud computing resources of the cloud computing environment. In one embodiment, the predictively provisioning may include: receiving historical utilization information of multiple virtual machines of the cloud computing environment, the multiple virtual machines having similar characteristics to the at least one virtual machine; and determining the cloud computing resources for the at least one virtual machine using the historical utilization information of the multiple virtual machines.
Type: Grant
Filed: June 28, 2019
Date of Patent: August 24, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Zhong Qi Feng, Jiang Tao Li, Yi Bin Wang, Chao Yu, Qing Feng Zhang
-
Patent number: 11099897
Abstract: A virtual machine memory overcommit system includes an initialization memory, a device memory, at least one processor in communication with the initialization memory and the device memory, a guest operating system (OS) including a device driver, and a hypervisor executing on the at least one processor. The hypervisor is configured to expose the initialization memory to the guest OS of a virtual machine, initialize the guest OS, and expose the device memory to the guest OS. The device driver is configured to query an amount of memory available from the device memory and report the amount of memory available to the guest OS.
Type: Grant
Filed: September 16, 2019
Date of Patent: August 24, 2021
Assignee: Red Hat, Inc.
Inventors: David Hildenbrand, Michael Tsirkin
-
Patent number: 11100604
Abstract: Systems, apparatuses, and methods for scheduling jobs for multiple frame-based applications are disclosed. A computing system executes a plurality of frame-based applications for generating pixels for display. The applications convey signals to a scheduler to notify the scheduler of various events within a given frame being rendered. The scheduler adjusts the priorities of applications based on the signals received from the applications. The scheduler attempts to adjust priorities of applications and schedule jobs from these applications so as to minimize the perceived latency of each application. When an application has enqueued the last job for the current frame, the scheduler raises the priority of the application to high. This results in the scheduler attempting to schedule all remaining jobs for the application back-to-back. Once all jobs of the application have been completed, the priority of the application is reduced, permitting jobs of other applications to be executed.
Type: Grant
Filed: January 31, 2019
Date of Patent: August 24, 2021
Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
Inventors: Jeffrey Gongxian Cheng, Ahmed M. Abdelkhalek, Yinan Jiang, Xingsheng Wan, Anthony Asaro, David Martinez Nieto
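The last-job priority boost described above can be sketched with a priority heap: when an application enqueues the last job of its frame, its already-queued jobs are promoted so they run back-to-back. Class and method names are illustrative, not AMD's scheduler interface:

```python
import heapq

class FrameScheduler:
    """Sketch of the claimed priority adjustment for frame-based apps."""
    HIGH, NORMAL = 0, 1  # lower value is scheduled first

    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, app, job, last_of_frame=False):
        if last_of_frame:
            # Promote everything already queued for this app to HIGH.
            self._heap = [(self.HIGH if a == app else p, s, a, j)
                          for p, s, a, j in self._heap]
            heapq.heapify(self._heap)
            prio = self.HIGH
        else:
            prio = self.NORMAL
        heapq.heappush(self._heap, (prio, self._seq, app, job))
        self._seq += 1  # sequence number keeps FIFO order within a priority

    def run_all(self):
        order = []
        while self._heap:
            _, _, app, job = heapq.heappop(self._heap)
            order.append((app, job))
        return order
```

After a frame completes, a real scheduler would demote the application again so other applications' jobs can run, as the abstract notes.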
-
Systems and methods for orchestrating seamless, distributed, and stateful high performance computing
Patent number: 11099893
Abstract: An orchestration system may provide distributed and seamless stateful high performance computing for performance critical workflows and data across geographically distributed compute nodes. The system may receive a task with different jobs that operate on a particular dataset, may determine a set of policies that define execution priorities for the jobs, and may determine a current state of compute nodes that are distributed across different compute sites. The system may distribute the jobs across a selected set of the compute nodes in response to the current state of the set of compute nodes satisfying more of the execution priorities than the current state of other compute nodes. The system may produce task output based on modifications made to the particular dataset as each compute node of the set of compute nodes executes a different job of the plurality of jobs.
Type: Grant
Filed: April 5, 2021
Date of Patent: August 24, 2021
Assignee: CTRL IQ, Inc.
Inventors: Gregory Kurtzer, John Frey, Ian Kaneshiro, Robert Adolph, Cedric Clerget
-
Patent number: 11093268
Abstract: Embodiments for aggregated information calculation and injection for application containers by one or more processors. Prior to commencing execution of an application inside a working container, a temporary container having an equivalent application template or container template as the working container is started. A first instance of the application is instantiated and executed from inside the temporary container. Relevant information, obtained during the execution of the first application instance from inside the temporary container, and relevant information from a host associated with the application, is extracted. The relevant information from the host and the temporary container is aggregated. A second instance of the application is executed and the aggregated information is injected into the working container.
Type: Grant
Filed: January 15, 2020
Date of Patent: August 17, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Lior Aronovich, Shibin I. Ma
-
Patent number: 11093291
Abstract: A resource assignment method, and a recording medium and a distributed processing device applying the same are provided. The resource assignment method includes: when information regarding a plurality of tasks is received from a plurality of first nodes, calculating a size of a resource necessary for each of the received plurality of tasks; and when information regarding an available resource is received from a second node, assigning one of the plurality of tasks to the available resource of the second node, based on the calculated size of the resource necessary for each task.
Type: Grant
Filed: February 13, 2018
Date of Patent: August 17, 2021
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Jae Hoon An, Jae Gi Son, Ji Woo Kang
-
Patent number: 11088810
Abstract: The present invention provides a resource management method and a corresponding system. The resource management method includes: determining whether the degree of variation in the working state of a communication system will change the communication system's resource management information and, if so, re-collecting the resource management information, wherein the resource management information includes the state of each node in the communication system, the interference state among links, and service stream information relating to each node; and determining the resource allocation strategy of the communication system according to the resource management information.
Type: Grant
Filed: May 10, 2019
Date of Patent: August 10, 2021
Assignee: SONY CORPORATION
Inventor: Xin Guo
-
Patent number: 11086917
Abstract: Transient computing clusters can be temporarily provisioned in cloud-based infrastructure to run data processing tasks. Such tasks may be run by services operating in the clusters that consume and produce data including operational metadata. Techniques are introduced for tracking data lineage across multiple clusters, including transient computing clusters, based on the operational metadata. In some embodiments, operational metadata is extracted from the transient computing clusters and aggregated at a metadata system for analysis. Based on the analysis of the metadata, operations can be summarized at a cluster level even if the transient computing cluster no longer exists. Further relationships between workflows, such as dependencies or redundancies, can be identified and utilized to optimize the provisioning of computing clusters and tasks performed by the computing clusters.
Type: Grant
Filed: February 26, 2020
Date of Patent: August 10, 2021
Assignee: Cloudera, Inc.
Inventors: Sudhanshu Arora, Mark Donsky, Guang Yao Leng, Naren Koneru, Chang She, Vikas Singh, Himabindu Vuppula
-
Patent number: 11087378
Abstract: Systems and methods for reserving products, events, or services that have limited availability are provided. A product reservation system may be used to announce the availability of limited availability products. The announcements may be at times unknown to consumers. Consumers may participate in a product drawing session to submit one or more reservation requests for limited availability products being offered during the session.
Type: Grant
Filed: March 9, 2020
Date of Patent: August 10, 2021
Assignee: NIKE, Inc.
Inventors: Christopher Andon, Hien Tommy Pham, Chase Louis Taylor
-
Patent number: 11088991
Abstract: A firewall device comprises a storage unit that stores therein one or more rules related to blocking a request for each of a plurality of WEB servers independently of the rule for another WEB server; a feature-amount calculating unit that calculates a feature amount for each of the WEB servers based on a number of detections with regard to each index in each of the WEB servers; and a rule updating unit that updates a rule stored in the storage unit for each of the WEB servers based on the feature amount calculated by the feature-amount calculating unit.
Type: Grant
Filed: November 30, 2018
Date of Patent: August 10, 2021
Assignee: CYBER SECURITY CLOUD, INC.
Inventors: Yoji Watanabe, Yusuke Sasaki
-
Patent number: 11086633
Abstract: A programmable hardware system for machine learning (ML) includes a core and an inference engine. The core receives commands from a host. The commands are in a first instruction set architecture (ISA) format. The core divides the commands into a first set for performance-critical operations, in the first ISA format, and a second set of performance non-critical operations, in the first ISA format. The core executes the second set to perform the performance non-critical operations of the ML operations and streams the first set to the inference engine. The inference engine generates a stream of the first set of commands in a second ISA format based on the first set of commands in the first ISA format. The first set of commands in the second ISA format programs components within the inference engine to execute the ML operations to infer data.
Type: Grant
Filed: December 19, 2018
Date of Patent: August 10, 2021
Assignee: Marvell Asia Pte, Ltd.
Inventors: Avinash Sodani, Ulf Hanebutte, Senad Durakovic, Hamid Reza Ghasemi, Chia-Hsin Chen
-
Patent number: 11080207
Abstract: The present invention is generally directed to a caching framework that provides a common abstraction across one or more big data engines, comprising a cache filesystem including a cache filesystem interface used by applications to access cloud storage through a cache subsystem, the cache filesystem interface in communication with a big data engine extension and a cache manager; the big data engine extension, providing cluster information to the cache filesystem and working with the cache filesystem interface to determine which nodes cache which part of a file; and a cache manager for maintaining metadata about the cache, the metadata comprising the status of blocks for each file. The invention may provide a common abstraction across big data engines that does not require changes to the setup of infrastructure or user workloads, allows sharing of cached data, caches only the parts of files that are required, and can process columnar formats.
Type: Grant
Filed: June 7, 2017
Date of Patent: August 3, 2021
Assignee: Qubole, Inc.
Inventors: Joydeep Sen Sarma, Rajat Venkatesh, Shubham Tagra
-
Patent number: 11079984
Abstract: A system is disclosed. The system includes at least one physical memory device having a plurality of task queues and a processor to receive print data including a plurality of sheetside images, process one or more of the plurality of sheetside images in parallel via nested task queues, the nested task queues including a first task queue associated with a first set of processing threads and a second set of task queues, each associated with a second set of processing threads, each task queue in the second set of task queues corresponding to a thread within the first set of processing threads, wherein execution of tasks via the second set of task queues has a higher priority designation than execution of tasks via the first set of processing threads, which are in the first task queue.
Type: Grant
Filed: September 30, 2019
Date of Patent: August 3, 2021
Assignee: Ricoh Company, Ltd.
Inventors: Walter F. Kailey, Stephen Mandry
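The nested-queue priority rule above (second-level queues preempt further first-level work) can be sketched with a first-level queue whose tasks each own a sub-queue. Names and the single-threaded dispatch loop are illustrative simplifications of the claimed multi-threaded design:

```python
from collections import deque

class NestedQueues:
    """Sketch of nested task queues: each first-level task owns a
    second-level sub-queue whose tasks have the higher priority."""

    def __init__(self):
        self.first = deque()   # first task queue
        self.second = {}       # first-level task -> its sub-queue

    def add(self, task):
        self.first.append(task)
        self.second[task] = deque()

    def add_sub(self, task, subtask):
        self.second[task].append(subtask)

    def next_task(self):
        # Second-level queues carry the higher priority designation,
        # so drain any non-empty sub-queue before first-level work.
        for q in self.second.values():
            if q:
                return q.popleft()
        return self.first.popleft() if self.first else None
```

In the printing context of the patent, first-level tasks would be sheetside images and sub-tasks the finer-grained work spawned while processing each image.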
-
Patent number: 11074254
Abstract: A performance management method, system, and non-transitory computer readable medium for a service for database as a service (DBaaS) in a cloud computing environment, include a receiving and comparing circuit configured to receive a service request from a user and compare the received service request to at least one prior received service request, a similarity calculating circuit configured to calculate a similarity between the service request and the at least one prior received service request based on a requirement that the service request places on the DBaaS, and a data verifying circuit configured to verify whether information within the database of the DBaaS has changed since an identical prior received service request based on the receiving and comparing circuit identifying the identical prior received service request.
Type: Grant
Filed: March 23, 2016
Date of Patent: July 27, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ramya Hari Hara Prasad, Girish Sundaram
-
Patent number: 11074100
Abstract: Systems, apparatuses, and methods related to arithmetic and logical operations in a multi-user network are described. An agent may be provisioned with a pool of shared computing resources that includes circuitry to perform operations on data (e.g., one or more posit bit strings) in a multi-user network. The circuitry can perform operations on data to convert the data between one or more formats, such as floating-point and/or universal number (e.g., posit) formats, and can further perform arithmetic and/or logical operations on the converted data. The agent may receive a parameter corresponding to performance of an arithmetic operation and/or a logical operation using one or more posit bit strings and cause performance of the arithmetic operation and/or the logical operation using the one or more posit bit strings.
Type: Grant
Filed: February 27, 2019
Date of Patent: July 27, 2021
Assignee: Micron Technology, Inc.
Inventor: Vijay S. Ramesh
-
Patent number: 11068317
Abstract: Provided is an information processing system that can quickly start an application program even when a computer resource pool is depleted, and can complete without reducing the quality of service even for an application that ends in a short amount of time. A management server manages computer resources and their usage state. When no computer resources are available and the expected execution time of the application is less than or equal to a reference value, at the time of receiving the degree of parallelism and the expected execution time of the application from a user, the management server selects a computer resource executing an application having a relatively small degree of deviation from its initial expected execution time, even in a case where a plurality of applications are temporarily executed simultaneously, and arranges and executes the application.
Type: Grant
Filed: June 15, 2018
Date of Patent: July 20, 2021
Assignee: HITACHI, LTD.
Inventors: Yoshiki Matsuura, Izumi Mizutani
-
Patent number: 11068319
Abstract: A first data accessor acquires a lock associated with a critical section. The first data accessor initiates a help session associated with a first operation of the critical section. In the help session, a second data accessor (which has not acquired the lock) performs one or more sub-operations of the first operation. The first data accessor releases the lock after at least the first operation has been completed.
Type: Grant
Filed: October 18, 2018
Date of Patent: July 20, 2021
Assignee: Oracle International Corporation
Inventors: Yosef Lev, Victor M. Luchangco, David Dice, Alex Kogan, Timothy L. Harris, Pantea Zardoshti
-
Patent number: 11070872
Abstract: Provided are a device and a method capable of efficiently performing a synthesis process of broadcast reception data and network reception data. Broadcast reception data received by a receiving device via a communication unit is set as a media source object corresponding to a processing object of an application executed by the receiving device under an application programming interface (API). The application executes a synthesis process of the broadcast reception data and network reception data received via a network as processing for the media source object. The application obtains a time offset corresponding to a time difference between an application time axis and a broadcast time axis on the basis of an API application process to execute a high-accuracy and low-delay data synthesis process.
Type: Grant
Filed: October 14, 2015
Date of Patent: July 20, 2021
Assignee: Saturn Licensing LLC
Inventors: Tatsuya Igarashi, Norifumi Kikkawa, Yoshiharu Dewa, Yasuaki Yamagishi
-
Patent number: 11068359
Abstract: Methods and systems for restoring data from a target device are described. According to some embodiments, the method receives a first set of data packets for restore, where the first set of data packets includes a multiplicity of data chunks. The method further captures footprints of the first set of data packets in a cache disk array. In response to receiving an acknowledgement from the cache disk array indicating the footprints of the first set of data packets have been captured, the method pushes each data chunk of the first set of data packets to a construction container for reconstruction of backup data. In response to receiving an acknowledgement from the construction container indicating the data chunk is successfully pushed, the method flushes the respective footprint of the data chunk from the cache disk array.
Type: Grant
Filed: June 26, 2019
Date of Patent: July 20, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Mahesh Reddy A V, Chetan Battal, Mahantesh Ambaljeri, Swaroop Shankar DH
-
Patent number: 11061969
Abstract: A service provider may provide a plurality of companion instances associated with a mobile device in order to facilitate operation of the mobile device. The companion instances and the mobile device may be configured to execute various components of one or more applications. Furthermore, the companion instances may execute various operations on behalf of the mobile device. The operations may be directed to particular companion instances of the plurality of companion instances based on various factors, such as an ability of the particular companion instances to perform the operations.
Type: Grant
Filed: June 29, 2015
Date of Patent: July 13, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Khawaja Salman Shams, Marco Argenti
-
Patent number: 11064222
Abstract: The present disclosure relates to a method for delivering cloud Digital Video Recording (cloud DVR) content by a media server (101). The method comprises receiving one or more inputs (205) related to cloud DVR content from one or more sources for delivering the cloud DVR content to a mobile device (104) connected to a Mobile Edge Computing (MEC) node (102). Further, the method comprises generating a plurality of dynamic vectors (206), using the received one or more inputs (205). Furthermore, the method comprises switching of a recording of the cloud DVR content between the MEC node (102) or a cloud server (103), based on the plurality of vectors and a prediction of a load on the cloud server (103). Thereafter, the method comprises caching the cloud DVR content in one of the MEC node (102) or the cloud server (103) based on the switching to deliver the cloud DVR content to the plurality of mobile devices.
Type: Grant
Filed: March 11, 2020
Date of Patent: July 13, 2021
Assignee: Wipro Limited
Inventors: Gowrishankar Subramaniam Natarajan, Jagan Mohan Gorti
-
Patent number: 11061896
Abstract: A system selects multiple operators in a query graph by determining whether a corresponding value satisfies a threshold for each operator. The system sorts each selected operator in an ascending order based on a corresponding maximum thread capacity and determines an average number of threads of control based on available threads and the selected operators. The system allocates an initial number of threads to an initial selected operator in the ascending order, the initial number based on a minimum of the average number of threads and corresponding maximum thread capacity. The system determines a revised average number of threads based on the remaining number of available threads and the remaining number of selected operators, and allocates a next number of threads to a next selected operator in the ascending order, the next number based on a minimum of the revised average number of threads and corresponding maximum thread capacity.
Type: Grant
Filed: July 31, 2018
Date of Patent: July 13, 2021
Assignee: salesforce.com, inc.
Inventor: Seth White
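The allocation loop this abstract describes (sort by maximum thread capacity, then repeatedly grant each operator the minimum of a running average and its capacity) is a water-filling scheme. A minimal sketch follows; the function and variable names are illustrative, not taken from the patent:

```python
def allocate_threads(operators, available_threads):
    """Allocate threads across selected operators, ascending by capacity.

    operators: list of (name, max_thread_capacity) pairs that already
    passed the threshold test. Returns {name: allocated_thread_count}.
    """
    # Sort selected operators by maximum thread capacity, ascending.
    ops = sorted(operators, key=lambda op: op[1])
    allocation = {}
    remaining_threads = available_threads
    remaining_ops = len(ops)
    for name, max_capacity in ops:
        # Average share of the threads still unassigned.
        average = remaining_threads // remaining_ops
        # An operator never receives more than its capacity; any surplus
        # raises the revised average for the operators that follow.
        granted = min(average, max_capacity)
        allocation[name] = granted
        remaining_threads -= granted
        remaining_ops -= 1
    return allocation
```

Processing operators in ascending capacity order is what lets capacity-limited operators return their unused share to the larger operators later in the sequence.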
-
Patent number: 11061728
Abstract: A system and method for allocating memory to a heterogeneous address space includes identifying, by an operating system, at least one superset feature from an application configured to be executed on a host device. The address space associated with the application includes a plurality of supersets, and the operating system allocates the memory to each of the plurality of supersets from a non-volatile memory or a volatile memory based upon the at least one superset feature.
Type: Grant
Filed: February 12, 2019
Date of Patent: July 13, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Viacheslav Dubeyko, Luis Vitorio Cargnini
-
Patent number: 11061740
Abstract: A method for enhancing a workload manager for a computer system includes sampling and storing usage of a resource of the computer system as resource usage values, comparing said resource usage values with predetermined performance goal values, assigning a time-stamped priority value to an application that is running based on at least one of the performance goal values by the workload manager, retrieving a portion of the resource usage values and a related portion of the performance goal values for the application, identifying a future workload demand value by applying a time-series analysis algorithm to the resource usage values and the performance goal values for the application resulting in workload demand time frames and related amplitudes of the workload demand time frames, and adjusting a dispatch priority value for the application by setting a minimum dispatch priority for the application based on the future workload demand value.
Type: Grant
Filed: August 13, 2018
Date of Patent: July 13, 2021
Assignee: International Business Machines Corporation
Inventors: Tobias Orth, Dieter Wellerdiek, Norman C. Böwing, Qais Noorshams
-
Patent number: 11061733
Abstract: A digital computing system is configured to control access to an accelerator. The system includes a processor that executes an application, and an accelerator that performs a data processing operation in response to an access request output from the application. The system further includes a virtual accelerator switchboard (VAS) to determine an availability of at least one shared credit corresponding to the accelerator and assign an available shared credit to the application. The application submits a request to access the accelerator using the assigned shared credit.
Type: Grant
Filed: August 30, 2018
Date of Patent: July 13, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Brian F. Veale, Bruce Mealey, Andre L. Albot, Nick Stilwell
-
Patent number: 11055146
Abstract: A distribution process system includes a first terminal configured to, in accordance with a change relating to its processing load, transmit first information relating to the processing load of the first terminal to a second terminal that reports messages about its own processing load to a management device more frequently than the first terminal does. The second terminal is configured to, in response to receiving the first information, transmit to the management device a first message relating to the processing load of the second terminal together with the first information. The management device is configured to manage a load state of each of the first terminal and the second terminal, and to update the load state of the first terminal in accordance with the first information in response to receiving the first information.
Type: Grant
Filed: March 6, 2019
Date of Patent: July 6, 2021
Assignee: FUJITSU LIMITED
Inventors: Takashi Enami, Masanori Yamazaki, Nami Nagata, Hitoshi Ueno
-
Patent number: 11055148
Abstract: A method for providing overload protection to a real-time computational engine configured to compute a plurality of values corresponding to a plurality of entities includes, when an overload protector is in an overload state: identifying one or more entities in a normal status and having high corresponding load contributions; downgrading the identified one or more entities; in response to detecting that a load level is below a low threshold, transitioning the overload protector to a recovery state and beginning a cool down period; and, when the overload protector is in the recovery state: upgrading a first group of entities of the one or more downgraded entities to a normal status; determining whether the cool down period has ended; and in response to determining that the cool down period has ended: upgrading all downgraded entities to the normal status.
Type: Grant
Filed: October 24, 2017
Date of Patent: July 6, 2021
Inventor: Vitaly Y. Barinov
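The overload/recovery cycle above is a small state machine. The sketch below compresses it (one entity downgraded per step, all entities restored at once when the cool-down ends, rather than the staged "first group" upgrade); thresholds, the cool-down length, and all names are assumptions, not from the patent:

```python
from enum import Enum

class State(Enum):
    NORMAL = "normal"
    OVERLOAD = "overload"
    RECOVERY = "recovery"

class OverloadProtector:
    """Simplified sketch of the overload-protection state machine."""

    def __init__(self, low_threshold, high_threshold, cooldown_ticks):
        self.low = low_threshold
        self.high = high_threshold
        self.cooldown_ticks = cooldown_ticks
        self.state = State.NORMAL
        self.cooldown_left = 0
        self.downgraded = set()

    def step(self, load_level, load_by_entity):
        if self.state == State.NORMAL and load_level > self.high:
            self.state = State.OVERLOAD
        if self.state == State.OVERLOAD:
            # Downgrade the normal-status entity with the highest load
            # contribution (one per step, for brevity).
            candidates = {e: l for e, l in load_by_entity.items()
                          if e not in self.downgraded}
            if candidates:
                self.downgraded.add(max(candidates, key=candidates.get))
            if load_level < self.low:
                # Load has fallen: enter recovery and start the cool-down.
                self.state = State.RECOVERY
                self.cooldown_left = self.cooldown_ticks
        elif self.state == State.RECOVERY:
            self.cooldown_left -= 1
            if self.cooldown_left <= 0:
                # Cool-down over: restore every downgraded entity.
                self.downgraded.clear()
                self.state = State.NORMAL
```

The cool-down period keeps the protector from flapping between states when load hovers near a threshold.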
-
Patent number: 11055438
Abstract: In response to a request for launching a program, a list of one or more application frameworks to be accessed by the program during execution of the program is determined. Zero or more entitlements representing one or more resources entitled by the program during the execution are determined. A set of one or more rules based on the entitlements of the program is obtained from at least one of the application frameworks. The set of one or more rules specifies one or more constraints of resources associated with the at least one application framework. A security profile is dynamically compiled for the program based on the set of one or more rules associated with the at least one application framework. The compiled security profile is used to restrict the program from accessing at least one resource of the at least one application framework during the execution of the program.
Type: Grant
Filed: March 4, 2016
Date of Patent: July 6, 2021
Assignee: Apple Inc.
Inventors: Ivan Krstic, Austin G. Jennings, Richard L. Hagy
-
Patent number: 11055142
Abstract: Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. Each component may have an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 6, 2021
Assignee: Snowflake Inc.
Inventors: Thierry Cruanes, Igor Demura, Varun Ganesh, Prasanna Rajaperumal, Libo Wang, Jiaqi Yan
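The global half of this two-level scheme can be sketched as a demand-proportional division of the pool, with an optional per-component cap standing in for the throttling the abstract mentions. This is an illustrative sketch only; the patent does not specify the division rule:

```python
def global_allocate(pool_size, demands, cap=None):
    """Divide a global resource pool among local components by demand.

    demands: {component_name: reported_demand}. Each component receives
    a share proportional to its demand, optionally throttled by `cap`.
    """
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {name: 0 for name in demands}
    grants = {}
    for name, demand in demands.items():
        # Integer proportional share of the pool for this component.
        share = pool_size * demand // total_demand
        if cap is not None:
            # Throttle: no component may exceed the per-component limit.
            share = min(share, cap)
        grants[name] = share
    return grants
```

Each local component would then run its own assignment of the granted resources to its competing requests, reporting updated demand back to the global component in the next round of the dialog.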
-
Patent number: 11048557
Abstract: Methods, computer-readable media, and systems are included for generating information about latency ratings corresponding to a memory pool and a CPU pool. An example method includes, for each CPU of the CPU pool, estimating a first latency rating for said each CPU towards the memory pool, and for each memory unit of the memory pool, estimating a second latency rating for said each memory unit towards the CPU pool. The CPUs are organized into a first plurality of groups of CPUs based on the estimated first latency rating, where each CPU of each group of the first plurality of groups has a first common latency rating towards the memory pool. The memory units are organized into a second plurality of groups of memory units based on the estimated second latency rating, where each memory unit of each group of the second plurality of groups has a second common latency rating towards the CPU pool.
Type: Grant
Filed: October 25, 2018
Date of Patent: June 29, 2021
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Daniel Turull, Vinay Yadhav
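The grouping step (partition units so that every member of a group shares a common latency rating towards the opposite pool) is a straightforward bucketing. A minimal sketch, assuming the ratings have already been estimated and bucketed into discrete values; the estimation step itself is out of scope:

```python
from collections import defaultdict

def group_by_latency(latency_ratings):
    """Group units so that each group shares a common latency rating.

    latency_ratings: {unit_id: latency_rating_towards_the_other_pool}.
    Returns {latency_rating: [unit_ids with that rating]}.
    """
    groups = defaultdict(list)
    for unit, rating in latency_ratings.items():
        groups[rating].append(unit)
    return dict(groups)
```

The same function serves both directions: CPUs rated towards the memory pool, and memory units rated towards the CPU pool.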
-
Patent number: 11048632
Abstract: A method of assigning I/O requests to CPU cores of a data storage system includes, in a first operating mode, assigning I/O requests to CPU cores based on port affinity while maintaining a current I/O completion count, and regularly performing a first test-and-switch operation that includes (i) for a sample interval, temporarily assigning the I/O requests to the CPU cores based on core availability while obtaining a sample I/O completion count, (ii) comparing the sample I/O completion count to the current I/O completion count, and (iii) based on the sample I/O completion count being greater than the current I/O completion count, switching to a second operating mode. In the second operating mode, I/O requests are assigned to the CPU cores based on core availability, and similar operations are performed for periodically testing whether to switch to the first operating mode.
Type: Grant
Filed: April 30, 2019
Date of Patent: June 29, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Philippe Armangau, Bruce E. Caram, Rustem Rafikov
-
Patent number: 11042527
Abstract: Systems and methods are described herein for system critical phase lock job inhibitors. Acquisition of a consistent change exclusive lock is initiated. A job request having a scope object is received. Execution of the job request and generation of a replacement job associated with the job request is prohibited based on the scope object indicating that the job requires consistent change access during the consistent change exclusive lock.
Type: Grant
Filed: May 15, 2018
Date of Patent: June 22, 2021
Assignee: SAP SE
Inventors: Tobias Scheuer, Dirk Thomsen
-
Patent number: 11042415
Abstract: Concepts for sharing processing resources of a multi-tenant extract-transform-load (ETL) system are presented. In such concepts, the total workload of the multi-tenant ETL system is considered along with the queued workload of a tenant in order to control delivery of the queued workload to the system. Such control is undertaken, for example, by delaying the work of the tenant. Proposed embodiments therefore seek to devise a policy that achieves fairness amongst tenants.
Type: Grant
Filed: November 18, 2019
Date of Patent: June 22, 2021
Assignee: International Business Machines Corporation
Inventors: Alexander Robert Wood, Chengxuan Xing, Doina Liliana Klinger
-
Patent number: 11042654
Abstract: Metadata describing access control capabilities of a database technology resource is received from an access control system. Access restrictions for accessing data of the database resource by users of an application that have a role are received from an application developer. A role maintenance user interface is generated, using the metadata, for assigning the role to users of the application. Attribute values for creating an instance of the role for a user are received, using the role maintenance user interface. The instance of the role is created for the user based on the received attribute values and the access restrictions. A request from the application for the user to access the database resource is received by the access control system when the user is logged into the application. The access restrictions are applied by the access control system in the database resource when the database resource is accessed.
Type: Grant
Filed: December 11, 2018
Date of Patent: June 22, 2021
Assignee: SAP SE
Inventors: Kathrin Nos, Michael Engler, Matthias Vogel
-
Patent number: 11036516
Abstract: A parallel distributed processing control system used in production distribution planning includes: a storage unit storing step information of steps constituting a production distribution process of a product, CPU information of CPUs that calculate a value of a simulation result for the step, and a constraint value in the production distribution process; a divided model generation unit generating a divided model by grouping the steps; a CPU allocation unit allocating the divided model to the plurality of CPUs; an engine execution unit enabling the CPU to calculate the value for the step constituting the divided model; and a constraint monitoring unit determining whether the value satisfies a condition specified by the constraint value. An output information generation unit generates result information using the value satisfying the condition, and the CPU allocation unit allocates the divided model so that processing loads of the plurality of CPUs are equalized.
Type: Grant
Filed: June 11, 2019
Date of Patent: June 15, 2021
Assignee: HITACHI, LTD.
Inventors: Atsuki Kiuchi, Tazu Nomoto, Yasuo Bakke, Takahiro Ogura
-
Patent number: 11030169
Abstract: Processing and storage responsibility for a data set may be split according to separately stored shards of the data set. As one or more loads associated with shards of the dataset grow, a re-sharding operation may be performed to reduce loading of particular shards and the nodes that host the particular shards. A re-sharding operation may cause only a sub-set of a set of shards of the dataset to be split, and only cause second portions of the split shards to be stored in additional computing nodes. In some embodiments, the number of shards to be included in the sub-set of shards to be split may be selected based on an overall number of shards in the set and the largest number in the Fibonacci sequence that is less than the overall number of shards in the set.
Type: Grant
Filed: March 7, 2017
Date of Patent: June 8, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Ming-Chuan Wu, Sandeep Bhatia, Andrew Whitaker
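The Fibonacci selection rule mentioned at the end of the abstract is easy to sketch: pick the largest Fibonacci number strictly less than the overall shard count. The function name is illustrative, not from the patent:

```python
def shards_to_split(total_shards):
    """Number of shards to split in one re-sharding operation.

    Returns the largest Fibonacci number strictly less than the overall
    number of shards in the set (0 if there is nothing to split).
    """
    if total_shards <= 1:
        return 0
    # Walk the Fibonacci sequence until b reaches total_shards;
    # a then holds the largest Fibonacci number below it.
    a, b = 1, 1
    while b < total_shards:
        a, b = b, a + b
    return a
```

For example, a 10-shard set would split 8 shards, while an 8-shard set would split only 5, so each re-sharding operation touches a bounded fraction of the set rather than every shard.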
-
Patent number: 11030009
Abstract: Systems and methods for automatically scaling compute resources in a compute group. The method includes determining the compute capacity required to complete job requests and determining the allocable compute capacity available on the compute resources in the compute group. The method further includes calculating a utilization of the compute group based on the required compute capacity and allocable compute capacity, and determining whether the calculated utilization is above a first threshold value or below a second threshold value. Upon determining that the calculated utilization is above the first threshold value, the method calculates a number of compute resources required to bring the utilization below the first threshold value and causes an increase in the number of compute resources in the compute group based on the calculated number. Upon determining that the calculated utilization falls below the second threshold value, the method causes a reduction in the number of active compute resources.
Type: Grant
Filed: March 28, 2019
Date of Patent: June 8, 2021
Assignees: ATLASSIAN PTY LTD., ATLASSIAN INC.
Inventors: Jacob Christopher Joseph Gonzalez, Alexander William Price, David Angot, Nicholas Young
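The two-threshold scaling decision can be sketched as below. The threshold values, the per-resource capacity model, and the shrink target are assumptions for illustration; the abstract specifies only that the group grows above the first threshold and shrinks below the second:

```python
import math

def scale_decision(required_capacity, capacity_per_resource,
                   current_resources, high=0.8, low=0.3):
    """Return the new resource count for the compute group.

    utilization = required capacity / allocable capacity. Above `high`,
    grow until utilization drops back under `high`; below `low`, shrink
    (here, down to the smallest count that still keeps utilization
    under `high`); otherwise leave the group unchanged.
    """
    allocable = current_resources * capacity_per_resource
    utilization = required_capacity / allocable
    if utilization > high:
        # Smallest count bringing utilization back under the threshold.
        return math.ceil(required_capacity / (high * capacity_per_resource))
    if utilization < low:
        needed = math.ceil(required_capacity / (high * capacity_per_resource))
        return max(1, min(current_resources, needed))
    return current_resources
```

For instance, 100 units of required capacity on five 10-unit resources gives a utilization of 2.0, so the group grows to 13 resources (100/130 ≈ 0.77, just under the 0.8 threshold).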
-
Patent number: 11030184
Abstract: Systems and methods for database activity monitoring are disclosed. In one embodiment, in an information processing device comprising at least one computer processor, a method for database activity monitoring may include: (1) a database monitor monitoring data from a database system and a user session with the database system; (2) the database monitor comparing the monitored data to at least one threshold; (3) the database monitor executing an automated action in response to the monitored data breaching one of the thresholds; and (4) the database monitor initiating an alert based on the breached threshold.
Type: Grant
Filed: May 3, 2018
Date of Patent: June 8, 2021
Assignee: JPMORGAN CHASE BANK, N.A.
Inventors: Prakash Konkimalla, Christopher Medved