Resource Allocation Patents (Class 718/104)
  • Patent number: 10942775
    Abstract: Embodiments include a computer-implemented method, a system, and a computer program product for modifying central serialization of requests in multiprocessor systems. Some embodiments include receiving an operation requiring resources from a pool of resources, determining an availability of the pool of resources required by the operation, and selecting a queue of a plurality of queues to queue the operation based at least in part on the availability of the pool of resources. Some embodiments also include setting a resource needs register and a needs register for the selected queue, and setting a take-two bit for the selected queue.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: March 9, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Arun Iyengar
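The queue-selection step this abstract describes can be sketched as a simple availability check. This is a hypothetical simplification: the three queue names and the resource-pool representation are invented, and the needs registers and take-two bit are not modeled.

```python
def select_queue(required, pool, queues):
    """Pick a queue for an operation from how many of its required
    resources are currently free in the pool (hypothetical three-way
    split based on full, partial, or no availability)."""
    free = sum(1 for r in required if pool.get(r, 0) > 0)
    if free == len(required):
        return queues["ready"]    # everything available
    if free > 0:
        return queues["partial"]  # partially available
    return queues["blocked"]      # nothing available

queues = {"ready": [], "partial": [], "blocked": []}
op = {"id": 1, "needs": ["bufA", "bufB"]}
# bufB is exhausted, so the operation lands on the partial queue.
select_queue(op["needs"], {"bufA": 2, "bufB": 0}, queues).append(op)
```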
  • Patent number: 10942784
    Abstract: A method, computer system, and computer program product for resource scaling is provided. The present invention may include receiving a request for a plurality of resources from a virtual device. The present invention may then include estimating a resource allocation for the received request based on a predetermined level of service. The present invention may also include estimating a benefit curve of a workload for a plurality of tiers of resources based on the estimated resource allocation. The present invention may further include estimating a performance cost of the workload for the plurality of tiers of resources based on the estimated benefit curve.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ian R. Finlay, Christian M. Garcia-Arellano, Adam J. Storm, Gennady Yakub
  • Patent number: 10942760
    Abstract: Various embodiments, methods, and systems for implementing a predictive rightsizing system are provided. Deployment configurations of virtual machine “VM” deployments are modified to predicted rightsized deployment configurations based on a prediction engine. In operation, a VM deployment, associated with a request to deploy one or more VMs on a node, is accessed at a predictive rightsizing controller. A predicted resource utilization for the VM deployment is generated at the prediction engine, which uses past behaviors and features associated with previous VM deployments, and is accessed at the predictive rightsizing controller. Based on the predicted resource utilization, a predicted rightsized deployment configuration is generated for the VM deployment.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: March 9, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Marcus Felipe Fontoura, Ricardo Gouvea Bianchini, Girish Bablani
  • Patent number: 10943041
    Abstract: An electronic system-level parallel simulation method by means of a multi-core computer system comprises the parallel evaluation of a plurality of concurrent processes of the simulation on a plurality of cores of the computer system, and a sub-method of detecting conflicts of access to a shared memory of a simulated electronic system. The sub-method is implemented by a simulation kernel executed by the computer system and comprises: a step of constructing an oriented graph representing accesses to the shared memory by the evaluated concurrent processes; and a step of detecting loops in the graph, a loop being considered representative of a conflict of access to the shared memory. A computer program product for implementing such a method is provided.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: March 9, 2021
    Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Nicolas Ventroux, Tanguy Sassolas
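The loop-detection step of this abstract is classic cycle detection on a directed graph. A minimal sketch, assuming the access graph has already been built as an adjacency mapping (the node names and dict representation are illustrative):

```python
def has_cycle(access_graph):
    """Depth-first search for a cycle in a directed graph given as
    {node: [successor, ...]}; a cycle in the shared-memory access
    graph is taken as evidence of an access conflict."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {n: WHITE for n in access_graph}

    def visit(n):
        color[n] = GRAY
        for m in access_graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True               # back edge closes a loop
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in access_graph)
```

For example, two simulated processes that each access memory touched by the other form a two-node loop, which `has_cycle` reports as a conflict.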
  • Patent number: 10938989
    Abstract: The present invention is a system and method of continuous sentiment tracking and the determination of optimized agent actions through the training of sentiment models and applying the sentiment models to new incoming interactions. The system receives conversations comprising incoming interactions and agent actions and determines customer sentiment on a micro-interaction level for each incoming interaction. Based on interaction types, the system correlates the determined sentiment with the agent action received prior to the sentiment determination to create and train sentiment models. Sentiment models include agent action recommendations for a desired sentiment outcome. Once trained, the sentiment models can be applied to new incoming interactions to provide customer service representatives (CSRs) with actions that will yield a desired sentiment outcome.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 2, 2021
    Assignee: Verint Americas Inc.
    Inventor: Michael Johnston
  • Patent number: 10929162
    Abstract: A computer implemented method manages execution of applications within a memory space of a multi-tenant virtual machine (MVM). The method includes instantiating a container for an application. The container has a thin client and a name space that is part of a memory space of the MVM. Threads of the application are moved from the MVM to the container. The threads are executed using the thin client in the name space of the container.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: February 23, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Haichuan Wang, Chao Wang, Wei Tang, Fei Yang, Kai-Ting Amy Wang, Man Pok Ho
  • Patent number: 10929175
    Abstract: This disclosure describes techniques that include establishing a service chain of operations that are performed on a network packet as a sequence of operations. In one example, this disclosure describes a method that includes storing, by a data processing unit integrated circuit, a plurality of work unit frames in a work unit stack representing a plurality of service chain operations, including a first service chain operation, a second service chain operation, and a third service chain operation; executing, by the data processing unit integrated circuit, the first service chain operation, wherein executing the first service chain operation generates operation data; determining, by the data processing unit integrated circuit and based on the operation data, whether to perform the second service chain operation; and executing, by the data processing unit integrated circuit, the third service chain operation after skipping the second service chain operation.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: February 23, 2021
    Assignee: Fungible, Inc.
    Inventors: Rajan Goyal, Satyanarayana Lakshmipathi Billa
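The conditional-skip behavior described above can be sketched with a plain function list in place of the patent's work-unit stack on a data processing unit. In this simplification (stage names and the `(result, skip_next)` convention are invented), each operation's output decides whether the next stage runs:

```python
def run_chain(operations, data=None):
    """Run service-chain operations in order; each operation returns
    (result, skip_next), so one stage's output can cause the next
    stage in the chain to be skipped."""
    results = []
    skip = False
    for op in operations:
        if skip:
            skip = False          # skip exactly one stage, then resume
            continue
        data, skip = op(data)
        results.append(data)
    return results

# Hypothetical three-stage chain: the first stage's output decides
# whether the second stage (decryption) is needed at all.
def classify(_):
    return {"encrypted": False}, True      # plaintext: skip decrypt

def decrypt(d):
    return {**d, "decrypted": True}, False

def forward(d):
    return {**d, "forwarded": True}, False

out = run_chain([classify, decrypt, forward])
```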
  • Patent number: 10931594
    Abstract: A unified distributed real-time quota limitation system limits use of shared networked resources in a distributed networked environment (DNE). A dispatch center determines an amount of shared resources available to client devices in the DNE. The dispatch center determines an amount of the shared resources to allocate for use by the clients, and sends the clients one or more policies having a resource usage quota that limits the amount of the resource that each client can use. When a client receives a request to perform a task that requires a shared resource, before running the task, the client determines its own usage of the resource and terminates the task if running the task would exceed the quota limit for the shared resource.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: February 23, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: QingXiao Zheng, Yi Wang, JingRong Zhao, LanJun Liao, Haitao Li
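The client-side check in this abstract (terminate the task before running it if it would exceed the quota) can be sketched as follows. This is a simplification to a single resource in abstract units; the class and method names are invented:

```python
class QuotaClient:
    """Client-side enforcement of a usage quota pushed down by a
    central dispatch policy (simplified to one resource)."""

    def __init__(self, quota):
        self.quota = quota
        self.used = 0

    def run_task(self, task, cost):
        # Terminate the task before running if it would exceed the quota.
        if self.used + cost > self.quota:
            return None
        self.used += cost
        return task()

client = QuotaClient(quota=10)
first = client.run_task(lambda: "done", cost=6)    # runs; usage becomes 6
second = client.run_task(lambda: "done", cost=6)   # rejected: 6 + 6 > 10
```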
  • Patent number: 10915518
    Abstract: A computing system may include a database disposed within a remote network management platform that manages a managed network, and server device(s) associated with the platform and configured to: transmit, to a third-party computing system, a request for general information identifying computing resources of the third-party computing system assigned to the managed network; receive, from the third-party computing system, a response indicating resource names and types of the resources that were identified; based on the response, determine that a first resource is of a first type, and responsively store, in the database, a first representation that has just data fields containing the general information from the response that identifies the first resource; and based on the response, determine that a second resource is of a second type, and responsively store, in the database, a second representation that has data fields arranged to contain specific information about the second resource.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: February 9, 2021
    Assignee: ServiceNow, Inc.
    Inventors: Hail Tal, Yuval Rimar, Asaf Garty
  • Patent number: 10915419
    Abstract: An industrial control apparatus that causes a common processing unit to execute a first execution task for executing processing that does not depend on the number of pieces of data and a second execution task for executing processing that depends on the number of pieces of data, and an assistance apparatus are included. The industrial control apparatus calculates a control load amount of a processing unit incurred by executing the first execution task, and extracts the second execution task from the first and second execution tasks. The assistance apparatus calculates a processing load amount of the processing unit according to the type of the extracted second execution task for the number of pieces of analysis data, and using this processing load amount and the control load amount, calculates a margin of processing that indicates a degree of remaining processing capability of the processing unit.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: February 9, 2021
    Assignee: OMRON CORPORATION
    Inventor: Yuki Ueyama
  • Patent number: 10915362
    Abstract: A computer system includes a plurality of task processing nodes capable of executing tasks and a task management node that determines to which task processing node a new task should be allocated. Each task processing node includes a memory capable of caching data to be used by an allocation task, which is a task allocated to that task processing node. The task management node stores task allocation history information including a correspondence relationship between each allocation task and the respective task processing node. A CPU of the task management node determines a degree of similarity between the new task and the allocated tasks, determines the task processing node to which the new task should be allocated from the task processing nodes included in the task allocation history information based on the degree of similarity, and allocates the new task.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: February 9, 2021
    Assignee: HITACHI, LTD.
    Inventors: Kazumasa Matsubara, Mitsuo Hayasaka
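The similarity-based allocation above can be sketched with Jaccard overlap of task feature sets as a stand-in similarity metric (the metric, feature labels, and history layout are all hypothetical; the patent does not specify them):

```python
def jaccard(a, b):
    """Set-overlap similarity between two task feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def allocate(new_task, history):
    """Allocate the new task to the node whose previously allocated
    task is most similar, on the assumption that the similar task's
    cached data can be reused there."""
    best = max(history, key=lambda h: jaccard(new_task["features"], h["features"]))
    return best["node"]

history = [
    {"node": "n1", "features": {"dataset:A", "op:scan"}},
    {"node": "n2", "features": {"dataset:B", "op:join"}},
]
# A scan over dataset A matches node n1's history, so n1 (which likely
# still caches dataset A) receives the task.
node = allocate({"features": {"dataset:A", "op:scan"}}, history)
```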
  • Patent number: 10915812
    Abstract: In a method of managing a plurality of computing paths in an artificial neural network (ANN) driven by a plurality of heterogeneous resources, resource information, preference level metrics, and a plurality of initial computing paths are obtained by performing an initialization. The resource information represents information associated with the heterogeneous resources. The preference level metrics represent a relationship between the heterogeneous resources and a plurality of operations. The initial computing paths represent computing paths predetermined for the operations. When a first event including at least one of the plurality of operations is to be performed, a first computing path for the first event is set based on the initial computing paths, the preference level metrics, resource environment, and operating environment. The resource environment represents whether the heterogeneous resources are available.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: February 9, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Seung-Soo Yang
  • Patent number: 10908940
    Abstract: A virtual server system includes multiple pools of server components connected via a high-speed communication fabric and a dynamic virtual server manager configured to determine attributes of a workload in multiple workload dimensions and configure a virtual server using server components of the server component pools. The selected server components implement a virtual server configured based on the determined workload attributes in the multiple workload dimensions. Also, the dynamic virtual server manager dynamically adjusts which server components are used to implement the virtual server based on changes in workload attributes.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: February 2, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Munif M. Farhan, Ahmed Mohammed Shihab, Darin Lee Frink
  • Patent number: 10908933
    Abstract: The claimed subject matter includes techniques for providing access to a cloud-based service. An example method includes generating a user interface form compatible with an application program, the user interface form being displayable via a user interface of the application program. The method also includes receiving a user request at a brokerage engine that runs as an extension of the application program through the user interface form. The method further includes identifying a cloud-based service to perform the request, and performing the request using the cloud-based service by translating the request to be compatible with the identified cloud-based service. The method also includes generating a new user interface form integrated with the user interface of the application program, the new user interface form populated with data from the cloud-based service.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: February 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nir Levy, Lee Stott, Justin R. Garrett, Alexander Pshul, Moaid Hathot, Libi Axelrod, Irit Kantor, Itay Yaffe, Amichai Vaknin
  • Patent number: 10908965
    Abstract: Implementations of the present disclosure relate to a method, apparatus and computer program product for processing a computing task. According to one example implementation of the present disclosure, there is provided a method for processing a computing task, comprising: in response to usage of multiple computing resources indicating that at least one part of the computing resources among the multiple computing resources is used, determining a direction of a communication ring between the at least one part of computing resources; in response to receiving a request for processing the computing task, determining the number of computing resources associated with the request; and based on the usage and the direction of the communication ring, selecting from the multiple computing resources a sequence of computing resources which satisfy the number to process the computing task. Other example implementations include an apparatus for processing a computing task and a computer program product thereof.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kun Wang, Jinpeng Liu
  • Patent number: 10901764
    Abstract: In certain embodiments, a computer-implemented method includes accessing user selection data that includes selections associated with computing system resources, determining two or more machine image layers from available machine image layers to instantiate on a particular computing system resource, and determining that a particular machine image layer is not cached locally on one or more memory devices of the particular computing system resource. The method includes, in response to determining that the particular machine image layer is not cached locally on one or more memory devices, accessing a stored copy of the particular machine image layer residing in memory external to the particular computing system resource and caching the stored copy of the particular machine image layer on the one or more memory devices of the particular computing system resource. The method further includes instantiating the particular machine image layer on the particular computing system resource.
    Type: Grant
    Filed: October 13, 2014
    Date of Patent: January 26, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Kevin A. Tegtmeier, Eden G. Adogla, Kent D. Forschmiedt
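The cache-miss path in this abstract (check the local cache, and pull only missing layers from external storage before instantiating) can be sketched as follows. The dict-based cache and store, and the function name, are hypothetical simplifications:

```python
def ensure_layers(needed, local_cache, remote_store):
    """Ensure every machine-image layer is present on the node,
    pulling from external storage only the layers that are not
    already cached locally."""
    pulled = []
    for layer in needed:
        if layer not in local_cache:
            local_cache[layer] = remote_store[layer]  # fetch stored copy
            pulled.append(layer)
    return pulled

cache = {"base": b"base-bytes"}                       # already on the node
remote = {"base": b"base-bytes", "runtime": b"rt-bytes", "app": b"app-bytes"}
pulled = ensure_layers(["base", "runtime", "app"], cache, remote)
```

After the call, only the two uncached layers have been fetched, and all three are available locally for instantiation.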
  • Patent number: 10901797
    Abstract: Techniques for allocating resources including receiving a first sub-stream of a data stream associated with a job and determining a dependency of a plurality of stages of the job. The techniques further include determining a metric for a second sub-stream of the data stream, where processing of the second sub-stream is completed and the metric indicates information associated with the processing of the second sub-stream. The techniques further include allocating resources for processing the first sub-stream based at least in part on the metric and the dependency.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Qing Xu, Yi Shan Jiang, Zhi Xiong Pan, Ting Ting Wen
  • Patent number: 10901795
    Abstract: A data processing method includes receiving a request to perform a calculation, identifying, based on the request, data items needed to perform the calculation and retrieving the data items from a data store, storing, in memory, the items, generating graphs for the calculation, wherein each graph comprises one or more nodes, each node comprising instructions to perform at least a portion of the calculation and at least one data item needed by the portion of the calculation, executing each of the graphs to generate a result for the calculation by traversing the graph and processing each node using the instructions of the node and the at least one data item of the node, wherein executing is performed without accessing the data store, and storing, in the data store, the result.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: January 26, 2021
    Assignee: Xactly Corporation
    Inventors: Gowri Shankar Ravindran, Prashanthi Ramamurthy, Kandarp Mahadev Desai
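The key idea above (retrieve all needed data items up front, then traverse the node graph without touching the data store) can be sketched like this. The graph layout, node shape, and the commission-style numbers are invented for illustration:

```python
def execute(graph, data_store, needed_items):
    """Fetch the data items a calculation needs in one pass, then run
    the node graph entirely in memory: no data-store access occurs
    during execution."""
    memory = {k: data_store[k] for k in needed_items}  # single bulk read
    result = None
    for node in graph:  # nodes assumed to be in dependency order
        result = node["fn"](memory, result)
    return result

store = {"rate": 0.1, "base": 200}
graph = [
    {"fn": lambda mem, _: mem["base"] * mem["rate"]},   # first portion
    {"fn": lambda mem, prev: prev + mem["base"]},       # running total
]
total = execute(graph, store, ["rate", "base"])
```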
  • Patent number: 10896053
    Abstract: Virtual machine (VM) proliferation may be reduced by determining the availability of existing VMs to perform a task. Tasks may be assigned to existing VMs instead of creating a new VM to perform the task. Furthermore, a coordinator may determine a grouping of VMs or VM hosts based on one or more factors associated with the VMs or the VM hosts, such as VM type or geographical location of the VM hosts. The coordinator may also assign one or more Virtual Server Agents (VSAs) to facilitate managing the group of VM hosts. In some embodiments, the coordinators may facilitate load balancing of VSAs during operation, such as during a backup operation, a restore operation, or any other operation between a primary storage system and a secondary storage system.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: January 19, 2021
    Assignee: Commvault Systems, Inc.
    Inventors: Rajiv Kottomtharayil, Rahul S. Pawar, Ashwin Gautamchand Sancheti, Sumer Dilip Deshpande, Sri Karthik Bhagi, Henry Wallace Dornemann, Ananda Venkatesha
  • Patent number: 10891170
    Abstract: In an approach to grouping related tasks, one or more computer processors receive a first task initialization by a first user. The one or more computer processors determine whether one or more additional tasks contained in one or more task groups are in use by the first user. Responsive to determining one or more additional tasks contained in one or more task groups are in use, the one or more computer processors determine whether the first task is related to at least one task of the one or more additional tasks. Responsive to determining the first task is related to at least one task of the one or more additional tasks, the one or more computer processors add the first task to the task group containing the at least one related task of the one or more additional tasks.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: January 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Volker M. Boenisch, Reinhard Buendgen, Franziska Geisert, Jakob C. Lang, Mareike Lattermann, Angel Nunez Mencias
  • Patent number: 10893120
    Abstract: Embodiments for data caching and data-aware placement for machine learning by a processor. Data may be cached in a distributed data store to one or more local compute nodes of cluster of nodes with the cached data. A new job may be scheduled, according to cache and data locality awareness, on the one or more local compute nodes with the cached data needed for execution.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: January 12, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Seetharami R. Seelam, Andrea Reale, Christian Pinto, Yiannis Gkoufas, Kostas Katrinis, Steven N. Eliuk
  • Patent number: 10884808
    Abstract: A method for provisioning a computer includes providing a graph that defines relationships between one or more hardware components of a plurality of computers and component characteristics of the one or more hardware components, and relationships between one or more applications and requirements of the one or more applications. The method further includes receiving a selection of an application and determining, via the graph, whether at least one computer with hardware components capable of meeting the requirements of the application exists. If such a computer exists, the method also includes communicating the application to the computer; triggering the computer to execute the application; and communicating, from the computer, data processed by the application to an external system.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: January 5, 2021
    Assignee: Accenture Global Solutions Limited
    Inventors: Anuraag Chintalapally, Narendra Anand, Srinivas Yelisetty, Michael Giba, Teresa Tung, Carl Dukatz, Colin Puri
  • Patent number: 10884798
    Abstract: A pipeline task verification method and system is disclosed, and may use one or more processors. The method may comprise providing a data processing pipeline specification, wherein the data processing pipeline specification defines a plurality of data elements of a data processing pipeline. The method may further comprise identifying from the data processing pipeline specification one or more tasks defining a relationship between a first data element and a second data element. The method may further comprise receiving for a given task one or more data processing elements intended to receive the first data element and to produce the second data element. The method may further comprise verifying that the received one or more data processing elements receive the first data element and produce the second data element according to the defined relationship.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: January 5, 2021
    Assignee: Palantir Technologies Inc.
    Inventor: Kaan Tekelioglu
  • Patent number: 10884811
    Abstract: Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable dynamic voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: January 5, 2021
    Assignee: Apple Inc.
    Inventors: Jeremy C. Andrus, John G. Dorsey, James M. Magee, Daniel A. Chimene, Cyril de la Cropte de Chanterac, Bryan R. Hinch, Aditya Venkataraman, Andrei Dorofeev, Nigel R. Gamble, Russell A. Blaine, Constantin Pistol
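The mapping from a control effort to a recommended core type and DVFS state can be sketched as a threshold-table lookup. This is a heavy simplification: the core labels, frequencies, and thresholds below are invented, and the real CLPC also applies thermal and power limits.

```python
def recommend(control_effort, table):
    """Map a closed-loop control effort in [0, 1] to a recommended
    (core type, DVFS frequency) from a threshold table sorted by
    ascending effort."""
    for threshold, core, mhz in table:
        if control_effort <= threshold:
            return core, mhz
    return table[-1][1], table[-1][2]

# Invented table for an asymmetric system with efficiency ("E") and
# performance ("P") cores.
TABLE = [(0.3, "E", 1000), (0.6, "E", 1800), (0.8, "P", 2400), (1.0, "P", 3200)]
core, mhz = recommend(0.7, TABLE)   # high effort: performance core
```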
  • Patent number: 10887179
    Abstract: A method of managing the lifecycle of a cloud service modeled as a topology includes, with a processor, generating a topology, the topology representing a cloud service, associating a number of lifecycle management actions (LCMAs) with a number of nodes within the topology, associating a number of policies with a number of nodes within the topology, the policies guiding the lifecycle management of the nodes, and with a lifecycle management engine, executing the topology.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: January 5, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Stephane Herman Maes
  • Patent number: 10884983
    Abstract: The invention relates to header and trailer record validation for batch files. According to an embodiment of the present invention, a computer implemented system implements a Header/Trailer Validation Tool. The Header/Trailer Validation Tool may read a control file containing pertinent information of each file to be validated. Key information may be determined at run time and the Header/Trailer Validation Tool may process the files dynamically. An embodiment of the present invention may also process any number of files in a single execution—this is particularly useful because in some cases a set of files may be received from another application, but not all the files may be used at the same time.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 5, 2021
    Assignee: JPMorgan Chase Bank, N.A.
    Inventor: Robert A. Winiarski
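Header/trailer validation for a batch file can be sketched as below. The record layout (`H|<file-id>` header, `T|<count>` trailer, one record per line between) is an invented assumption; the patent's tool reads the pertinent layout per file from a control file at run time instead:

```python
def validate_batch(lines):
    """Check that a batch file's header and trailer agree with its
    body: the trailer is assumed to carry the expected record count."""
    if not lines or not lines[0].startswith("H|"):
        return False, "missing header"
    if not lines[-1].startswith("T|"):
        return False, "missing trailer"
    expected = int(lines[-1].split("|")[1])
    body = len(lines) - 2                 # records between header and trailer
    if body != expected:
        return False, f"trailer count {expected} != {body} records"
    return True, "ok"

ok, msg = validate_batch(["H|daily", "rec1", "rec2", "T|2"])
```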
  • Patent number: 10884777
    Abstract: An apparatus extracts, for each virtual machine, a first time at which a first live migration has been performed and a first time-interval that has been taken for the first live migration, from log information storing events of the first live migration. The apparatus extracts, for each virtual machine, load information from a load history in which the load information including a CPU usage rate and a memory usage amount is stored at predetermined intervals for each virtual machine, and generates a model that predicts a second time-interval to be taken for a second live migration expected to be performed for each virtual machine, from the load information at the first time and the first time-interval. Upon receiving an instruction for predicting the second time-interval, the apparatus predicts the second time-interval from the model, and provides the predicted second time-interval to be taken for the second live migration.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: January 5, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Fumi Iikura, Yukihiro Watanabe
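The model-building step above can be sketched as a one-feature least-squares fit. This is a stand-in: the patent's model draws on per-VM load history (CPU usage rate and memory usage amount) plus migration log events, while the sketch predicts duration from memory usage alone, and all numbers are hypothetical:

```python
def fit_migration_model(samples):
    """Least-squares line predicting live-migration duration from
    memory usage at migration time (one-feature simplification)."""
    n = len(samples)
    xs = [m for m, _ in samples]
    ys = [t for _, t in samples]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in samples)
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda mem: slope * mem + intercept

# Hypothetical log entries: (memory GiB at migration, observed seconds)
predict = fit_migration_model([(2, 20), (4, 40), (8, 80)])
```

Given these samples the fitted line is roughly ten seconds per GiB, so a 6 GiB VM is predicted to take about 60 seconds to migrate.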
  • Patent number: 10884779
    Abstract: An illustrative embodiment disclosed herein is a host device including a plurality of virtual machines and a controller virtual machine configured to compute a plurality of central processing unit (CPU) usages corresponding to the plurality of virtual machines. The controller virtual machine is further configured to compute a total usage as a sum of the plurality of CPU usages and to flag one or more outlier virtual machines of the plurality of virtual machines responsive to one or more exceeding CPU usages of the one or more outlier virtual machines being greater than a threshold usage. The controller virtual machine is further configured to assign weights to the one or more outlier virtual machines and to select, for virtual machine migration, a first outlier virtual machine of the one or more outlier virtual machines responsive to the total usage being greater than a target usage.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: January 5, 2021
    Assignee: NUTANIX, INC.
    Inventors: Abhishek Kumar, Prerna Saxena, Ramashish Gaurav
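The outlier flagging and migration selection above can be sketched as follows. Weighting outliers by raw CPU usage is an illustrative stand-in for the patent's weight assignment, and the threshold and target values are invented:

```python
def pick_migration_candidate(cpu_usages, threshold, target):
    """Flag VMs whose CPU usage exceeds a threshold; if the host's
    total usage is above the target, return the heaviest outlier
    as the VM to migrate."""
    total = sum(cpu_usages.values())
    outliers = {vm: u for vm, u in cpu_usages.items() if u > threshold}
    if total <= target or not outliers:
        return None                       # host is fine, or no outliers
    return max(outliers, key=outliers.get)

vm = pick_migration_candidate({"vm1": 55.0, "vm2": 30.0, "vm3": 10.0},
                              threshold=40.0, target=80.0)
```

Here total usage (95%) exceeds the target, and only `vm1` exceeds the threshold, so it is selected for migration.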
  • Patent number: 10877801
    Abstract: In an embodiment, a method for scheduling tasks comprises at a task scheduler of a processing node, the processing node being a part of a processing group of a plurality of processing groups: retrieving a first task descriptor from a local memory, the task descriptor corresponding to a task scheduled for execution at the current time and comprising at least a task execution time, a frequency for performing the task, and a task identifier; determining whether the task descriptor is assigned to the processing group associated with the task scheduler for execution; if it is determined that the task descriptor is assigned to the processing group associated with the task scheduler for execution: determining whether the task descriptor is assigned to the task scheduler for execution; if it is determined that the task descriptor is assigned to the task scheduler for execution: executing the task; updating the task execution time based on the current task execution time and the frequency for performing the task; and re-queuing the task descriptor.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: December 29, 2020
    Assignees: ATLASSIAN PTY LTD., ATLASSIAN INC.
    Inventors: Alexander Else, Haitao Li
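The execute/update/re-queue loop for periodic task descriptors can be sketched with a heap keyed on execution time (the tuple layout is an invented simplification; the group and scheduler assignment checks are omitted):

```python
import heapq

def run_due_tasks(queue, now):
    """Pop every task descriptor whose execution time has arrived,
    run it, then re-queue it with the execution time advanced by
    its frequency (period)."""
    executed = []
    while queue and queue[0][0] <= now:
        exec_time, task_id, period, fn = heapq.heappop(queue)
        fn()
        executed.append(task_id)
        heapq.heappush(queue, (exec_time + period, task_id, period, fn))
    return executed

q = []
heapq.heappush(q, (100, "t1", 60, lambda: None))
heapq.heappush(q, (200, "t2", 30, lambda: None))
ran = run_due_tasks(q, now=120)   # only t1 is due; re-queued for t=160
```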
  • Patent number: 10877823
    Abstract: The present disclosure is directed to an in-memory communication infrastructure for an asymmetric multiprocessing system without an external hypervisor, and includes one or more processors and one or more computer-readable non-transitory storage media comprising instructions that, when executed by the one or more processors, cause one or more components to perform operations including identifying data for transmission from a first instance to a second instance, writing, by the first instance, the data into a first ring of a shared memory, the first ring configured as a first transmit ring for the first instance, sending an inter-processor interrupt to the second instance to alert the second instance of the data written into the first ring, reading, by the second instance, the data from the first ring, the first ring configured as a first receive ring for the second instance, and transmitting the data to an application of the second instance.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: December 29, 2020
    Assignee: Cisco Technology, Inc.
    Inventors: Nivin Lawrence, Sandesh K. Rao, Manikandan Veerachamy, Amit Chandra, Tushar Sinha, Manoj Kumar, David W. Duffey
  • Patent number: 10871991
    Abstract: At least one processor of a storage system comprises a plurality of cores and is configured to execute a first thread on a first core of the plurality of cores. The first thread polls at least one interface for an indication of data and, responsive to a detection of an indication of data, processes the data. Responsive to the first thread having no remaining data to be processed, the first thread suspends execution on the first core. The at least one processor is further configured to execute a second thread of a second type on a second core of the plurality of cores. The second thread polls the at least one interface for an indication of data to be processed by the first thread. Responsive to a detection of an indication of data, the second thread causes the first thread to resume execution on the first core.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: December 22, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Amitai Alkalay, Lior Kamran, Eldad Zinger
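The two-thread arrangement above (a polling thread that suspends itself when idle, and a second thread that resumes it when new data arrives) can be approximated in Python with an event flag; the class and function names are illustrative, and a `queue.Queue` stands in for the polled interface.

```python
import threading
import queue
import time

class PolledWorker:
    """First thread: polls the interface for data, processes it, and
    suspends itself when no data remains."""
    def __init__(self):
        self.q = queue.Queue()          # the polled interface
        self.wake = threading.Event()   # resume signal from the monitor
        self.processed = []
        self.stop = False

    def run(self):
        while not self.stop:
            try:
                item = self.q.get_nowait()
                self.processed.append(item)
            except queue.Empty:
                # No remaining data: suspend until the monitor resumes us.
                self.wake.clear()
                self.wake.wait(timeout=1.0)

def monitor(worker, items):
    """Second thread's role: detect data on the interface and cause the
    suspended worker to resume execution."""
    for it in items:
        worker.q.put(it)
        worker.wake.set()
```

The sketch keeps the key property of the claim: the first thread consumes no CPU while suspended, and only the second thread's signal resumes it.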
  • Patent number: 10871996
    Abstract: A system configured to implement detection, modeling and application of memory bandwidth patterns may predict performance of multi-threaded applications with different thread placements. Additionally, such a system may model bandwidth requirements of an application based on placement of the application's threads and may generate a bandwidth signature by sampling performance counters while executing the application using a specific thread placement and determining values for multiple classes of bandwidth, such as static, local, per-thread and interleaved. Performance counters may capture information such as elapsed time, the number of instructions executed, and/or the volume of data read from or written to each memory bank. A bandwidth signature may be used to apply bandwidth requirements to differing thread placements within various types of systems, such as performance prediction systems, data structure libraries, and debugging and development systems.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: December 22, 2020
    Assignee: Oracle International Corporation
    Inventors: Daniel J. Goodman, Timothy L. Harris, Roni T. Haecki
  • Patent number: 10866978
    Abstract: Techniques to respond to user requests using natural-language machine learning based on branching example conversations are described.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: December 15, 2020
    Assignee: FACEBOOK, INC.
    Inventors: Martin Jean Raison, Willy Blandin, Andreea-Loredana Crisan, Stepan Parunashvili, Kemal El Moujahid, Laurent Nicolas Landowski
  • Patent number: 10866792
    Abstract: Systems and methods are provided for managing datasets and source code of a deployment pipeline. A system obtains a deployment pipeline associated with one or more datasets and source code, and obtains one or more deployment pipeline cleaning rules. The system applies the one or more deployment pipeline cleaning rules to the deployment pipeline to identify issues associated with the one or more datasets and issues associated with the source code, and causes generation of a graphical user interface indicating the identified issues.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: December 15, 2020
    Assignee: Palantir Technologies Inc.
    Inventor: Luke Tomlin
  • Patent number: 10862774
    Abstract: A device receives, from a cloud computing environment, application usage information associated with application instances in the cloud computing environment for an application, and processes the application usage information, with a machine learning model, to determine behavior patterns and predicted tasks for the application. The device determines a modified quantity of the application instances based on the behavior patterns and the predicted tasks for the application, and causes the modified quantity of the application instances to be implemented in the cloud computing environment based on one or more rules. The device stores information associated with the modified quantity of the application instances in a data structure, and updates the machine learning model based on the information associated with the modified quantity of the application instances stored in the data structure.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: December 8, 2020
    Assignee: Capital One Services, LLC
    Inventors: Ravi Kiran Palamari, Ragupathi Subburasu, Mayur Gupta
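The scaling loop in the abstract above (predict load, derive a modified instance count, clamp it with rules) can be sketched as follows; a moving average is a toy stand-in for the machine learning model, and every name and threshold here is an assumption for illustration, not from the patent.

```python
def predict_load(history, window=3):
    """Toy stand-in for the trained model: a moving average of recent
    per-instance utilization samples (0.0-1.0)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scale_instances(current, history, target_per_instance=0.6,
                    min_instances=1, max_instances=10):
    """Return the modified quantity of application instances, based on
    the predicted load and clamped by simple rules (the abstract's
    'one or more rules')."""
    predicted_total = predict_load(history) * current
    desired = max(1, round(predicted_total / target_per_instance))
    return max(min_instances, min(max_instances, desired))
```

A production system would also feed the chosen quantity back as training data, as the abstract describes; the sketch covers only the decision step.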
  • Patent number: 10860306
    Abstract: A patching module being executed by a computing device may select a node comprising a first set of instances of a first module, a second set of instances of a second module, and infrastructure software used by the first and second set of instances. The patching module may apply an infrastructure patch to a copy of the infrastructure software to create patched infrastructure. The patching module may stop the first and second set of instances using the infrastructure software and restart the first and second set of instances to use the patched infrastructure. The patching module may determine that a module patch applies to the first module, copy the first module, and apply the module patch to the copy to create a patched module. The patching module may stop the first set of instances of the first module and start a third set of instances of the patched module.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: December 8, 2020
    Assignee: Dell Products L.P.
    Inventors: Pratheek Velusamy, Janardhana Korapala
  • Patent number: 10860362
    Abstract: Methods and apparatus are disclosed that deploy a hybrid workload domain. An example apparatus includes a resource discoverer to determine whether a first bare metal server is available and a resource allocator to allocate virtual servers for a virtual server pool based on an availability of the virtual servers and, when the first bare metal server is available, allocate the first bare metal server for a bare metal server pool. The example apparatus further includes a hybrid workload domain generator to generate, for display in a user interface, a combination of the virtual server pool and the bare metal server pool and generate a hybrid workload domain used to run a user application based on a user selection in the user interface, the hybrid workload domain including virtual servers from the virtual server pool and bare metal servers from the bare metal server pool.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: December 8, 2020
    Assignee: VMware, Inc.
    Inventors: Naren Lal, Ranganathan Srinivasan
  • Patent number: 10862748
    Abstract: A system for identifying networked equipment on which to operate a computer software application has a store of currently available capability information associated with equipment connected to a communication network, a store of software application operational requirement information and an intent driven controller. The controller operates to identify networked equipment on which to run the software application by comparing application requirement information to currently available capability information associated with instances of the networked equipment, looking for capability that satisfies the operational requirements.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: December 8, 2020
    Assignee: OPTUMI, Inc.
    Inventor: Denis Deruijter
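The requirement-to-capability comparison described in the abstract above reduces to a filter over an equipment inventory; the sketch below shows one way to express it, with all dictionary keys (`cpu`, `mem_gb`, `gpu`) invented for illustration rather than drawn from the patent.

```python
def find_candidates(requirements, equipment):
    """Return names of equipment whose currently available capability
    satisfies every operational requirement of the application."""
    matches = []
    for name, capability in equipment.items():
        # Every required quantity must be met or exceeded; a capability
        # the equipment does not report counts as zero.
        if all(capability.get(k, 0) >= v for k, v in requirements.items()):
            matches.append(name)
    return matches
```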
  • Patent number: 10860374
    Abstract: In one embodiment, a system comprises platform logic comprising a plurality of processor cores and resource allocation logic. The resource allocation logic may receive a processing request and direct the processing request to a processor core of the plurality of processor cores, wherein the processor core is selected based at least in part on telemetry data associated with the platform logic, the telemetry data indicating a topology of at least a portion of the platform logic.
    Type: Grant
    Filed: September 26, 2015
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: James Robert Hearn, Patrick Connor, Kapil Sood, Scott P. Dubai, Andrew J. Herdrich
  • Patent number: 10853144
    Abstract: According to examples, an apparatus may include a processor and a memory on which are stored machine readable instructions that when executed by the processor, cause the processor to identify a plurality of tasks, identify a plurality of resources configured to execute the tasks, and decompose the plurality of tasks into multiple groups of tasks based on a plurality of rules applicable to the multiple groups of tasks. The instructions may also cause the processor to, for each group in the multiple groups of tasks, model the group of tasks and a subset of the plurality of resources as a respective resource allocation problem and assign a respective node of a plurality of nodes to solve the resource allocation problem.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: December 1, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Rekha Nanda, Michael V. Ehrenberg, Yanfang Shen, Malvika Malge
  • Patent number: 10853102
    Abstract: The present disclosure includes systems and methods for providing popups, including the following computer-implemented method. A trigger event is received that is generated by detection of a request for a presentation of a pop-up window. Based on the received trigger event, an activity pop-up component is launched that is configured to output the pop-up window, where a launch mode of the activity pop-up component is preconfigured as a single task mode. A determination is made whether the pop-up window output by the activity pop-up component is obscured by a pre-existing pop-up window. Upon determining that the pop-up window output by the activity pop-up component is obscured by the pre-existing pop-up window, the activity pop-up component is relaunched to trigger movement of the pop-up window to the top of an activity stack to force a non-obscured display of the pop-up window.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: December 1, 2020
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Xiangyu Zhao, Liangzi Ding
  • Patent number: 10853833
    Abstract: An information handling system includes a first processor that provides a shopping system and a second processor that provides a purchasing system. The shopping system includes a deal page that displays a coupon associated with a product and that receives a request to purchase the product from a purchaser, wherein the coupon provides a deal to the purchaser for the purchase of the product, and wherein the coupon is provided based upon a limit, and a coupon allocator that allocates the coupon to the purchaser in response to the request, and in further response to a first determination that the limit is not exceeded. The purchasing system includes a purchase page that displays a purchase of the product by the purchaser in response to the request, and a coupon redeemer that redeems the coupon for the product in response to a second determination that the limit is not exceeded.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: December 1, 2020
    Assignee: Dell Products, L.P.
    Inventors: Harsh N. Acharya, Rajesh B. Kaimal, Matthew Hinze, Parth Narendra Acharya, David M. Gardner
  • Patent number: 10855743
    Abstract: A system for marking and transferring data of interest includes an interface and a processor. The interface is configured to receive an indication to mark data of interest. The processor is configured to: determine whether to generate a transfer request for the data of interest based at least in part on a viewing likelihood estimate, and in response to a determination to generate the transfer request for the data of interest, generate the transfer request for the data of interest.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: December 1, 2020
    Assignee: Lytx, Inc.
    Inventors: Jeremiah Todd Wittevrongel, Stephen Krotosky
  • Patent number: 10853137
    Abstract: Techniques are described herein for allocating and rebalancing computing resources for executing graph workloads in manner that increases system throughput. According to one embodiment, a method includes receiving a request to execute a graph processing workload on a dataset, identifying a plurality of graph operators that constitute the graph processing workload, and determining whether execution of each graph operator is processor intensive or memory intensive. The method also includes assigning a task weight for each graph operator of the plurality of graph operators, and performing, based on the assigned task weights, a first allocation of computing resources to execute the plurality of graph operators. Further, the method includes causing, according to the first allocation, execution of the plurality of graph operators by the computing resources, and monitoring computing resource usage of graph operators executed by the computing resources according to the first allocation.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: December 1, 2020
    Assignee: Oracle International Corporation
    Inventors: Vlad Ioan Haprian, Iraklis Psaroudakis, Alexander Weld, Oskar Van Rest, Sungpack Hong, Hassan Chafi
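The first allocation pass described in the abstract above (classify each graph operator as processor- or memory-intensive, weight it, then assign resources) can be sketched like this; the flops-per-byte classification rule and all field names are assumptions made for the example, not details from the patent.

```python
def classify(op):
    """Decide whether a graph operator is processor-intensive or
    memory-intensive (here via a declared compute/traffic ratio)."""
    return "cpu" if op["flops_per_byte"] > 1.0 else "mem"

def allocate(operators, cpu_slots, mem_slots):
    """First allocation of computing resources: assign heavier operators
    first, each from the pool matching its classification."""
    plan = {}
    for op in sorted(operators, key=lambda o: -o["weight"]):
        pool = cpu_slots if classify(op) == "cpu" else mem_slots
        # None means the operator must wait for a resource to free up.
        plan[op["name"]] = pool.pop(0) if pool else None
    return plan
```

The monitoring and rebalancing step the abstract mentions would re-run this pass with updated weights derived from observed usage.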
  • Patent number: 10853531
    Abstract: The exemplary embodiments of the invention provide at least a method, apparatus and system to perform operations including receiving context data from an electronic device, causing, at least in part based on the received context data, an identification of at least one context model compatible with the electronic device, and causing, at least in part, provision of the electronic device with the at least one compatible context model. In addition, the exemplary embodiments of the invention further provide at least a method, apparatus and system to perform operations including causing, at least in part, a provision of context data associated with an electronic device to a context inference service, in response, receiving a context model from the context inference service, and causing adaptation of the received context model as a current context model of the electronic device.
    Type: Grant
    Filed: November 2, 2011
    Date of Patent: December 1, 2020
    Assignee: Nokia Technologies Oy
    Inventors: Xueying Li, Huanhuan Cao, Jilei Tian
  • Patent number: 10853148
    Abstract: Migrating workloads between a plurality of execution environments, including: identifying, in dependence upon on characteristics of a workload, one or more execution environments that can support the workload; determining, for each execution environment, costs associated with supporting the workload on the execution environment; selecting, in dependence upon the costs associated with supporting the workload on each the execution environments, a target execution environment for supporting the workload; and executing the workload on the target execution environment.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: December 1, 2020
    Assignee: Pure Storage, Inc.
    Inventors: Chadd Kenney, Farhan Abrol, Lei Zhou, Yi-Chin Wu, Apoorva Bansal
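The selection step in the abstract above (filter environments that can support the workload, then choose by cost) has a compact expression in code; the workload characteristics and cost field used below are illustrative assumptions, not the patent's actual criteria.

```python
def select_environment(workload, environments):
    """Pick the lowest-cost execution environment whose capacity
    supports the workload's characteristics; None if nothing fits."""
    viable = [e for e in environments
              if e["cpu"] >= workload["cpu"]
              and e["mem_gb"] >= workload["mem_gb"]]
    if not viable:
        return None
    # Cost here is a single scalar; the patent's cost determination
    # could fold in migration and egress costs as well.
    return min(viable, key=lambda e: e["cost_per_hour"])["name"]
```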
  • Patent number: 10841175
    Abstract: Novel tools and techniques are provided for implementing model driven service state machine linkage functionality amongst different machines and/or networks. In some embodiments, a computing system of a first network associated with a first entity might establish a communication link with a node of a second network associated with a second entity. The computing system might determine whether there is a common network resource state schema between the two networks, and, if so, might identify available versions, then negotiate which version to use as common version. The computing system might retrieve network state information for the two networks, might generate a user interface that incorporates and presents the network state information for the two disparate networks consistent with the common version of the common schema, and might send the user interface to a user device of a user for display of the network state information of the two disparate networks.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: November 17, 2020
    Assignee: CenturyLink Intellectual Property LLC
    Inventor: Michael K. Bugenhagen
  • Patent number: 10838480
    Abstract: A multiple graphics processing unit (GPU) based parallel graphics system comprising multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation. Each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem. According to the principles of the present invention, pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and the video memory and the pixel processing subsystem in the primary GPU are used to carry out the image recomposition process, without the need for dedicated or specialized apparatus.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: November 17, 2020
    Assignee: Google LLC
    Inventor: Reuven Bakalash
  • Patent number: 10838622
    Abstract: Embodiments of the present disclosure provide a computer-implemented method and an apparatus for a storage system. The method comprises: in response to receiving a read request of a first container for data in a storage device, obtaining an identifier associated with the read request; searching for metadata of the read request in a metadata set based on the identifier, the metadata recording addressing information of the read request, the metadata set including metadata of access requests for the storage device during a past period; and in response to finding the metadata of the read request in the metadata set, determining, based on the metadata, a cached page of a second container storing the data; and providing the cached page from the second container to the first container to avoid reading the data from the storage device.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: November 17, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Fan Guo, Kun Wang
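The cross-container cache lookup in the abstract above keys a metadata set by a request identifier so a second container's cached page can be served instead of re-reading the device; the sketch below compresses that flow into one class, with all names and the string page representation invented for illustration.

```python
class PageCacheBroker:
    """Serve a read from another container's cached page when the
    metadata set already records the same request addressing info."""
    def __init__(self):
        self.metadata = {}      # request identifier -> (container, address)
        self.device_reads = 0   # reads that actually hit the storage device

    def read(self, container, req_id, address):
        hit = self.metadata.get(req_id)
        if hit and hit[1] == address:
            # Metadata found: provide the cached page from the container
            # that already holds it, avoiding a device read.
            return f"page@{address} (from {hit[0]})"
        # Miss: go to the device and record addressing info for reuse.
        self.device_reads += 1
        self.metadata[req_id] = (container, address)
        return f"page@{address} (from device)"
```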
  • Patent number: 10839018
    Abstract: An embodiment of the present invention evaluates plural expressions. A model is generated and configured to evaluate a plurality of expressions each including one or more expression tokens and indicating a data pattern. The model includes a plurality of nodes with one or more of the nodes associated with an expression token and one or more links between the nodes. The links are associated with information indicating each expression including each expression token associated with nodes connected by the links. Data including one or more data tokens is applied to the model. The nodes of the model are traversed over one or more corresponding links based on the one or more data tokens within the data corresponding to expression tokens associated with the nodes. Expressions corresponding to the data are determined based on the expressions associated with the one or more corresponding links.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: November 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kush Baronj, Praveen Devarao, Trent Gray-Donald
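The node/link model in the abstract above resembles a trie whose links record which expressions pass through them; traversing it with data tokens and intersecting those link sets yields the expressions consistent with the data. The sketch below is a simplified reading of that idea, with its data layout chosen for the example rather than taken from the patent.

```python
def build_model(expressions):
    """Build a token graph: each link stores the set of expressions
    whose token sequence traverses it."""
    root = {}
    for expr_id, tokens in expressions.items():
        node = root
        for tok in tokens:
            link = node.setdefault(tok, {"exprs": set(), "next": {}})
            link["exprs"].add(expr_id)
            node = link["next"]
    return root

def match(model, data_tokens):
    """Traverse the model over the data tokens and return expressions
    associated with every link followed."""
    node, live = model, None
    for tok in data_tokens:
        if tok not in node:
            return set()            # no node for this data token
        link = node[tok]
        live = link["exprs"] if live is None else live & link["exprs"]
        node = link["next"]
    return live or set()
```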