Patents Examined by Kevin X Lu
  • Patent number: 11307898
    Abstract: The present disclosure involves systems, software, and computer-implemented methods for resource allocation and management. One example method includes receiving a request, including a first application priority, to run a task for an application. At least one second application priority is identified. A maximum number of parallel tasks per application priority is determined. Application priority weights are assigned to the first application priority and the second application priorities. Application priority divisors are determined, for the first application priority and the second application priorities, based on a respective application priority weight and a number of currently running applications of a respective application priority. The numbers of parallel tasks for the first application and the other applications are determined based on the maximum number of allowable parallel tasks per application, an overall divisor, and a respective application priority weight.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: April 19, 2022
    Assignee: SAP SE
    Inventors: Alain Gauthier, Martin Parent, Edgar Lott
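    Illustrative sketch: the allocation arithmetic above can be pictured with a short Python sketch. The priority names, the weight values, and the exact divisor formula are assumptions for illustration, not the claimed method.
```python
# Hypothetical priority-weighted split of a per-priority task budget.
PRIORITY_WEIGHTS = {"high": 4, "medium": 2, "low": 1}  # assumed weights

def parallel_tasks_per_app(max_parallel_tasks, running_apps_by_priority):
    """Return how many parallel tasks one application of each priority may run.

    running_apps_by_priority maps a priority to the number of currently
    running applications at that priority.
    """
    # Overall divisor: the weighted count of all running applications.
    overall_divisor = sum(
        PRIORITY_WEIGHTS[prio] * count
        for prio, count in running_apps_by_priority.items()
    )
    if overall_divisor == 0:
        return {}
    # Each application receives a share proportional to its priority weight.
    return {
        prio: max(1, (max_parallel_tasks * PRIORITY_WEIGHTS[prio]) // overall_divisor)
        for prio, count in running_apps_by_priority.items()
        if count > 0
    }

# Example: a 32-task budget shared by 2 high-, 3 medium- and 5 low-priority apps.
print(parallel_tasks_per_app(32, {"high": 2, "medium": 3, "low": 5}))
# -> {'high': 6, 'medium': 3, 'low': 1}
```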
  • Patent number: 11301142
    Abstract: The current document is directed to an efficient and non-blocking mechanism for flow control within a multi-processor or multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, is used to control access to a shared computational resource within a multi-processor system or multi-core processor. The efficient and non-blocking mechanism for flow control, to which the current document is directed, distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding significant computational overheads associated with cache-controller contention-control for a traditional, centralized access pool and associated with use of locking operations for access to the access pool.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: April 12, 2022
    Assignee: VMware, Inc.
    Inventor: Adrian Marinescu
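    Illustrative sketch: a rough Python analogue (not the patented mechanism) of distributing local access pools so the fast path never touches a shared lock. The credit counts and class layout are assumptions.
```python
import threading

class PerCorePools:
    """Per-core credit pools for a shared resource, with a locked global fallback."""

    def __init__(self, cores=4, total_credits=64):
        # Each core gets a private slice of the resource's credits; only that
        # core touches its slice, so the fast path needs no locking.
        self.local = [total_credits // cores] * cores
        self.global_pool = total_credits - (total_credits // cores) * cores
        self.global_lock = threading.Lock()  # used only on the slow path

    def acquire(self, core_id):
        if self.local[core_id] > 0:          # fast, contention-free path
            self.local[core_id] -= 1
            return True
        with self.global_lock:               # slow path: locked central pool
            if self.global_pool > 0:
                self.global_pool -= 1
                return True
        return False

    def release(self, core_id):
        self.local[core_id] += 1             # return the credit to the local pool

pools = PerCorePools()
print(pools.acquire(0), pools.acquire(3))    # True True
```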
  • Patent number: 11250357
    Abstract: In an embodiment, described herein is a system and method for creating a suggested task set to meet a target value. A cloud server, in response to receiving a request specifying a target value, retrieves completed task sets from a database. Each completed task set includes a same set of task categories. The cloud server derives a number of ratios from the retrieved completed task sets, including a composition ratio and a conversion rate for each task category, and an addition ratio for the number of completed task sets. Based on the derived ratios and the specified target value, the cloud server constructs the suggested task set, and displays in real-time the suggested task set together with current values for the task categories. The cloud server alerts users of a discrepancy between a current value and the corresponding suggested value for a task category when the discrepancy reaches a predetermined level.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: February 15, 2022
    Assignee: CLARI INC.
    Inventors: Xin Xu, Chunyue Du, Xincheng Ma, Kaiyue Wu, Venkat Rangan
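    Illustrative sketch: one way (an assumption, not necessarily the claimed construction) to turn the derived ratios into a suggested task set, reading the conversion rate as the expected value produced per completed task in a category.
```python
import math

def suggest_task_set(target_value, composition_ratios, conversion_rates):
    """composition_ratios: category -> fraction of the target carried by that
    category (fractions sum to 1); conversion_rates: category -> expected
    value produced per completed task in that category."""
    suggestion = {}
    for category, share in composition_ratios.items():
        required_value = target_value * share
        suggestion[category] = math.ceil(required_value / conversion_rates[category])
    return suggestion

print(suggest_task_set(
    100_000,
    {"calls": 0.2, "meetings": 0.5, "proposals": 0.3},
    {"calls": 250, "meetings": 2_000, "proposals": 6_000},
))
# -> {'calls': 80, 'meetings': 25, 'proposals': 5}
```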
  • Patent number: 11226839
    Abstract: A system is provided and includes a plurality of machines. The plurality of machines includes a first generation machine and a second generation machine. Each of the plurality of machines includes a machine version. The first generation machine executes a first virtual machine and a virtual architecture level. The second generation machine executes a second virtual machine and the virtual architecture level. The virtual architecture level provides a compatibility level for a complex interruptible instruction to the first and second virtual machines. The compatibility level is architected for a lowest common denominator machine version across the plurality of machines. The compatibility level includes a lowest common denominator indicator identifying the lowest common denominator machine version.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: January 18, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthias Klein, Bruce Conrad Giamei, Anthony Thomas Sofia, Mark S. Farrell, Scott Swaney, Timothy Siegel
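    Illustrative sketch: the compatibility level is architected for the lowest common denominator machine version, which reduces to taking the minimum version across the fleet. The version tuples and machine names below are illustrative.
```python
def lowest_common_denominator(machine_versions):
    """Return the oldest (lowest) machine version across the plurality of machines."""
    return min(machine_versions.values())

fleet = {"gen1-host": (14, 2), "gen2-host": (15, 0)}
compatibility_level = lowest_common_denominator(fleet)
# Both generations of virtual machines are offered this single level for the
# complex interruptible instruction.
print(compatibility_level)   # (14, 2)
```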
  • Patent number: 11221986
    Abstract: Provided is a data management method capable of deleting intermediate data at an appropriate time. The data management method, in a data analysis system that performs analysis by combining a plurality of input data based on an analysis execution request from a computer, includes: a first step, in which a request analysis unit analyzes the analysis execution request from the computer to identify a task, identifies intermediate data generated after execution of each identified task, and generates constraint information that determines whether to delete the identified intermediate data; a second step, in which a task management unit determines whether to delete the intermediate data based on the constraint information for each identified task; and a third step, in which a task execution unit executes the identified task and deletes the intermediate data of the task based on a determination result of the second step.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: January 11, 2022
    Assignee: Hitachi, Ltd.
    Inventors: Jun Mizuno, Yuichi Taguchi, Soichi Takashige
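    Illustrative sketch: the three steps above, with the constraint reduced to "keep an intermediate data set only while a later task still consumes it". The task fields and the pipeline shape are assumptions.
```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    produces: str                          # intermediate data set this task generates
    consumes: list = field(default_factory=list)

def still_needed(data_name, pending_tasks):
    return any(data_name in t.consumes for t in pending_tasks)

def run_pipeline(tasks, execute):
    intermediate = {}
    for i, task in enumerate(tasks):
        intermediate[task.produces] = execute(task, intermediate)
        # Deletion decision per the constraint: drop data no later task consumes.
        for data_name in list(intermediate):
            if not still_needed(data_name, tasks[i + 1:]):
                del intermediate[data_name]
    return intermediate

tasks = [
    Task("join", produces="joined"),
    Task("aggregate", produces="agg", consumes=["joined"]),
    Task("report", produces="final", consumes=["agg"]),
]
# (A real system would persist the terminal result before this cleanup.)
print(run_pipeline(tasks, lambda t, data: f"<{t.name} output>"))   # {}
```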
  • Patent number: 11216314
    Abstract: Systems and methods are provided for dynamically reallocating resources during run-time execution of workloads in a distributed accelerator-as-a-service computing system to increase workload execution performance and resource utilization. A workload is executed in the distributed accelerator-as-a-service computing system using an initial set of resources allocated to the executing workload, wherein the allocated resources include accelerator resources (e.g., physical and/or virtual accelerator resources). The performance of the executing workload is monitored to detect a bottleneck condition which causes a decrease in the performance of the executing workload. In response to detecting the bottleneck condition, another set of resources is reallocated to the executing workload, which is determined to reduce or eliminate the bottleneck condition.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: January 4, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: John S. Harwood, Assaf Natanzon
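    Illustrative sketch: the run-time loop in miniature; a drop in measured throughput is treated as the bottleneck condition and triggers reallocation to a larger resource set. The tiers, threshold, and sample values are assumptions.
```python
RESOURCE_TIERS = [
    {"vCPUs": 8,  "virtual_accelerators": 1},
    {"vCPUs": 16, "virtual_accelerators": 2},
    {"vCPUs": 16, "virtual_accelerators": 4},
]

def reallocate_on_bottleneck(throughput_samples, drop_threshold=0.8):
    """throughput_samples: successive throughput measurements of the workload."""
    tier, baseline = 0, None
    for throughput in throughput_samples:
        baseline = baseline or throughput
        if throughput < drop_threshold * baseline and tier + 1 < len(RESOURCE_TIERS):
            tier += 1                 # bottleneck detected: grow the resource set
            baseline = None           # re-establish the baseline after reallocation
            print("reallocating to", RESOURCE_TIERS[tier])
    return RESOURCE_TIERS[tier]

print(reallocate_on_bottleneck([100, 98, 70, 95, 60, 97]))
```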
  • Patent number: 11194620
    Abstract: Systems and methods for preferential treatment of a prioritized virtual machine during migration of a group of virtual machines from a first virtualized computing environment to a second virtualized computing environment. A data structure is allocated to store virtual machine migration task attributes that are associated with a plurality of in-process virtual machine migration tasks. As migration proceeds, the migration task attributes in the data structure are updated to reflect ongoing migration task scheduling adjustments and ongoing migration task resource allotments. A user interface or other process indicates a request to prioritize migration of a particular one of the to-be-migrated virtual machines. Based on the request, at least some of the virtual machine migration task attributes are modified to indicate a reduced scheduling priority of some of the to-be-migrated virtual machine migration tasks so as to preferentially deliver computing resources to the prioritized virtual machine migration tasks.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: December 7, 2021
    Assignee: Nutanix, Inc.
    Inventors: Heiko Friedrich Koehler, Sameer Narkhede, Venkatesh Kothakota
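    Illustrative sketch: a migration-task attribute table and a prioritize() step that raises one VM's task while lowering the scheduling priority and resource allotment of the others. The attribute names and values are assumptions.
```python
from dataclasses import dataclass

@dataclass
class MigrationTask:
    vm_name: str
    scheduling_priority: int = 5       # 1 = highest priority
    bandwidth_share: float = 1.0       # relative resource allotment

def prioritize(tasks, vm_name):
    for task in tasks:
        if task.vm_name == vm_name:
            task.scheduling_priority = 1
            task.bandwidth_share = 2.0
        else:                          # reduced priority for the other migrations
            task.scheduling_priority = max(task.scheduling_priority, 8)
            task.bandwidth_share = 0.5
    return sorted(tasks, key=lambda t: t.scheduling_priority)

tasks = [MigrationTask("vm-a"), MigrationTask("vm-b"), MigrationTask("vm-c")]
for task in prioritize(tasks, "vm-b"):
    print(task)
```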
  • Patent number: 11175937
    Abstract: The present disclosure is directed to emulating special-purpose hardware devices using virtual hardware. A process in accordance with various implementations consistent with the present disclosure includes emulating hardware devices using virtual devices of a virtualization system configured to emulate the hardware devices. The process also includes installing in a physical system, instances of the virtualization system including the virtual devices. The process further includes emulating the hardware devices of the physical system using the virtual devices. Additionally, the process includes communicating with equipment of the physical system using the virtual devices.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: November 16, 2021
    Assignee: THE BOEING COMPANY
    Inventors: Jason W. Shelton, Jonathan N. Hotra, Timothy M. Mitchell
  • Patent number: 11163677
    Abstract: Dynamically allocated thread storage in a computing device is disclosed. The dynamically allocated thread storage is configured to work with a process including two or more threads. Each thread includes a statically allocated thread-local slot configured to store a table. Each table is configured to include a table slot corresponding with a dynamically allocated thread-local value. A dynamically allocated thread-local instance corresponds with the table slot.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: November 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Igor Ostrovsky, Joseph E. Hoag, Stephen H. Toub, Mike Liddell
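    Illustrative sketch: a Python rendering of the described layout, where the statically allocated thread-local slot holds a per-thread table and each dynamically allocated thread-local instance owns an index (slot) into that table. Names are assumptions.
```python
import itertools
import threading

_slot_ids = itertools.count()        # process-wide allocator of table slots
_table = threading.local()           # the statically allocated thread-local slot

class DynamicThreadLocal:
    def __init__(self, factory):
        self.slot = next(_slot_ids)  # this instance's slot in every thread's table
        self.factory = factory

    def get(self):
        table = getattr(_table, "values", None)
        if table is None:
            table = _table.values = {}          # lazily create this thread's table
        if self.slot not in table:
            table[self.slot] = self.factory()   # fill the slot on first use
        return table[self.slot]

counter = DynamicThreadLocal(lambda: 0)

def worker():
    print(threading.current_thread().name, counter.get())  # each thread sees its own value

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```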
  • Patent number: 11150961
    Abstract: Methods, systems and apparatuses for graph processing are disclosed. One graph streaming processor includes a thread manager, wherein the thread manager is operative to dispatch operation of the plurality of threads of a plurality of thread processors before dependencies of the dependent threads have been resolved, maintain a scorecard of operation of the plurality of threads of the plurality of thread processors, and provide an indication to at least one of the plurality of thread processors when a dependency request of at least one of the plurality of threads has or has not been satisfied. Further, a producer thread provides a response to the dependency when the dependency has been satisfied, and each of the plurality of thread processors is operative to provide processing updates to the thread manager and provide queries to the thread manager upon reaching a dependency.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: October 19, 2021
    Assignee: Blaize, Inc.
    Inventors: Lokesh Agarwal, Sarvendra Govindammagari, Venkata Ganapathi Puppala, Satyaki Koneru
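    Illustrative sketch: the scorecard interaction only, without real hardware threads; the manager records progress updates, answers dependency queries, and marks a dependency satisfied when the producer responds. All names are assumptions.
```python
class ThreadManager:
    def __init__(self):
        self.scorecard = {}            # thread id -> last reported progress
        self.satisfied = set()         # dependencies already resolved

    def report_progress(self, thread_id, stage):
        self.scorecard[thread_id] = stage

    def resolve(self, dependency):     # called by the producer thread
        self.satisfied.add(dependency)

    def query(self, dependency):       # called by a dependent thread on reaching it
        return dependency in self.satisfied

manager = ThreadManager()
manager.report_progress("producer-0", "stage-1")
print(manager.query("tile-0-output"))  # False: the dependent thread must wait
manager.resolve("tile-0-output")       # producer signals the dependency satisfied
print(manager.query("tile-0-output"))  # True: the dependent thread may proceed
```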
  • Patent number: 11132234
    Abstract: A method, a non-transitory computer-readable storage medium, and a computer system for managing the placement of virtual machines in a virtual machine network are disclosed. In an embodiment, a method involves determining if at least one virtual machine in a set of virtual machines supporting a process and running on a first host computer needs to be separated from other virtual machines in the set. If at least one virtual machine needs to be separated, then at least one virtual machine is selected to be separated based on the number of memory pages changed. The selected virtual machine is then separated from the other virtual machines in the set.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: September 28, 2021
    Assignee: VMware, Inc.
    Inventors: Kalyan Saladi, Ganesha Shanmuganathan
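    Illustrative sketch: one plausible reading (an assumption) of the selection step, picking the virtual machine with the fewest changed memory pages as the cheapest to live-migrate off the shared host.
```python
def select_vm_to_separate(dirty_pages_by_vm):
    """dirty_pages_by_vm: VM name -> number of memory pages changed recently."""
    return min(dirty_pages_by_vm, key=dirty_pages_by_vm.get)

vms_on_host = {"web-01": 18_000, "db-01": 450_000, "cache-01": 3_200}
print(select_vm_to_separate(vms_on_host))   # 'cache-01'
```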
  • Patent number: 11126462
    Abstract: Systems and methods are disclosed for scheduling code in a multiprocessor system. Code is portioned into code blocks by a compiler. The compiler schedules execution of the code blocks in nodes. The nodes are connected in a directed acyclic graph with a top node, a terminal node, and a plurality of intermediate nodes. Execution of the top node is initiated by the compiler. After executing at least one instance of the top node, an instruction in the code block indicates to the scheduler to initiate at least one intermediary node. The scheduler schedules a thread for execution of the intermediary node. The data for the nodes resides in a plurality of data buffers; the index to the data buffer is stored in a command buffer.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: September 21, 2021
    Assignee: Blaize, Inc.
    Inventors: Satyaki Koneru, Val G. Cook, Ke Yin
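    Illustrative sketch: three nodes of a directed acyclic graph whose code blocks push commands telling a scheduler which node to run next, each command carrying an index into a shared pool of data buffers. The node bodies and buffer layout are assumptions.
```python
from collections import deque

data_buffers = [None] * 8                       # shared pool of data buffers

def top_node(cmd_queue):
    data_buffers[0] = [1, 2, 3]                 # produce into buffer 0
    cmd_queue.append(("intermediate_node", 0))  # instruct the scheduler to run the next node

def intermediate_node(cmd_queue, buf_index):
    data_buffers[1] = [x * x for x in data_buffers[buf_index]]
    cmd_queue.append(("terminal_node", 1))

def terminal_node(cmd_queue, buf_index):
    print("result:", sum(data_buffers[buf_index]))

NODES = {"intermediate_node": intermediate_node, "terminal_node": terminal_node}

def scheduler():
    cmd_queue = deque()                         # the command buffer
    top_node(cmd_queue)                         # execution of the top node is initiated first
    while cmd_queue:
        node_name, buf_index = cmd_queue.popleft()
        NODES[node_name](cmd_queue, buf_index)

scheduler()                                     # prints "result: 14"
```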
  • Patent number: 11113086
    Abstract: According to one embodiment, a computing device comprises one or more hardware processors and a memory coupled to the one or more processors. The memory comprises software that supports a virtualization software architecture including a first virtual machine operating under control of a first operating system. Responsive to determining that the first operating system has been compromised, a second operating system, which is stored in the memory in an inactive (dormant) state, is activated and controls either the first virtual machine or a second virtual machine, different from the first virtual machine, which now provides external network connectivity.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: September 7, 2021
    Assignee: FireEye, Inc.
    Inventor: Udo Steinberg
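    Illustrative sketch: the failover idea only, with the dormant operating system image activated and given control (and network connectivity) once the first OS is judged compromised. The state machine and names are assumptions.
```python
class GuestEnvironment:
    def __init__(self):
        self.active_os = "primary-os"
        self.dormant_os = "standby-os"     # kept in memory, inactive
        self.network_owner = self.active_os

    def on_compromise_detected(self):
        # Hand control of the virtual machine to the dormant OS; the compromised
        # OS no longer owns external network connectivity.
        self.active_os, self.dormant_os = self.dormant_os, None
        self.network_owner = self.active_os
        return self.active_os

guest = GuestEnvironment()
print(guest.on_compromise_detected())      # 'standby-os' now controls the VM
```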
  • Patent number: 11106491
    Abstract: Systems and methods are provided for kernel routine callbacks. Such methods may include hooking a pre-callback handler and a post-callback handler to a pre-existing operating system of a computing device. According to the pre-callback handler, a kernel routine request for a kernel routine to be performed in a kernel mode of the operating system is obtained, whether to allow the kernel routine to be performed is determined, and the kernel routine is caused to be performed in the kernel mode to generate kernel routine results. According to the post-callback handler, whether to allow the kernel routine results of the kernel routine to be returned is determined, and the kernel routine results of the kernel routine are caused to be returned to an application that is executed in a non-kernel mode of the operating system.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: August 31, 2021
    Assignee: Beijing DIDi Infinity Technology and Development Co., Ltd.
    Inventor: Yu Wang
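    Illustrative sketch: a user-space analogy (an assumption, not the kernel hooking mechanism) in which a pre-callback decides whether the routine may run and a post-callback decides what results may be returned to the caller.
```python
def pre_callback(request):
    # Decide whether to allow the routine, e.g. block protected targets.
    return not request.get("target", "").startswith("/protected/")

def post_callback(request, result):
    # Decide what may be returned, e.g. redact an internal handle.
    return {key: value for key, value in result.items() if key != "internal_handle"}

def hooked(routine):
    def wrapper(request):
        if not pre_callback(request):
            raise PermissionError("routine denied by pre-callback handler")
        result = routine(request)
        return post_callback(request, result)
    return wrapper

@hooked
def open_file(request):                    # stands in for the kernel routine
    return {"fd": 7, "internal_handle": 0xDEADBEEF, "target": request["target"]}

print(open_file({"target": "/tmp/data"}))  # handle redacted by the post-callback
```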
  • Patent number: 11086874
    Abstract: Management of a virtual infrastructure via an object query language module is described. The virtual infrastructure includes one or more virtual machines, and one or more host machines communicatively coupled with the one or more virtual machines. The virtual infrastructure also includes a centralized management tool communicatively coupled with the one or more host machines. The object query language module fetches information from the one or more host machines and the one or more virtual machines. It further provides commands to the one or more host machines and the one or more virtual machines. In response to the fetch and command of the one or more host machines and the one or more virtual machines, a result of the fetch and command is displayed via a graphical user interface.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: August 10, 2021
    Assignee: VMware, Inc.
    Inventor: David Byard
  • Patent number: 11074112
    Abstract: Systems, methods, and software are disclosed herein for maintaining the responsiveness of a user interface to an application. In an implementation, a synchronous operation is commenced on a main thread of an application. The application monitors for a request by an additional thread to interrupt the synchronous operation in favor of an asynchronous operation. The synchronous operation is canceled in response to the request and is retried after completing the asynchronous operation.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: July 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Micah James Myerscough, Weide Zhong, Xiaohui Pan, Toshiharu Kawai, Emily Anne Schultz
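    Illustrative sketch: the described control flow with asyncio standing in for the application's main-thread scheduler; the long operation checks a cancellation flag, yields to the interrupting asynchronous operation, and is retried once it completes. Names and timings are assumptions.
```python
import asyncio

interrupt_requested = asyncio.Event()

async def synchronous_operation():
    for _ in range(5):                    # slices of the long operation
        if interrupt_requested.is_set():
            return False                  # canceled in favor of the async operation
        await asyncio.sleep(0.01)
    return True

async def asynchronous_operation():
    interrupt_requested.set()             # request that the main thread yield
    await asyncio.sleep(0.02)             # e.g. keep the user interface responsive
    interrupt_requested.clear()

async def main():
    first_try, _ = await asyncio.gather(synchronous_operation(),
                                        asynchronous_operation())
    if not first_try:                     # retried after the async work completes
        print("retried:", await synchronous_operation())

asyncio.run(main())                       # prints "retried: True"
```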
  • Patent number: 11048562
    Abstract: Techniques are disclosed relating to efficiently handling execution of multiple threads to perform various actions. In some embodiments, an application instantiates a queue and a synchronization primitive. The queue maintains a set of work items to be operated on by a thread pool maintained by a kernel. The synchronization primitive controls access to the queue by a plurality of threads including threads of the thread pool. In such an embodiment, a first thread of the application enqueues a work item in the queue and issues a system call to the kernel to request that the kernel dispatch a thread of the thread pool to operate on the work item. In various embodiments, the dispatched thread is executable to acquire the synchronization primitive, dequeue the work item, and operate on it.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: June 29, 2021
    Assignee: Apple Inc.
    Inventors: Daniel A. Steffen, Pierre Habouzit, Daniel A. Chimene, Jeremy C. Andrus, James M. Magee, Puja Gupta
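    Illustrative sketch: a rough user-space analogy (an assumption, not Apple's kernel interface) in which the application owns the queue and the synchronization primitive, enqueues a work item, and asks a separately managed thread pool to dispatch a worker that acquires the primitive, dequeues, and operates on the item.
```python
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

work_queue = deque()                   # queue instantiated by the application
queue_lock = threading.Lock()          # the synchronization primitive

def worker():                          # runs on a pool thread
    with queue_lock:                   # acquire the primitive
        item = work_queue.popleft()    # dequeue the work item
    print("operated on:", item())      # operate on it outside the lock

pool = ThreadPoolExecutor(max_workers=2)   # stands in for the kernel-managed pool

with queue_lock:                       # application thread enqueues a work item
    work_queue.append(lambda: 2 + 2)
pool.submit(worker).result()           # stands in for the dispatch system call
pool.shutdown()
```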
  • Patent number: 11036552
    Abstract: A method and an apparatus for allocating available resources in a cluster system with learning models and tuning methods are provided. The learning model may be trained from historic performance data of previously executed jobs and used to project a suggested amount of resources for execution of a job. The tuning process may suggest a configuration for the projected amount of resources in the cluster system for an optimal operating point. An optimization may be performed with respect to a set of objective functions to improve resource utilization and system performance while suggesting the configuration. Through many executions and job characterization, the learning/tuning process for suggesting the configuration for the projected amount of resources may be improved by understanding correlations between historic data and the objective functions.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: June 15, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: I-Hsin Chung, Paul G. Crumley, Bhuvana Ramabhadran, Weichung Wang, Huifang Wen
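    Illustrative sketch: a deliberately tiny stand-in for the learning/tuning flow, fitting a linear model to historic (job size, resources used) pairs, projecting a suggested amount for a new job, and scoring candidate configurations against a set of objective functions. The model, data, and objectives are assumptions.
```python
def fit_linear(history):
    """history: (job_size, node_hours_used) pairs; least-squares through the origin."""
    return sum(s * u for s, u in history) / sum(s * s for s, _ in history)

def suggest_resources(history, job_size):
    return fit_linear(history) * job_size

def tune_configuration(node_hours, candidates, objectives):
    # Keep the configuration that scores best against all objective functions.
    return max(candidates, key=lambda cfg: sum(obj(cfg, node_hours) for obj in objectives))

history = [(10, 42.0), (20, 81.0), (40, 165.0)]
projected = suggest_resources(history, job_size=25)
best = tune_configuration(
    projected,
    candidates=[{"nodes": 5, "threads": 8}, {"nodes": 10, "threads": 4}],
    objectives=[lambda cfg, h: -abs(cfg["nodes"] * 8 - h),   # utilization fit
                lambda cfg, h: -cfg["nodes"] * 0.1],         # cost penalty
)
print(round(projected, 1), best)       # 102.9 {'nodes': 10, 'threads': 4}
```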
  • Patent number: 11003475
    Abstract: Techniques promote monitoring of hypervisor systems by presenting dynamic representations of hypervisor architectures that include performance indicators. A reviewer can interact with the representation to progressively view select lower-level performance indicators. Higher-level performance indicators can be determined based on lower-level state assessments. A reviewer can also view historical performance metrics and indicators, which can aid in understanding which configuration changes or system usages may have led to sub-optimal performance.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: May 11, 2021
    Assignee: SPLUNK INC.
    Inventors: Brian Bingham, Tristan Fletcher
  • Patent number: 10963276
    Abstract: A control device can be used to control a base station, a switch, and a gateway leading to an external network. The device may communicate with a connection server connecting to a cloud computer system and with virtual functions of a control plane of the core network as instantiated in the computer system. The device may manage a database identifying for at least one terminal at least one of the virtual functions allocated to that terminal and a database associating at least one of the virtual functions with an identifier and a state of that function, and update the databases on the basis of information received from the connection server and/or from the virtual functions. The device may use one and/or the other of the databases in order to set up and/or maintain a user plane for a terminal between the base station, the switch, and the interconnection gateway.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: March 30, 2021
    Assignee: ORANGE
    Inventors: Malla Reddy Sama, Lucian Suciu, Amin Aflatoonian, Karine Guillouard