Patents Examined by Qing-Yuan Wu
-
Patent number: 12293210
Abstract: A native cloud-based VM in a cloud computing environment of choice is live-mounted without requiring nesting techniques. All operating system data needed to live-mount the VM is accessed over iSCSI. The disclosed technology: supports both Windows and Unix-like (e.g., Linux) operating systems; handles any number of partitions of the root file system without repeated mounting, exporting, and unmounting; and uses only internet protocols (e.g., HTTP, iSCSI) to live-mount the VM. The live-mounted VM gains access to a variety of secondary copies, e.g., made with file-level techniques or with block-level techniques, made within or outside the public cloud computing environment, made from other VMs having the same or a different hypervisor than the public cloud computing environment, and/or made from non-virtualized sources. Access to snapshots as a data source for the live-mounted VM is also disclosed. Thus, a streamlined and source-agnostic technology is disclosed for live-mounting VMs in a public cloud.
Type: Grant
Filed: March 18, 2022
Date of Patent: May 6, 2025
Assignee: Commvault Systems, Inc.
Inventor: Sanjay Kumar
-
Patent number: 12293231
Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
Type: Grant
Filed: September 10, 2021
Date of Patent: May 6, 2025
Assignee: Intel Corporation
Inventors: Chenmin Sun, Yipeng Wang, Rahul R. Shah, Ren Wang, Sameh Gobriel, Hongjun Ni, Mrittika Ganguli, Edwin Verplanke
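The two-stage split the abstract describes can be sketched as follows. This is a hedged illustration, not the patent's implementation: the flow key, the per-window threshold, and the use of plain Python lists as stand-ins for the load balancer's queues are all assumptions.

```python
from collections import defaultdict

HEAVY_THRESHOLD = 100  # packets per measurement window (assumed value)

class HeavyFlowDetector:
    """First processing stage: counts per-flow packets and flags heavy flows."""
    def __init__(self, threshold=HEAVY_THRESHOLD):
        self.threshold = threshold
        self.counts = defaultdict(int)

    def observe(self, flow_key):
        """Count one packet; return True once the flow has become heavy."""
        self.counts[flow_key] += 1
        return self.counts[flow_key] >= self.threshold

def dispatch(packets, detector, heavy_queue, normal_queue):
    """Route packet references: heavy-flow packets go to a queue consumed
    by the second set of processing units, the rest stay on the normal path."""
    for pkt in packets:
        if detector.observe(pkt["flow"]):
            heavy_queue.append(pkt)
        else:
            normal_queue.append(pkt)
```

In the patented design the second stage's workers are then chosen by the load balancer based on the heavy flow's receive rate; here that step is elided.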
-
Patent number: 12293238
Abstract: An embedded system running method includes: allocating, according to a resource dynamic allocation rule, a group of services to be allocated to corresponding operating systems in an embedded system, wherein the embedded system includes a first operating system and a second operating system, and a response speed of the first operating system is higher than a response speed of the second operating system; determining resource allocation results corresponding to the group of services to be allocated, where the resource allocation results are used for indicating, among processing resources of the processor, a processing resource corresponding to each of the group of services to be allocated; and allocating the processing resources of a processor to the first operating system and the second operating system according to an operating system allocation result and the resource allocation result corresponding to each of the group of services to be allocated.
Type: Grant
Filed: April 28, 2023
Date of Patent: May 6, 2025
Assignee: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Endong Wang, Jiaming Huang, Baoyang Liu, Chaofan Chen, Wenkai Ma
-
Patent number: 12282803
Abstract: A computer system includes a transceiver that receives over a data communications network different types of input data from multiple source nodes and a processing system that defines for each of multiple data categories, a set of groups of data objects for the data category based on the different types of input data. Predictive machine learning model(s) predict a selection score for each group of data objects in the set of groups of data objects for the data category for a predetermined time period. Control machine learning model(s) determine how many data objects are permitted for each group of data objects based on the selection score. Decision-making machine learning model(s) prioritize the permitted data objects based on one or more predetermined priority criteria. Subsequent activities of the computer system are monitored to calculate performance metrics for each group of data objects and for data objects actually selected during the predetermined time period.
Type: Grant
Filed: January 30, 2024
Date of Patent: April 22, 2025
Assignee: Nasdaq, Inc.
Inventors: Shihui Chen, Keon Shik Kim, Douglas Hamilton
-
Patent number: 12261476
Abstract: A method for balancing a transfer function, the transfer function being operable in a reservoir controller to determine a required normalized compensatory charge from a plant to the reservoir during a second time period based on a measured normalized contingency discharge from the reservoir to the plant during a first time period, has the steps: a) read in, to a computer system, a normalized distribution of normalized discharges for a plurality of reference plants in the size class of the plant; and b) automatically adjust, by a digital processor, one or more parameters of the transfer function such that an integral of the product of the transfer function and the normalized distribution of normalized discharges of the plurality of reference plants is about 1. The first time period occurs before the second time period.
Type: Grant
Filed: October 8, 2021
Date of Patent: March 25, 2025
Assignee: Applied Underwriters, Inc.
Inventors: Justin N. Smith, Mark S. Nowotarski
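Step (b) above is a normalization constraint: tune the transfer function so its expectation under the reference distribution equals 1. A minimal numerical sketch, assuming a one-parameter linear transfer function T(x; k) = k·x and a simple midpoint discretization (neither of which is specified by the abstract):

```python
def balance_scale(xs, p, dx):
    """Choose k so that the discretized integral of T(x; k) = k * x times
    the normalized discharge distribution p(x) equals 1.

    xs: bin midpoints; p: distribution values at those midpoints; dx: bin width.
    """
    raw = sum(x * px * dx for x, px in zip(xs, p))  # integral of x * p(x) dx
    return 1.0 / raw
```

For a uniform distribution on [0, 2] (p = 0.5 everywhere), the mean discharge is 1, so the balanced scale comes out to k = 1: the transfer function needs no adjustment.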
-
Patent number: 12260241
Abstract: A virtualized computing environment includes a plurality of host computers, each host being connected to a physical network and having a hypervisor executing therein. To provision a virtual machine requiring a connection to a virtual network in one of the hosts, a candidate host for hosting the virtual machine, the candidate host having the virtual network configured therein, is selected. A request is then made for a status of the virtual network to the candidate host. The status of the virtual network is then received from the candidate host. If the virtual network is available, then the virtual machine is deployed to the candidate host. If the virtual network is not available, then a second candidate host is selected for hosting the virtual machine.
Type: Grant
Filed: October 25, 2021
Date of Patent: March 25, 2025
Assignee: VMware LLC
Inventors: Chi-Hsiang Su, Sachin Thakkar
-
Patent number: 12254354
Abstract: A method for conserving resources in a distributed system includes receiving an event-criteria list from a resource controller. The event-criteria list includes one or more events watched by the resource controller and the resource controller controls at least one target resource and is configured to respond to events from the event-criteria list that occur. The method also includes determining whether the resource controller is idle. When the resource controller is idle, the method includes terminating the resource controller, determining whether any event from the event-criteria list occurs after terminating the resource controller, and, when at least one event from the event-criteria list occurs after terminating the resource controller, recreating the resource controller.
Type: Grant
Filed: February 5, 2024
Date of Patent: March 18, 2025
Assignee: Google LLC
Inventors: Justin Santa Barbara, Timothe Hockin, Robert Bailey, Jeffrey Johnson
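The terminate-and-recreate lifecycle described above can be sketched in a few lines. This is an illustrative model only: the class name, the in-memory event set, and the boolean running flag are assumptions standing in for the distributed system's actual watch machinery.

```python
class ControllerManager:
    """Models the lifecycle: terminate an idle controller, then recreate
    it when any event from its event-criteria list occurs."""

    def __init__(self, watched_events):
        self.watched = set(watched_events)  # the event-criteria list
        self.running = True

    def on_idle(self):
        """Terminate the controller once it is determined to be idle."""
        self.running = False

    def on_event(self, event):
        """Recreate the controller if a watched event arrives while it is
        terminated; unrelated events are ignored. Returns the running state."""
        if not self.running and event in self.watched:
            self.running = True
        return self.running
```

The payoff of the pattern is that an idle controller consumes no resources between its watched events, at the cost of a recreation delay on the first matching event.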
-
Patent number: 12242881
Abstract: In an approach to intelligent connection placement across multiple logical ports, a mapping table for a virtual machine is created. Responsive to determining that an entry exists in the mapping table for the port on the peer device, whether a virtual function exists for the port on the peer device in the mapping table for the same physical function is determined. A virtual function is selected from the mapping table to connect the local port to the port on the peer device.
Type: Grant
Filed: October 5, 2023
Date of Patent: March 4, 2025
Assignee: International Business Machines Corporation
Inventors: Vishal Mansur, Sivakumar Krishnasamy, Niranjan Srinivasan
-
Patent number: 12210913
Abstract: Systems and methods are described for the chained execution of a set of code in an on-demand network code execution system. A user may provide a set of code for execution in the on-demand network code execution system and the system may determine that the set of code comprises multiple chained tasks. The system may provide the set of code to a first virtual machine instance for execution of a first task. The system may obtain an indication that the first task has been executed. The results of the execution of the first task may be sent to a second virtual machine instance, via a push or pull, for execution of a second task. Based on identifying that the first task has been executed, the system may instruct the second virtual machine instance to execute the second task.
Type: Grant
Filed: September 30, 2021
Date of Patent: January 28, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Christopher Kakovitch, Rajesh Kumar Pandey, Arijit Ganguly, Luben Karavelov
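The push-then-instruct flow above can be sketched as follows, with plain objects standing in for virtual machine instances. The `Worker` class, its `push`/`execute` split, and the sequential driver loop are assumptions for illustration, not the system's API.

```python
class Worker:
    """Stand-in for a virtual machine instance in the execution system."""

    def __init__(self):
        self.inbox = None

    def push(self, value):
        """Receive the previous task's result (the 'push' delivery path)."""
        self.inbox = value

    def execute(self, task):
        """Run the task when instructed, consuming the pushed input."""
        return task(self.inbox)

def run_chain(tasks, initial, workers):
    """Chain execution: push each task's result to the next worker,
    then instruct that worker to execute its task."""
    value = initial
    for task, worker in zip(tasks, workers):
        worker.push(value)
        value = worker.execute(task)
    return value
```

A two-task chain, e.g. increment then double, threads its intermediate result from one worker to the next exactly as the abstract describes for two VM instances.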
-
Patent number: 12197960
Abstract: Systems and methods are described for execution of multiple tasks associated with a set of code in an on-demand network code execution system. A user may provide a set of code that is associated with the multiple tasks. The system may generate a first virtual machine instance for execution of a first task. The system may determine that a second task is associated with the first task and may identify a location of the first virtual machine instance. The system may further identify a second virtual machine instance for execution of the second task based on the location of the first virtual machine instance. For example, the system may identify the first virtual machine instance from a plurality of pre-generated virtual machine instances and/or may generate the first virtual machine instance.
Type: Grant
Filed: September 30, 2021
Date of Patent: January 14, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Christopher Kakovitch, Rajesh Kumar Pandey, Arijit Ganguly, Luben Karavelov
-
Patent number: 12182616
Abstract: A platform health engine for autonomous self-healing in platforms served by an Infrastructure Processing Unit (IPU), including: an analysis processor configured to apply analytics to telemetry data received from a telemetry agent of a monitored platform managed by the IPU, and to generate relevant platform health data; a prediction processor configured to predict, based on the relevant platform health data, a future health status of the monitored platform; and a dispatch processor configured to dispatch a workload of the monitored platform to another managed platform if the predicted future health status of the monitored platform is failure.
Type: Grant
Filed: September 24, 2021
Date of Patent: December 31, 2024
Assignee: Intel Corporation
Inventors: Susanne M. Balle, Yamini Nimmagadda, Olugbemisola Oniyinde
-
Patent number: 12175229
Abstract: An information handling system may receive a request from a particular remote cluster regarding a cluster scaling event; receive first information from a plurality of other remote clusters indicative of a success or a failure of a corresponding cluster expansion event that was performed at such other remote clusters; receive second information from the plurality of other remote clusters indicative of scores for such other remote clusters in a plurality of metrics; determine, based on the first and second information, a ranking of the metrics based on their criticality to the cluster scaling event; receive third information from the particular remote cluster indicative of scores for the particular remote cluster in the plurality of metrics; and determine a likelihood of success for the cluster scaling event based on the determined ranking of the metrics and the scores for the particular remote cluster in the plurality of metrics.
Type: Grant
Filed: December 8, 2021
Date of Patent: December 24, 2024
Assignee: Dell Products L.P.
Inventors: Jim Lewei Ji, Tianming Zhang, Edward Guohua Ding
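The last two steps, ranking metrics by criticality and scoring the candidate cluster against that ranking, can be sketched as a weighted average. The criticality weights here are given directly rather than derived from the other clusters' success/failure history, so this is a simplified assumption, not the patent's method.

```python
def rank_metrics(criticality):
    """Order metric names most-critical first by their criticality weight."""
    return sorted(criticality, key=criticality.get, reverse=True)

def likelihood_of_success(scores, criticality):
    """Weight the candidate cluster's per-metric scores (each in [0, 1])
    by metric criticality to estimate a likelihood of success."""
    total = sum(criticality.values())
    return sum(scores[m] * w for m, w in criticality.items()) / total
```

A cluster that scores well only on low-criticality metrics thus receives a low likelihood, matching the intuition that the ranking should dominate the estimate.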
-
Patent number: 12175285
Abstract: An integrated circuit for distributing processing tasks includes a pre-selector circuit and a scheduler circuit. The pre-selector circuit is configured to receive a processing task, determine a category of the processing task, and select, from a set of task distribution techniques and based at least in part on the category of the processing task, a task distribution technique for distributing the processing task to a group of processing units. The scheduler circuit is configured to implement the selected task distribution technique to select, from the group of processing units, a target processing unit for performing the processing task.
Type: Grant
Filed: June 30, 2021
Date of Patent: December 24, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Nitzan Zisman, Said Bshara, Erez Izenberg, Avigdor Segal, Jonathan Cohen, Anna Rom-Saksonov, Leah Shalev, Shadi Ammouri
-
Patent number: 12164956
Abstract: Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to implement thread scheduling for multithreaded data processing environments are disclosed. Example thread schedulers disclosed herein for a data processing system include a buffer manager to determine availability of respective buffers to be acquired for respective processing threads implementing respective functional nodes of a processing flow, and to identify first ones of the processing threads as stalled due to unavailability of at least one buffer in the respective buffers to be acquired for the first ones of the processing threads. Disclosed example thread schedulers also include a thread execution manager to initiate execution of second ones of the processing threads that are not identified as stalled.
Type: Grant
Filed: June 16, 2021
Date of Patent: December 10, 2024
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kedar Chitnis, Mihir Narendra Mody, Jesse Gregory Villarreal, Jr., Lucas Carl Weaver, Brijesh Jadav, Niraj Nandan
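The buffer-manager rule above reduces to a simple predicate: a thread is stalled if any buffer it must acquire is unavailable, and only non-stalled threads are started. The sketch below assumes sets of buffer names as the bookkeeping structure, which is an illustrative simplification.

```python
def stalled(required_buffers, available):
    """Buffer-manager check: a thread stalls when at least one of the
    buffers it must acquire is not currently available."""
    return any(b not in available for b in required_buffers)

def runnable_threads(threads, available):
    """Thread-execution-manager step: from a mapping of thread name to
    its set of required buffers, return the threads eligible to run."""
    return [name for name, bufs in threads.items() if not stalled(bufs, available)]
```

So in a processing flow where node A needs only buffer b1 and node B needs b1 and b2, making only b1 available lets A run while B stays stalled.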
-
Patent number: 12159167
Abstract: A method for dynamically assigning an inference request is disclosed. A method for dynamically assigning an inference request may include determining at least one model to process an inference request on a plurality of computing platforms, the plurality of computing platforms including at least one Central Processing Unit (CPU) and at least one Graphics Processing Unit (GPU), obtaining, with at least one processor, profile information of the at least one model, the profile information including measured characteristics of the at least one model, dynamically determining a selected computing platform from between the at least one CPU and the at least one GPU for responding to the inference request based on an optimized objective associated with a status of the computing platform and the profile information, and routing, with at least one processor, the inference request to the selected computing platform. A system and computer program product are also disclosed.
Type: Grant
Filed: June 29, 2023
Date of Patent: December 3, 2024
Assignee: Visa International Service Association
Inventors: Hao Yang, Biswajit Das, Yu Gu, Peter Walker, Igor Karpenko, Robert Brian Christensen
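A minimal sketch of the routing decision, assuming the model's profile information is a per-platform latency measurement and the platform status is a queue depth. The linear cost model combining the two is an assumption for illustration; the patent's objective function is not specified in the abstract.

```python
def route(profile, status):
    """Pick CPU or GPU for an inference request by comparing profiled
    latency scaled by current load on each platform.

    profile: {"cpu_ms": float, "gpu_ms": float}  (measured model latency)
    status:  {"cpu_queue": int, "gpu_queue": int}  (current queue depth)
    """
    cpu_cost = profile["cpu_ms"] * (1 + status["cpu_queue"])
    gpu_cost = profile["gpu_ms"] * (1 + status["gpu_queue"])
    return "cpu" if cpu_cost <= gpu_cost else "gpu"
```

The dynamic part is visible in the second argument: the same model profile can route to the GPU when it is idle and fall back to the CPU when the GPU queue grows.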
-
Patent number: 12153949
Abstract: Methods and apparatuses are described for provisioning and managing data orchestration platforms in a cloud computing environment. A server provisions in a first region a first data orchestration platform comprising (i) a first data transformation instance, (ii) first endpoints, and (iii) a first data integration instance. The server provisions in a second region a second data orchestration platform comprising (i) a second data transformation instance, (ii) second endpoints, and (iii) a second data integration instance. The server integrates the first data integration instance and the second data integration instance with an identity authentication service. The server monitors operational status of the first orchestration platform and the second orchestration platform using a monitoring service. The server refreshes virtual computing resources in each of the first orchestration platform and the second orchestration platform using a rehydration service.
Type: Grant
Filed: September 6, 2023
Date of Patent: November 26, 2024
Assignee: FMR LLC
Inventors: Terence Doherty, Saurabh Singh, Aniruththan Somu Duraisamy, Digvijay Narayan Singh, Avinash Mysore Geethananda, Aravind Ganesan
-
Patent number: 12141613
Abstract: An apparatus, related devices and methods, having a memory element operable to store instructions; and a processor operable to execute the instructions, such that the apparatus is configured to monitor software and hardware parameters on an electronic device that includes an application designated as a preferred application; determine whether the preferred application is running; detect a change in software or hardware parameters that indicates to reallocate resources to the preferred application; and apply, based on detecting the change in software or hardware parameters, an optimization policy that reallocates resources to a process associated with the preferred application.
Type: Grant
Filed: March 30, 2021
Date of Patent: November 12, 2024
Assignee: McAfee, LLC
Inventors: Raghavendra Satyanarayana Hebbalalu, Raja Sinha, Dattatraya Kulkarni, Partha Sarathi Barik, Srikanth Nalluri, Siddaraya B. Revashetti
-
Patent number: 12131178
Abstract: Systems, apparatuses, and methods related to a hypervisor status register in a computer processor are described. For example, a memory coupled to the computer processor can store instructions of routines of predefined, non-hierarchical domains. The computer processor can store a value in the hypervisor status register during a power up process of the computer system. The value stored in the hypervisor status register identifies whether or not an operating hypervisor is present in the computer system. The computer processor can configure its operations (e.g., address translation) based on the value stored in the hypervisor status register.
Type: Grant
Filed: November 11, 2022
Date of Patent: October 29, 2024
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
-
Patent number: 12118389
Abstract: Systems and methods for proportional maintenance of complex computing systems. By using proportional maintenance (e.g., recommending/allocating resources based on current usage in the computing system), the systems and methods may scale current resources (e.g., hardware and/or software components) based on how those resources are currently utilized.
Type: Grant
Filed: May 14, 2024
Date of Patent: October 15, 2024
Assignee: Citibank, N.A.
Inventors: Deepali Tuteja, Girish Wali, Prasanth Babu Madakasira Ramakrishna
-
Patent number: 12112214
Abstract: The present disclosure relates to systems, methods, and computer readable media for predicting expansion failures and implementing defragmentation instructions based on the predicted expansion failures and other signals. For example, systems disclosed herein may apply a failure prediction model to determine an expansion failure prediction associated with an estimated likelihood that deployment failures will occur on a node cluster. The systems disclosed herein may further generate defragmentation instructions indicating a severity level that a defragmentation engine may execute on a cluster level to prevent expansion failures while minimizing negative customer impacts. By uniquely generating defragmentation instructions for each node cluster, a cloud computing system can minimize expansion failures, increase resource capacity, reduce costs, and provide access to reliable services to customers.
Type: Grant
Filed: July 19, 2023
Date of Patent: October 8, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shandan Zhou, Saurabh Agarwal, Karthikeyan Subramanian, Thomas Moscibroda, Paul Naveen Selvaraj, Sandeep Ramji, Sorin Iftimie, Nisarg Sheth, Wanghai Gu, Ajay Mani, Si Qin, Yong Xu, Qingwei Lin
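The prediction-to-instruction step above can be sketched as a mapping from the model's estimated failure likelihood to a defragmentation severity level. The thresholds, level names, and attached actions are all assumptions for illustration; the abstract does not enumerate the actual levels.

```python
def defrag_severity(failure_probability):
    """Map a predicted expansion-failure likelihood (0.0-1.0) to a
    per-cluster defragmentation severity level (assumed scale)."""
    if failure_probability < 0.2:
        return "none"    # healthy cluster: no defragmentation needed
    if failure_probability < 0.5:
        return "low"     # opportunistic migration only
    if failure_probability < 0.8:
        return "medium"  # migrate idle workloads off fragmented nodes
    return "high"        # aggressive consolidation before capacity runs out
```

Escalating severity with predicted risk reflects the trade-off the abstract names: higher severity frees more contiguous capacity but carries more customer impact, so it is reserved for clusters most likely to fail expansion.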