Resource Allocation Patents (Class 718/104)
-
Patent number: 12293217
Abstract: A load balancing method for use in conjunction with an application or service provided by a distributed computing system may begin by electing, from a group of participants, a leader for each of a plurality of tasks associated with the application or service. Responsive to detecting a signal or some other indication to run a particular task, the elected leader of the particular task may delegate responsibility to run the particular task to a particular participant. The particular participant, upon subsequently discovering that responsibility for the particular task has been delegated to it, responds by running the particular task. In some embodiments, the elected leader for a task may delegate responsibility for running the task to a least-loaded participant.
Type: Grant
Filed: April 19, 2022
Date of Patent: May 6, 2025
Assignee: Dell Products L.P.
Inventors: Pan Xiao, Xuhui Yang
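As an illustration of the delegation step described above, here is a minimal Python sketch (not the patented implementation): a per-task leader that, on a run signal, hands the task to the least-loaded participant. The `Participant` and `TaskLeader` classes and the simple load counter are assumptions made for the example; leader election itself is elided.

```python
class Participant:
    def __init__(self, name):
        self.name = name
        self.load = 0  # number of tasks currently running (illustrative load metric)

    def run(self, task):
        self.load += 1
        print(f"{self.name} running {task} (load now {self.load})")

class TaskLeader:
    """Elected leader for one task; it delegates execution rather than running the task itself."""
    def __init__(self, task, participants):
        self.task = task
        self.participants = participants

    def on_run_signal(self):
        # Delegate to the least-loaded participant, per the abstract's example embodiment.
        target = min(self.participants, key=lambda p: p.load)
        target.run(self.task)

participants = [Participant(f"node-{i}") for i in range(3)]
leaders = {t: TaskLeader(t, participants) for t in ("task-a", "task-b", "task-c")}
for leader in leaders.values():
    leader.on_run_signal()
```

In a real deployment the load metric and the "participant later discovers the delegation" step would be distributed mechanisms rather than in-process method calls.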
-
Patent number: 12293237
Abstract: Apparatuses, methods and storage medium for computing including determination of work placement on processor cores are disclosed herein. In embodiments, an apparatus may include one or more processors, devices, and/or circuitry to identify a favored core of the processor cores. The one or more processors, devices, and/or circuitry may be configured to determine whether to migrate a thread to or from the favored core. In some embodiments, the determination may be by a process executed by a driver and/or by an algorithm executed by a power control unit of the processor.
Type: Grant
Filed: December 19, 2023
Date of Patent: May 6, 2025
Assignee: Intel Corporation
Inventors: Guy M. Therien, Michael D. Powell, Venkatesh Ramani, Arijit Biswas, Guy G. Sotomayor
-
Patent number: 12287756
Abstract: A systolic array cell is described, the cell including two general-purpose arithmetic logic units (ALUs) and a register file. A plurality of the cells may be configured in a matrix or array, such that the output of the first ALU in a first cell is provided to a second cell to the right of the first cell, and the output of the second ALU in the first cell is provided to a third cell below the first cell. The two ALUs in each cell of the array allow for processing of a different instruction in each cycle.
Type: Grant
Filed: October 4, 2023
Date of Patent: April 29, 2025
Assignee: GOOGLE LLC
Inventors: Reginald Clifford Young, Trevor Gale, Sushma Honnavara-Prasad, Paolo Mantovani
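The routing described in this abstract (the first ALU's output goes to the neighbor on the right, the second ALU's output goes to the neighbor below) can be pictured with a toy Python model. The add/multiply operations, the four-entry register file, and the 2x2 grid are assumptions for illustration only, not details from the patent.

```python
class Cell:
    """Toy systolic-array cell: two ALUs and a small register file.

    ALU 0's result is forwarded to the cell on the right; ALU 1's result is
    forwarded to the cell below, mirroring the routing in the abstract.
    """
    def __init__(self):
        self.regs = [0] * 4          # illustrative register file
        self.right = None            # neighbor to the right
        self.below = None            # neighbor below

    def cycle(self, a, b):
        alu0 = a + b                 # ALU 0 executes one instruction this cycle
        alu1 = a * b                 # ALU 1 can execute a different instruction
        self.regs[0], self.regs[1] = alu0, alu1
        if self.right is not None:
            self.right.recv_from_left(alu0)
        if self.below is not None:
            self.below.recv_from_above(alu1)

    def recv_from_left(self, value):
        self.regs[2] = value

    def recv_from_above(self, value):
        self.regs[3] = value

# Wire a 2x2 array and run one cycle on the top-left cell.
grid = [[Cell() for _ in range(2)] for _ in range(2)]
grid[0][0].right = grid[0][1]
grid[0][0].below = grid[1][0]
grid[0][0].cycle(3, 5)
print(grid[0][1].regs[2], grid[1][0].regs[3])   # 8 went right, 15 went down
```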
-
Patent number: 12287867
Abstract: A method for operating a computing device for a control unit of a motor vehicle. The computing device includes a processor core and is configured to control an exchange of data between a connectivity zone and a security zone. The security zone includes at least one component which is necessary to drive the vehicle and has an elevated relevance with regard to safety. The connectivity zone includes at least one component whose operation requires communication outside of the vehicle but is not required to drive the vehicle and does not have an elevated relevance with regard to safety. At least one first program executable by the computing device is assigned to a non-trustworthy zone, and at least one further program is assigned to a trustworthy zone. The component of the connectivity zone is assigned to the non-trustworthy zone, and the component of the security zone is assigned to the trustworthy zone.
Type: Grant
Filed: December 16, 2020
Date of Patent: April 29, 2025
Assignee: ROBERT BOSCH GMBH
Inventors: Manuel Jauss, Mustafa Kartal, Razvan Florin Aguridan, Roland Steffen
-
Patent number: 12282530
Abstract: Systems and methods for secure resource management are provided. A secure resource management system includes a resource record repository, such as a secure database or a blockchain, for storing resource records for resources. The resource records contain information of resource providers, information of resource users having a right to obtain resources, and resource transaction histories. Responsive to a request to verify an authorized user of a resource, the secure resource management system further queries the resource record repository, retrieves the resource record, determines the resource user currently having a right to obtain the resource as the authorized user of the resource, and transmits the verification result in response to the request. The verification result identifies the authorized user of the resource and can be used to grant access to the resource by the authorized user.
Type: Grant
Filed: July 9, 2020
Date of Patent: April 22, 2025
Assignee: EQUIFAX INC.
Inventors: Rajkumar Bondugula, Michael McBurnett
-
Patent number: 12282517
Abstract: Disclosed is a memory system using a heterogeneous data format, which provides a personalized recommendation algorithm function to an internet service user based on a plurality of items. The system includes a per-item user preference analyzer that calculates a user preference value corresponding to each item of an analysis target service, and a memory that stores data related to each item either in a first data format or in a second data format that requires fewer bits than the first data format, based on the user preference value of that item.
Type: Grant
Filed: December 26, 2022
Date of Patent: April 22, 2025
Assignee: UIF (University Industry Foundation), Yonsei University
Inventors: Won Woo Ro, Chanyoung Yoo, Hongju Kal
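A rough sketch of the idea, assuming the two formats are float32 and float16 and that a simple threshold on the preference value decides between them; the patent does not specify either choice.

```python
import numpy as np

PREFERENCE_THRESHOLD = 0.5   # illustrative cutoff, not from the patent

def store_item_vector(preference: float, vector) -> np.ndarray:
    """Keep an item's data in a wide format when the user preference is high,
    otherwise in a narrower format that needs fewer bits (float32 vs float16 here)."""
    dtype = np.float32 if preference >= PREFERENCE_THRESHOLD else np.float16
    return np.asarray(vector, dtype=dtype)

hot_item = store_item_vector(0.9, [0.12, 0.34, 0.56])
cold_item = store_item_vector(0.1, [0.12, 0.34, 0.56])
print(hot_item.dtype, hot_item.nbytes)    # float32, 12 bytes
print(cold_item.dtype, cold_item.nbytes)  # float16, 6 bytes
```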
-
Patent number: 12282789
Abstract: Embodiments are directed to using remote pods. An intermediary software is instantiated in a worker node virtual machine and is used to cause a pod virtual machine to be created, the pod virtual machine being remote from the worker node virtual machine. An overlay network is established between the intermediary software in the worker node virtual machine and a pod space in the pod virtual machine. The overlay network is used to cause containers to be created in the pod virtual machine, where the worker node virtual machine is configured to use the overlay network to manage communications with the pod virtual machine.
Type: Grant
Filed: September 7, 2021
Date of Patent: April 22, 2025
Assignee: International Business Machines Corporation
Inventors: Qi Feng Huo, Xiaojing Liu, Dan Qing Huang, Lei Li, Da Li Liu, Yuan Yuan Wang, Yan Song Liu
-
Patent number: 12277447
Abstract: Systems, methods, and non-transitory, machine-readable media may facilitate adaptive resource capacity prediction and control using cloud infrastructures. Specifications of resource allocations for resources provided by a cloud infrastructure system may be collected. Execution of a series of sets of parallel microservices may be caused. Each set may be a function of a particular type of resource data and may facilitate obtaining resource metrics data corresponding to the particular type. The series of sets may facilitate obtaining resource metrics data mapped to the resources provided by the cloud infrastructure system. Prediction rules may be selected as a function of particular resource metrics. The selected prediction rules may be used to predict resource capacities for a subset of the resources as a function of the particular resource metrics and generate resource capacity predictions. Preemptive actions with respect to incidents identified based on the resource capacity predictions may be facilitated.
Type: Grant
Filed: July 26, 2024
Date of Patent: April 15, 2025
Assignee: THE HUNTINGTON NATIONAL BANK
Inventor: Matthew Bates
-
Patent number: 12276950
Abstract: Aspects of the disclosure relate to an intelligent resource evaluation engine. A computing platform may monitor a plurality of RPA machines to detect parameter information. The computing platform may store the parameter information along with the corresponding RPA machines as key-value pairs in a database. The computing platform may identify first current parameter information for a first RPA machine using the key-value pairs. The computing platform may input the first current parameter information into an intelligent resource evaluation model, which may output first machine selection information for the first RPA machine. Based on identifying that the first RPA machine is sufficient to execute the first robotic automation process, the computing platform may direct the first RPA machine to execute the first robotic automation process.
Type: Grant
Filed: June 24, 2022
Date of Patent: April 15, 2025
Assignee: Bank of America Corporation
Inventors: Sudhakar Balu, Sri Lakshmi Priya Doraiswamy, Siva Kumar Paini, Nagalaxmi Sama, Sathya Thamilarasan
-
Patent number: 12277104
Abstract: The present disclosure relates to a method of authorizing a change in cloud infrastructure performed by an apparatus, including the operations of: changing the assets requested by a development team or a management team using infrastructure as code (IaC); hooking and holding changes of assets; collecting cloud infrastructure information from a cloud environment through an application programming interface (API); visualizing the cloud infrastructure information; visualizing the asset changes; reporting the visualized cloud infrastructure information and the visualized asset changes to a manager via an authorization process; returning or approving the asset changes in the authorization process by the manager; and storing the information of the assets requested by the development team or the management team and the information of the authorization process in a database.
Type: Grant
Filed: October 24, 2022
Date of Patent: April 15, 2025
Assignee: TATUM Inc.
Inventor: Su Hyun Park
-
Patent number: 12271272
Abstract: A method for performing a backup operation, the method comprising receiving a backup operation request for an asset, identifying a queue comprising a plurality of slices, wherein each slice references a separate portion of the asset, sending a first backup request to a proxy manager to instantiate a container for each of a plurality of backup sessions, wherein each backup session corresponds to a slice of the plurality of slices, receiving, from the proxy manager, a notification that one of the backup sessions is complete and a corresponding container has been torn down, making a second determination that there is an additional slice on a second queue associated with a second asset, and sending, based on the second determination, a backup request to the proxy manager to instantiate a new container for the additional slice associated with the second asset.
Type: Grant
Filed: September 22, 2023
Date of Patent: April 8, 2025
Assignee: Dell Products L.P.
Inventors: Upanshu Singhal, Shelesh Chopra, Ashish Kumar
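To make the slice/queue/container flow easier to follow, here is a hedged Python sketch; the `ProxyManager` stand-in, the two-session limit, and the in-process queues are invented for illustration and are not the claimed implementation.

```python
from collections import deque

class ProxyManager:
    """Stand-in for the proxy manager: 'instantiates' a container per backup session."""
    def start_session(self, slice_id):
        print(f"container up: backing up {slice_id}")
        return slice_id

    def tear_down(self, slice_id):
        print(f"container torn down for {slice_id}")

def run_backup(queues, proxy, max_sessions=2):
    """Drain per-asset slice queues, keeping at most max_sessions containers alive."""
    active = deque()
    pending = deque(queues)                 # queues of slices, one per asset
    while pending or active:
        # Fill free session slots from whichever asset queue still has slices.
        while len(active) < max_sessions and pending:
            q = pending[0]
            if not q:
                pending.popleft()
                continue
            active.append(proxy.start_session(q.popleft()))
        if active:
            done = active.popleft()         # notification: a session completed
            proxy.tear_down(done)

asset1 = deque(["asset1/slice0", "asset1/slice1"])
asset2 = deque(["asset2/slice0"])
run_backup([asset1, asset2], ProxyManager())
```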
-
Patent number: 12271596
Abstract: Techniques for performing effective noise removal for biased machine learning (ML) based optimizations in storage systems. The techniques include serving, by a storage system, an IO workload, identifying, using ML from among a plurality of storage objects subject to the IO workload, storage objects with low temperatures (e.g., cold storage objects) or likely to have low temperatures in the near future, and removing them from subsequent temperature forecasting analysis, effectively treating such cold storage objects as “noise.” The techniques further include performing the temperature forecasting analysis on remaining ones of the plurality of storage objects such as those with high temperatures (e.g., hot storage objects). In this way, temperature forecasting or prediction is performed, using ML, in a biased fashion over a relatively narrow spectrum of storage object temperatures, thereby improving tiering and data prefetching performance, reducing memory and processing overhead, and so on.
Type: Grant
Filed: August 7, 2023
Date of Patent: April 8, 2025
Assignee: Dell Products L.P.
Inventors: Shaul Dar, Ramakanth Kanagovi, Guhesh Swaminathan, Rajan Kumar
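A minimal sketch of the "treat cold objects as noise" step, assuming a mean-temperature threshold in place of the ML classifier and a crude linear trend in place of the real forecaster:

```python
def forecast_hot_objects(history, cold_threshold=5.0):
    """Drop 'cold' storage objects (treated as noise) before forecasting temperatures.

    history maps object id -> list of recent temperature samples (e.g. IO rates).
    A real system would use an ML classifier; a mean threshold stands in here.
    """
    hot = {oid: t for oid, t in history.items()
           if sum(t) / len(t) >= cold_threshold}
    forecasts = {}
    for oid, temps in hot.items():
        trend = temps[-1] - temps[0]            # crude linear trend as the "forecast"
        forecasts[oid] = temps[-1] + trend / max(len(temps) - 1, 1)
    return forecasts

history = {
    "obj-a": [20, 24, 30],     # hot: kept and forecast
    "obj-b": [0, 1, 0],        # cold: removed as noise
}
print(forecast_hot_objects(history))   # only obj-a gets a forecast
```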
-
Patent number: 12271822
Abstract: A method for active learning includes obtaining a set of unlabeled training samples and for each unlabeled training sample, perturbing the unlabeled training sample to generate an augmented training sample. The method includes generating, using a machine learning model, a predicted label for both samples and determining an inconsistency value for the unlabeled training sample that represents variance between the predicted labels for the unlabeled and augmented training samples. The method includes sorting the unlabeled training samples based on the inconsistency values and obtaining, for a threshold number of samples selected from the sorted unlabeled training samples, a ground truth label. The method includes selecting a current set of labeled training samples including each selected unlabeled training sample paired with the corresponding ground truth label. The method includes training, using the current set and a proper subset of unlabeled training samples, the machine learning model.
Type: Grant
Filed: August 21, 2020
Date of Patent: April 8, 2025
Assignee: GOOGLE LLC
Inventors: Zizhao Zhang, Tomas Jon Pfister, Sercan Omer Arik, Mingfei Gao
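The selection loop can be sketched in a few lines of Python. The linear scorer, Gaussian perturbation, and squared difference used as the inconsistency value are stand-ins chosen for the example, not the method's actual components.

```python
import math
import random

def predict(model_w, x):
    """Toy 'model': probability from a fixed linear scorer."""
    s = sum(w * xi for w, xi in zip(model_w, x))
    return 1.0 / (1.0 + math.exp(-s))

def select_for_labeling(model_w, unlabeled, k, noise=0.1):
    """Rank unlabeled samples by prediction inconsistency under perturbation."""
    scored = []
    for x in unlabeled:
        x_aug = [xi + random.gauss(0, noise) for xi in x]   # perturbed copy
        p, p_aug = predict(model_w, x), predict(model_w, x_aug)
        inconsistency = (p - p_aug) ** 2                    # variance proxy
        scored.append((inconsistency, x))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [x for _, x in scored[:k]]                       # send these for ground-truth labels

random.seed(0)
pool = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
print(select_for_labeling([0.5, -0.2, 0.8], pool, k=3))
```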
-
Patent number: 12265849
Abstract: The current document is directed to a resource-exchange system that facilitates resource exchange and sharing among computing facilities. The currently disclosed methods and systems employ efficient, distributed-search methods and subsystems within distributed computer systems that include large numbers of geographically distributed data centers to locate resource-provider computing facilities that match the resource needs of resource-consumer computing facilities based on attribute values associated with the needed resources, the resource providers, and the resource consumers. Nested-hypervisor technology is employed, in disclosed implementations, to guarantee data security for, and prevent monitoring of operational states and characteristics of, resource-consumer virtual machines and virtual applications while they execute above leased computational resources in remote computing facilities.
Type: Grant
Filed: May 16, 2018
Date of Patent: April 1, 2025
Assignee: VMWare LLC
Inventors: Daniel James Beveridge, Ricky Trigalo, Joerg Lew
-
Patent number: 12265597
Abstract: A system and method for managing resources of a processor is disclosed. In an illustrative embodiment, the method includes accepting a command to execute an application at least in part by the processor, executing the application using the processor, monitoring execution parameters characterizing the execution of the application by the processor, and storing the monitored execution parameters in a memory accessible to the processor. In one example, the execution parameters include an identifier of the application and a time at which the application begins execution.
Type: Grant
Filed: January 25, 2021
Date of Patent: April 1, 2025
Assignee: ARRIS ENTERPRISES LLC
Inventors: Santosh Basavaraj Budni, Vinod Jatti, Nithin Raj Kuyyar Ravindranath, Kiran Tovinkere Srinivasan
-
Patent number: 12260266
Abstract: A system and method of balancing data storage among a plurality of groups of computing devices, each group comprising one or more respective computing devices. The method may involve determining a compute utilization disparity between the group having a highest level of compute utilization and the group having a lowest level of compute utilization, determining a transfer of one or more projects between the plurality of groups of computing devices that reduces the compute utilization disparity, and directing the plurality of groups of computing devices to execute the determined transfer.
Type: Grant
Filed: March 10, 2022
Date of Patent: March 25, 2025
Assignee: Google LLC
Inventors: Alan Pearson, Yaou Wei
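A small sketch of the disparity-reducing transfer, assuming compute utilization is just the sum of per-project compute units and that a single project is moved per rebalance step:

```python
def rebalance(groups):
    """Move one project from the most- to the least-utilized group if it narrows the gap.

    groups maps group name -> list of (project, compute_units); utilization is the sum.
    """
    util = {g: sum(c for _, c in projects) for g, projects in groups.items()}
    high = max(util, key=util.get)
    low = min(util, key=util.get)
    disparity = util[high] - util[low]
    # Pick the project whose transfer best reduces the disparity without overshooting.
    best = min(groups[high], key=lambda p: abs(disparity - 2 * p[1]))
    if abs(disparity - 2 * best[1]) < disparity:
        groups[high].remove(best)
        groups[low].append(best)
    return groups

groups = {"rack-1": [("p1", 40), ("p2", 25)], "rack-2": [("p3", 10)]}
print(rebalance(groups))   # p2 moves to rack-2, shrinking the utilization gap
```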
-
Patent number: 12260255
Abstract: Intelligent process management is provided. A start time is determined for an additional process to be run on a worker node within a duration of a sleep state of a task of a process already running on the worker node by adding a first defined buffer time to a determined start time of the sleep state of the task. A backfill time is determined for the additional process by subtracting a second defined buffer time from a determined end time of the sleep state of the task. A scheduling plan is generated for the additional process based on the start time and the backfill time corresponding to the additional process. The scheduling plan is executed to run the additional process on the worker node according to the start time and the backfill time corresponding to the additional process.
Type: Grant
Filed: September 29, 2022
Date of Patent: March 25, 2025
Assignee: International Business Machines Corporation
Inventors: Jing Jing Wei, Yue Wang, Shu Jun Tang, Yang Kang, Yi Fan Wu, Qi Han Zheng, Jia Lin Wang
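The start-time and backfill-time arithmetic is simple enough to show directly; the buffer values below are arbitrary placeholders rather than values from the patent.

```python
def plan_backfill(sleep_start, sleep_end, buffer_before=2.0, buffer_after=2.0):
    """Schedule an additional process inside a running task's sleep window.

    Start time = sleep start + first buffer; backfill (latest finish) time
    = sleep end - second buffer, mirroring the arithmetic in the abstract.
    """
    start_time = sleep_start + buffer_before
    backfill_time = sleep_end - buffer_after
    if backfill_time <= start_time:
        return None                      # window too small once buffers are applied
    return {"start": start_time, "backfill": backfill_time,
            "budget": backfill_time - start_time}

print(plan_backfill(sleep_start=100.0, sleep_end=160.0))
# {'start': 102.0, 'backfill': 158.0, 'budget': 56.0}
```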
-
Patent number: 12260253
Abstract: A machine learning accelerator (MLA) implements a machine learning network (MLN) by using data transfer instructions that coordinate concurrent data transfers between processing elements. A compiler receives a description of a machine learning network and generates the computer program that implements the MLN. The computer program contains instructions that will be run on PEs of the MLA. The PEs are connected by data transfer paths that are known to the compiler. The computations performed by the PEs may require data stored at other PEs. The compiler coordinates the data transfers to avoid conflicts and increase parallelism.
Type: Grant
Filed: January 23, 2023
Date of Patent: March 25, 2025
Assignee: SiMa Technologies, Inc.
Inventor: Gwangho Kim
-
Patent number: 12255818
Abstract: Provided herein are various enhancements for low downtime migration of virtualized software services across different instantiations, which may include real-time migration over different cloud/server providers and platforms, physical locations, network locations, and across different network elements. Examples herein include handling of migration of data and state for virtualized software services, migration of ingress and egress traffic for the software services, and migration of other various operations aspects applicable to virtualized software services. In many instances, a client node retains the same network addressing used to reach the virtualized software services even as the virtualized software services move to different network locations and physical locations.
Type: Grant
Filed: October 25, 2024
Date of Patent: March 18, 2025
Assignee: Loophole Labs, Inc.
Inventors: Shivansh Vij, Alex Sørlie
-
Patent number: 12248818
Abstract: The present application discloses a method, system, and computer system for starting up and maintaining a cluster in a warmed up state, and/or allocating clusters from a warmed up state. The method includes instantiating a set of virtual machines, wherein instantiating the set of virtual machines includes setting a temporary security credential for each virtual machine of the set of virtual machines, receiving a virtual machine allocation request associated with a workspace, a customer, or a tenant, and, in response to the virtual machine allocation request, allocating a virtual machine, wherein allocating the virtual machine comprises replacing the temporary security credential with a security credential associated with the workspace, the customer, or the tenant.
Type: Grant
Filed: October 29, 2021
Date of Patent: March 11, 2025
Assignee: Databricks, Inc.
Inventors: Yandong Mao, Aaron Daniel Davidson
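A hedged sketch of the warm-pool idea: VMs sit idle with throwaway credentials, and allocation amounts to swapping in the requester's credential. The pool structure and credential format are invented for the example.

```python
import secrets
from collections import deque

class WarmPool:
    """Keep pre-instantiated VMs with temporary credentials; swap in the tenant's
    credential only at allocation time."""
    def __init__(self, size):
        self.idle = deque(
            {"vm_id": f"vm-{i}", "credential": "tmp-" + secrets.token_hex(4)}
            for i in range(size)
        )

    def allocate(self, tenant_credential):
        if not self.idle:
            raise RuntimeError("warm pool exhausted; fall back to a cold start")
        vm = self.idle.popleft()
        vm["credential"] = tenant_credential   # replace the temporary credential
        return vm

pool = WarmPool(size=2)
print(pool.allocate("workspace-42-secret"))
```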
-
Patent number: 12244571
Abstract: A method, system, and computer program product are disclosed. The method includes generating simulation instances and particle site identifiers from a particle-based simulation. The method also includes providing an order metric for the particle-based simulation. Information is embedded in the particle-based simulation by mapping local order values of the order metric to characters of an input message.
Type: Grant
Filed: April 7, 2022
Date of Patent: March 4, 2025
Assignee: International Business Machines Corporation
Inventors: Fausto Martelli, Malgorzata Jadwiga Zimon
-
Patent number: 12242511
Abstract: A method and apparatus for managing a set of storage resources for a set of queries is described. In an exemplary embodiment, a method provisions processing resources of an execution platform and provisions storage resources of a storage platform. The execution platform uses the storage platform, which is shared with the execution platform, to process the set of queries. The method changes a number of the storage resources provisioned for the storage platform based on a storage capacity utilization by the set of queries of the storage resources. The method changes the number of the storage resources independently of a change to the processing resources in the execution platform. The method processes the set of queries using the changed number of the storage resources provisioned for the storage platform.
Type: Grant
Filed: February 7, 2023
Date of Patent: March 4, 2025
Assignee: Snowflake Inc.
Inventors: Benoit Dageville, Thierry Cruanes, Marcin Zukowski
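A sketch of scaling storage resources from capacity utilization alone, independent of compute sizing; the unit size and watermarks are assumptions, not values from the patent.

```python
def rescale_storage(provisioned_units, used_bytes, unit_bytes=10 * 2**30,
                    high_water=0.8, low_water=0.3):
    """Grow or shrink storage units from capacity utilization alone; the compute
    (execution platform) sizing is intentionally not an input here."""
    utilization = used_bytes / (provisioned_units * unit_bytes)
    if utilization > high_water:
        return provisioned_units + 1
    if utilization < low_water and provisioned_units > 1:
        return provisioned_units - 1
    return provisioned_units

units = 4
units = rescale_storage(units, used_bytes=35 * 2**30)   # 35 GiB of 40 GiB -> scale up
print(units)   # 5
```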
-
Patent number: 12235774
Abstract: Devices and techniques for parking threads in a barrel processor for managing cache eviction requests are described herein. A barrel processor includes eviction circuitry and is configured to perform operations to: (a) detect a thread that includes a memory access operation, the thread entering a memory request pipeline of the barrel processor; (b) determine that a data cache line has to be evicted from a data cache for the thread to perform the memory access operation; (c) copy the thread into a park queue; (d) evict a data cache line from the data cache; (e) identify an empty cycle in the memory request pipeline; (f) schedule the thread to execute during the empty cycle; and (g) remove the thread from the park queue.
Type: Grant
Filed: February 16, 2024
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventor: Christopher Baronne
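The park-and-reschedule flow can be approximated with ordinary queues; this toy model compresses eviction, parking, and empty-cycle rescheduling into a single loop and is not meant to reflect the actual pipeline stages.

```python
from collections import deque

def run_pipeline(threads, cache, cache_capacity=2):
    """Toy model of the eviction flow: a thread whose line misses in a full cache is
    parked, a line is evicted, and the thread is rescheduled into an empty cycle."""
    pipeline = deque(threads)        # (thread id, cache line it wants)
    park_queue = deque()
    cycle = 0
    while pipeline or park_queue:
        cycle += 1
        if pipeline:
            tid, line = pipeline.popleft()
            if line in cache or len(cache) < cache_capacity:
                cache.add(line)
                print(f"cycle {cycle}: {tid} accesses {line}")
            else:
                park_queue.append((tid, line))   # copy thread into the park queue
                cache.pop()                      # evict a line to make room
                print(f"cycle {cycle}: {tid} parked, line evicted")
        elif park_queue:                         # empty cycle: reschedule a parked thread
            tid, line = park_queue.popleft()
            cache.add(line)
            print(f"cycle {cycle}: {tid} resumed, accesses {line}")

run_pipeline([("t0", "A"), ("t1", "B"), ("t2", "C")], cache=set())
```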
-
Patent number: 12236262
Abstract: Features are extracted and/or derived from a software package (e.g., a binary executable, etc.) and input into a machine learning model to determine an estimated peak memory usage required to analyze the software package. A number of memory resource units required for the determined peak memory usage is then determined. If the number of available memory resource units is less than the determined number of required memory resource units, then the software package can be queued in a backoff queue. The determined number of required memory units to analyze the software package can be allocated when a number of available memory resource units equals or exceeds the determined number of required memory resource units (whether or not the software package has been queued). The software package can then be analyzed using the allocated memory units. Information characterizing this analysis can be provided to a consuming application or process.
Type: Grant
Filed: October 2, 2024
Date of Patent: February 25, 2025
Assignee: Binarly Inc
Inventors: Alexander Matrosov, Sam Lloyd Thomas, Yegor Vasilenko, Lukas Seidel
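A sketch of the admission decision, with a made-up linear formula standing in for the trained model and a fixed memory-unit size:

```python
from collections import deque

UNIT_MB = 512                         # illustrative memory resource unit size

def estimate_required_units(features):
    """Stand-in for the ML model: predict peak memory (MB) from package features."""
    predicted_peak_mb = 200 + 3 * features["num_sections"] + 0.5 * features["size_kb"]
    return -(-int(predicted_peak_mb) // UNIT_MB)        # ceiling division into units

def schedule(package_features, available_units, backoff_queue):
    needed = estimate_required_units(package_features)
    if needed > available_units:
        backoff_queue.append(package_features)          # retry when units free up
        return None
    return needed                                       # units to allocate for analysis

backoff = deque()
print(schedule({"num_sections": 40, "size_kb": 900}, available_units=1, backoff_queue=backoff))
print(list(backoff))   # the package waits in the backoff queue
```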
-
Patent number: 12236272
Abstract: Resource access control modules that are part of an operating system kernel and data structures visible in both user space and kernel space provide for user space-based configuration of computing system resource limits, accounting of resource usage, and enforcement of resource usage limits. Computing system resource limits can be set on an application, customer, or other basis, and usage limits can be placed on various system resources, such as files, ports, I/O devices, memory, and processing unit bandwidth. Resource usage accounting and resource limit enforcement can be implemented without the use of in-kernel control groups. The resource access control modules can be extended Berkeley Packet Filter (eBPF) Linux Security Module (LSM) programs linked to LSM hooks in the Linux operating system kernel.
Type: Grant
Filed: November 11, 2021
Date of Patent: February 25, 2025
Assignee: Intel Corporation
Inventor: Mikko Ylinen
-
Patent number: 12236290
Abstract: The disclosure relates to a method, apparatus and device for sharing microservice application data. The method includes: managing, through data registration management, memory data registration information that is to be loaded by microservice application clusters; determining, according to the memory data registration information, memory data that are required by the microservice application clusters; partitioning and distributing the memory data to a plurality of memory computation service nodes in the microservice application clusters, and deploying the plurality of memory computation service nodes into a corresponding microservice application cluster at a proximal end; and loading the memory data in a preset manner in the plurality of memory computation service nodes, and sharing the corresponding memory computation service node in real time when the memory data change.
Type: Grant
Filed: May 16, 2024
Date of Patent: February 25, 2025
Assignee: INSPUR GENERSOFT CO., LTD.
Inventors: Daisen Wei, Weibo Zheng, Yucheng Li, Xiangguo Zhou, Lixin Sun
-
Patent number: 12229596
Abstract: A method of storing electronic data performed by a terminal apparatus communicable with an information processing terminal is provided. The method includes: receiving, during a use of a first resource, a notification indicating that reservation of a second resource selected by a user is completed, from the information processing terminal; and in response to receiving the notification indicating that the reservation of the second resource is completed, starting a storing process of storing electronic data output by an electronic device during the use of the first resource.
Type: Grant
Filed: July 1, 2021
Date of Patent: February 18, 2025
Assignee: Ricoh Company, Ltd.
Inventor: Ken Norota
-
Patent number: 12232061
Abstract: Certain aspects of the present disclosure provide techniques for sidelink synchronization in a network. A method that may be performed by a remote user equipment (UE) includes determining at least one synchronization priority associated with synchronization signals for synchronizing to a network, determining relay capability information associated with multiple relay UEs, selecting one relay UE of the multiple relay UEs, based on the synchronization priority and relay capability, and synchronizing to the network using at least one synchronization signal received from the selected one relay UE.
Type: Grant
Filed: August 20, 2021
Date of Patent: February 18, 2025
Assignee: QUALCOMM Incorporated
Inventors: Kaidong Wang, Jelena Damnjanovic, Sony Akkarakaran, Junyi Li, Tao Luo
-
Patent number: 12229588
Abstract: Migrating workloads to a preferred environment, including: predicting, for each of a plurality of environments, a performance load on each of a plurality of environments that would result from placing one or more of a plurality of workloads on the environment; determining a preferred environment for each of the plurality of workloads by determining a placement of each of the plurality of workloads that results in a best fit for the plurality of workloads; and deploying each of the plurality of workloads in the corresponding preferred environment.
Type: Grant
Filed: November 30, 2021
Date of Patent: February 18, 2025
Assignee: PURE STORAGE
Inventors: Robert Barker, Jr., Farhan Abrol
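For a handful of workloads the "best fit" placement can be found by brute force; the load model, capacities, and the choice of flattest peak load as the fitness criterion are assumptions made for this sketch.

```python
from itertools import product

def best_fit_placement(workloads, environments, capacity, predicted_load):
    """Brute-force the placement minimizing the worst-case predicted load per environment.

    predicted_load(workload, env) models the load a workload would add to an environment;
    this exhaustive search is only illustrative and assumes a handful of workloads.
    """
    best, best_score = None, float("inf")
    for combo in product(environments, repeat=len(workloads)):
        totals = {env: 0.0 for env in environments}
        for wl, env in zip(workloads, combo):
            totals[env] += predicted_load(wl, env)
        if all(totals[e] <= capacity[e] for e in environments):
            score = max(totals.values())               # "best fit" = flattest peak load here
            if score < best_score:
                best, best_score = dict(zip(workloads, combo)), score
    return best

load = lambda wl, env: {"onprem": 1.0, "cloud": 1.3}[env] * {"db": 40, "web": 15, "batch": 25}[wl]
print(best_fit_placement(["db", "web", "batch"], ["onprem", "cloud"],
                         capacity={"onprem": 60, "cloud": 80}, predicted_load=load))
```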
-
Patent number: 12229045
Abstract: In some examples, a sensor service receives an indication of interest from a client for sensor data of a first sensor of the plurality of sensors, and allocates buffers in the memory for the plurality of sensors. The sensor service provides a first buffer to a sensor connector that is to receive the sensor data from the first sensor, and receives, from the sensor connector, an indication that the first buffer in the memory has been written with the sensor data from the first sensor. Based on the indication of interest from the client, the sensor service notifies the client that the first buffer is available for reading by the client from the memory.
Type: Grant
Filed: September 25, 2023
Date of Patent: February 18, 2025
Assignee: BlackBerry Limited
Inventors: Michael Jonathan Mueller, Noel Dylan Dillabough
-
Patent number: 12216628
Abstract: A system to identify optimal cloud resources for executing workloads. The system deduplicates historical client queries based on a workload selection configuration to determine a grouping of historical client queries. The system generates a workload based on at least a portion of the grouping of historical client queries. The system repeatedly executes a test run of the workload using resources of a cloud environment to determine whether there is a performance difference in the test run. The system, in response to determining that there is no performance difference, identifies one or more sets of decreased resources of the cloud environment. The system re-executes the test run using the one or more sets of decreased resources of the cloud environment to determine whether there is a performance difference in the test run that is attributed to the one or more sets of decreased resources of the cloud environment.
Type: Grant
Filed: September 20, 2023
Date of Patent: February 4, 2025
Assignee: Snowflake Inc.
Inventors: Allison Lee, Shrainik Jain, Qiuye Jin, Stratis Viglas, Jiaqi Yan
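The resource-shrinking loop amounts to: re-run the test workload on progressively smaller resource sets and stop when a performance difference appears. A sketch, with a tolerance band standing in for the patent's notion of a performance difference:

```python
def right_size(run_workload, candidate_sizes, baseline_ms, tolerance=0.05):
    """Walk down a list of decreasing resource sizes, re-running the test workload,
    and keep the smallest size whose runtime stays within tolerance of the baseline."""
    chosen = candidate_sizes[0]
    for size in candidate_sizes[1:]:                 # e.g. ["xl", "l", "m", "s"]
        runtime = run_workload(size)
        if runtime > baseline_ms * (1 + tolerance):  # performance difference detected
            break
        chosen = size
    return chosen

# Fake test harness: smaller resource sets get slower; "m" is the last acceptable size.
timings = {"xl": 100.0, "l": 102.0, "m": 104.0, "s": 140.0}
print(right_size(timings.get, ["xl", "l", "m", "s"], baseline_ms=100.0))   # -> "m"
```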
-
Patent number: 12217090
Abstract: An algorithm execution management system of a provider network may receive a request from a user for executing an algorithm using different types of computing resources, including classical computing resources and quantum computing resources. The request may indicate a container that includes the algorithm code and dependencies such as libraries for executing the algorithm. The algorithm execution management system may first determine that the quantum computing resources are available to execute the algorithm, and then cause the classical computing resources to be provisioned. The algorithm execution management system may cause at least one portion of the algorithm to be executed at the classical computing resources using the container indicated by the user, and at least another portion of the algorithm to be executed at the quantum computing resources. The quantum task of the algorithm may be provided a priority during execution of the algorithm for using the quantum computing resources.
Type: Grant
Filed: November 12, 2021
Date of Patent: February 4, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Milan Krneta, Eric M Kessler, Christian Bruun Madsen
-
Patent number: 12217086
Abstract: Techniques are disclosed for chain schedule management for machine learning model-based processing in a computing environment. For example, a method receives a machine learning model-based request and determines a scheduling decision for execution of the machine learning model-based request. Determination of the scheduling decision comprises utilizing a set of one or more scheduling algorithms and comparing results of at least a portion of the set of one or more scheduling algorithms to identify execution environments of a computing environment in which the machine learning model-based request is to be executed. The identified execution environments may then be managed to execute the machine learning model-based request.
Type: Grant
Filed: February 25, 2022
Date of Patent: February 4, 2025
Assignee: Dell Products L.P.
Inventor: Victor Fong
-
Patent number: 12217838
Abstract: There are provided a method and an apparatus for distributing physical examination information, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: obtaining physical examination information and information of a plurality of distribution objects; inputting the physical examination information and the information of the plurality of distribution objects into an information matching model obtained by pre-training to obtain a matching degree between the physical examination information and the plurality of distribution objects; and determining a target distribution object from the plurality of distribution objects according to the matching degree between the physical examination information and each of the plurality of distribution objects, and distributing the physical examination information to the target distribution object.
Type: Grant
Filed: December 25, 2020
Date of Patent: February 4, 2025
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Zhenzhong Zhang
-
Patent number: 12212547
Abstract: Embodiments of the present disclosure provide a method, a system and a non-transitory computer-readable medium to securely pass a message. The method includes executing, by a processing device, a floating persistent volumes service (FPVS) to allocate and attach a persistent volume (PV) to a first node in a mesh network to pass a payload in the PV to the first node; and sending a first message to the first node to inform the first node to read data from the payload in the PV.
Type: Grant
Filed: January 21, 2022
Date of Patent: January 28, 2025
Assignee: Red Hat, Inc.
Inventors: Leigh Griffin, Pierre-Yves Chibon
-
Patent number: 12204948
Abstract: A database entry may be stored in a container in a database table corresponding with a partition key. The partition key may be determined by applying one or more partition rules to one or more data values associated with the database entry. The database entry may be an instance of one of a plurality of data object definitions associated with database entries in the database. Each of the data object definitions may identify a respective one or more data fields included within an instance of the data object definition.
Type: Grant
Filed: September 8, 2023
Date of Patent: January 21, 2025
Assignee: Salesforce, Inc.
Inventor: Rohitashva Mathur
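Determining a partition key by applying ordered rules to an entry's data values might look like the following; the rule predicates and container names are hypothetical and not drawn from the patent.

```python
def partition_key(entry, rules):
    """Apply ordered partition rules to an entry's data values; the first matching rule wins."""
    for predicate, key in rules:
        if predicate(entry):
            return key
    return "default"

# Hypothetical rules for a partitioned table (not the actual scheme).
rules = [
    (lambda e: e.get("region") == "eu", "eu-container"),
    (lambda e: e.get("account_tier") == "enterprise", "enterprise-container"),
]

tables = {}
for entry in [{"id": 1, "region": "eu"}, {"id": 2, "account_tier": "enterprise"}, {"id": 3}]:
    tables.setdefault(partition_key(entry, rules), []).append(entry)
print(tables)
```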
-
Patent number: 12197453
Abstract: A method for performing a parallelized heapsort operation may include updating, by a first worker thread, a first buffer while a second worker thread updates a second buffer in parallel. The first worker thread may update the first buffer by adding, to the first buffer, elements from a first partition of a dataset. The second worker thread may update the second buffer by adding, to the second buffer, elements from a second partition of the dataset. Upon the first buffer reaching a threshold size, the first worker thread may acquire a lock for the first worker thread to update a heap based on the first buffer while the second worker thread is prevented from updating the heap based on the second buffer. A result of a top k query comprising a k quantity of smallest elements from the dataset may be generated based on the heap.
Type: Grant
Filed: August 22, 2023
Date of Patent: January 14, 2025
Assignee: SAP SE
Inventors: Alexander Gellner, Paul Willems
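A compact sketch of the buffered, lock-guarded heap update; the buffer limit, thread count, and use of a negated max-heap for the k smallest elements are implementation choices for the example rather than details from the patent.

```python
import heapq
import threading

def top_k_smallest(dataset, k, num_threads=4, buffer_limit=64):
    """Per-thread buffers are flushed into a shared size-k max-heap under a lock,
    so threads scan their partitions in parallel but serialize only on the heap."""
    heap, lock = [], threading.Lock()     # heap stores negated values => max-heap of the k smallest

    def flush(buffer):
        with lock:                        # only one worker updates the heap at a time
            for v in buffer:
                if len(heap) < k:
                    heapq.heappush(heap, -v)
                elif -v > heap[0]:
                    heapq.heapreplace(heap, -v)
        buffer.clear()

    def worker(partition):
        buffer = []
        for v in partition:
            buffer.append(v)
            if len(buffer) >= buffer_limit:
                flush(buffer)             # buffer reached the threshold size: take the lock
        flush(buffer)

    partitions = [dataset[i::num_threads] for i in range(num_threads)]
    threads = [threading.Thread(target=worker, args=(p,)) for p in partitions]
    for t in threads: t.start()
    for t in threads: t.join()
    return sorted(-v for v in heap)

print(top_k_smallest(list(range(1000, 0, -1)), k=5))   # [1, 2, 3, 4, 5]
```

The buffering is the point of the design: workers amortize lock acquisition over many elements instead of contending on the heap for every element.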
-
Patent number: 12197952
Abstract: The disclosure relates to a method and apparatus including setting a memory swap size limit, the limit being lower than a memory swap size that defines the maximum size of the part of the memory resources used for swap; obtaining a score for at least one running program, a high score corresponding to a low priority level; and obtaining monitoring information representative of a monitored activity of the program during a time period and of a learnt user's habit of use of the program, including the number of times the program gained the focus within the time period. The disclosure also includes deriving a score delta from this information, with a decrement applied to the score delta at each focus gained by the program; adjusting the score by adding the delta; and terminating execution when the memory swap size limit is reached and the adjusted score reaches a memory swap size limit threshold.
Type: Grant
Filed: October 13, 2020
Date of Patent: January 14, 2025
Assignee: Thomson Licensing
Inventors: Bruno Le Garjan, Sebastien Crunchant, Thierry Quere
-
Patent number: 12197953
Abstract: Apparatuses, systems, methods, and program products are disclosed for techniques for distributed computing and storage. An apparatus includes a processor and a memory that includes code that is executable to receive a request to perform a processing task, transmit at least a portion of the processing task to a plurality of user node devices, receive results of the at least a portion of the processing task from at least one of the plurality of user node devices, and transmit the received results.
Type: Grant
Filed: February 8, 2024
Date of Patent: January 14, 2025
Assignee: ASEARIS DATA SYSTEMS, INC.
Inventors: Erich Pletsch, Matt Morris
-
Patent number: 12189625
Abstract: A multi-cluster computing system which includes a query result caching system is presented. The multi-cluster computing system may include a data processing service and client devices communicatively coupled over a network. The data processing service may include a control layer and a data layer. The control layer may be configured to receive and process requests from the client devices and manage resources in the data layer. The data layer may be configured to include instances of clusters of computing resources for executing jobs. The data layer may include a data storage system, which further includes a remote query result cache store. The query result cache store may include a cloud storage query result cache which stores data associated with results of previously executed requests. As such, when a cluster encounters a previously executed request, the cluster may efficiently retrieve the cached result of the request from the in-memory query result cache or the cloud storage query result cache.
Type: Grant
Filed: July 14, 2023
Date of Patent: January 7, 2025
Assignee: Databricks, Inc.
Inventors: Bogdan Ionut Ghit, Saksham Garg, Christian Stuart, Christopher Stevens
-
Patent number: 12190157
Abstract: Systems, methods, and apparatuses relating to circuitry to implement scalable port-binding for asymmetric execution ports and allocation widths of a processor are described.
Type: Grant
Filed: September 26, 2020
Date of Patent: January 7, 2025
Assignee: Intel Corporation
Inventors: Daeho Seo, Vikash Agarwal, John Esper, Khary Alexander, Asavari Paranjape, Jonathan Combs
-
Patent number: 12190154
Abstract: Controlling allocation of resources in network function virtualization. Data defining a pool of available physical resources is maintained. Data defining one or more resource allocation rules is identified. An application request is received. Physical resources from the pool are allocated to virtual resources to implement the application request, on the basis of the maintained data, the identified data and the received application request.
Type: Grant
Filed: December 17, 2023
Date of Patent: January 7, 2025
Assignee: SUSE LLC
Inventors: Ignacio Aldama, Ruben Sevilla Giron, Javier Garcia-Lopez
-
Patent number: 12192051
Abstract: Some embodiments of the invention provide a method for implementing an edge device that handles data traffic between a logical network and an external network. The method monitors resource usage of a node pool that includes multiple nodes that each executes a respective set of pods. Each of the pods is for performing a respective set of data message processing operations for at least one of multiple logical routers. The method determines that a particular node in the node pool has insufficient resources for the particular node's respective set of pods to adequately perform their respective sets of data message processing operations. Based on the determination, the method automatically provides additional resources to the node pool by instantiating at least one additional node in the node pool.
Type: Grant
Filed: July 23, 2021
Date of Patent: January 7, 2025
Assignee: VMware LLC
Inventors: Yong Wang, Cheng-Chun Tu, Sreeram Kumar Ravinoothala, Yu Ying
-
Patent number: 12189621
Abstract: A system for enhanced data pre-aggregation is provided. In one embodiment, a method is provided that includes receiving data formatted in a key/subkey format and distributing a data batch of the data to a plurality of processing threads. Each processing thread performs operations of: performing a first pass on the data batch to determine subkey rollup data; performing a second pass on the data batch to determine key rollup data; and storing the subkey rollup data and the key rollup data into data blocks. The method also includes outputting the data blocks to form a pre-aggregated data cube.
Type: Grant
Filed: February 9, 2023
Date of Patent: January 7, 2025
Assignee: Planful, Inc.
Inventors: Tarun Adupa, Abdul Hamed Mohammed, Sanjay Vyas
-
Patent number: 12182746
Abstract: A task scheduling system that can be used to improve task assignment for multiple satellites, and thereby improve resource allocation in the execution of a task. In some implementations, configuration data for one or more satellites is obtained. Multiple objectives corresponding to a task to be performed using the satellites, and resource parameters associated with executing the task to be performed using the satellites are identified. A score for each objective included in the multiple objectives is computed by the terrestrial scheduler based on the resource parameters and the configuration data for the one or more satellites. The multiple objectives are assigned to one or more of the satellites. Instructions are provided to the one or more satellites that cause the one or more satellites to execute the task according to the assignment of the objectives to the one or more satellites.
Type: Grant
Filed: June 26, 2023
Date of Patent: December 31, 2024
Assignee: HawkEye 360, Inc.
Inventors: T. Charles Clancy, Robert W. McGwier, Timothy James O'Shea, Nicholas Aaron McCarthy
-
Patent number: 12182045
Abstract: A semiconductor device capable of preventing a sharp variation in current consumption in neural network processing is provided. A dummy circuit outputs dummy data to at least one or more of n number of MAC circuits and causes the at least one or more of n number of MAC circuits to perform a dummy calculation and to output dummy output data. An output-side DMA controller transfers pieces of normal output data from the n number of MAC circuits to a memory, by use of n number of channels, respectively, and does not transfer the dummy output data to the memory. In this semiconductor device, the at least one or more of n number of MAC circuits perform the dummy calculation in a period from a timing at which the output-side DMA controller ends data transfer to the memory to a timing at which the input-side DMA controller starts data transfer from the memory.
Type: Grant
Filed: January 10, 2023
Date of Patent: December 31, 2024
Assignee: RENESAS ELECTRONICS CORPORATION
Inventors: Kazuaki Terashima, Atsushi Nakamura, Rajesh Ghimire
-
Patent number: 12182625
Abstract: An apparatus can include a control board operatively coupled to a modular compute board and to a resource board by (1) a first connection associated with control information and not data, and (2) a second connection associated with data and not control information. The control board can determine a computation load and a physical resource requirement for a time period. The control board can send, to the modular compute board and via the first connection, a signal indicating an allocation of that modular compute board during the time period. The control board can send, from the control board to the resource board, a signal indicating an allocation of that resource board to the modular compute board such that that resource board allocates at least a portion of its resources during the time period based on at least one of the computation load or the physical resource requirement.
Type: Grant
Filed: May 12, 2023
Date of Patent: December 31, 2024
Assignee: Management Services Group, Inc.
Inventors: Thomas Scott Morgan, Steven Yates
-
Patent number: 12182618
Abstract: In one embodiment, a processor includes a power controller having a resource allocation circuit. The resource allocation circuit may: receive a power budget for a first core and at least one second core and scale the power budget based at least in part on at least one energy performance preference value to determine a scaled power budget; determine a first maximum operating point for the first core and a second maximum operating point for the at least one second core based at least in part on the scaled power budget; determine a first efficiency value for the first core based at least in part on the first maximum operating point for the first core and a second efficiency value for the at least one second core based at least in part on the second maximum operating point for the at least one second core; and report a hardware state change to an operating system scheduler based on the first efficiency value and the second efficiency value. Other embodiments are described and claimed.
Type: Grant
Filed: May 24, 2023
Date of Patent: December 31, 2024
Assignee: Intel Corporation
Inventors: Praveen Kumar Gupta, Avinash N. Ananthakrishnan, Eugene Gorbatov, Stephen H. Gunther
-
Patent number: 12175294
Abstract: Methods and apparatus to manage workload domains in virtual server racks are disclosed. An example apparatus includes processor circuitry to, in response to detecting that a number of available physical racks satisfies a threshold number of physical racks, apply a first resource allocation technique by reserving requested resources by exhausting first available resources of a first physical rack before using second available resources of a second physical rack; in response to detecting that the number of available physical racks does not satisfy the threshold number of physical racks, apply a second resource allocation technique by reserving the requested resources using a portion of the first available resources without exhausting the first available resources and using a portion of the second available resources without exhausting the second available resources; and execute one or more workload domains associated with a number of requested resources.
Type: Grant
Filed: September 30, 2021
Date of Patent: December 24, 2024
Assignee: VMware LLC
Inventors: Prafull Kumar, Jason Anthony Lochhead, Konstantin Ivanov Spirov
-
Patent number: 12169490
Abstract: Methods, systems, and computer programs are presented for providing a cluster view method of a database to perform compaction and clustering of database objects, such as a database materialized view. A cluster view system identifies a materialized view including data from one or more base tables, a portion of the data of the materialized view including stale data. The cluster view system performs an integrated task within a maintenance operation on a database, the integrated task including compacting the materialized view, the maintenance operation including clustering the materialized view, and stores the compacted and clustered materialized view in the database.
Type: Grant
Filed: February 27, 2023
Date of Patent: December 17, 2024
Assignee: Snowflake Inc.
Inventors: Varun Ganesh, Saiyang Gou, Prasanna Rajaperumal, Wenhao Song, Libo Wang, Jiaqi Yan