Resource Allocation Patents (Class 718/104)
-
Patent number: 12260266
Abstract: A system and method of balancing data storage among a plurality of groups of computing devices, each group comprising one or more respective computing devices. The method may involve determining a compute utilization disparity between the group having a highest level of compute utilization and the group having a lowest level of compute utilization, determining a transfer of one or more projects between the plurality of groups of computing devices that reduces the compute utilization disparity, and directing the plurality of groups of computing devices to execute the determined transfer.
Type: Grant
Filed: March 10, 2022
Date of Patent: March 25, 2025
Assignee: Google LLC
Inventors: Alan Pearson, Yaou Wei
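The balancing loop this abstract describes can be sketched as follows. This is a hedged illustration, not the patented implementation: the group structure, capacities, and project loads are invented, and the "try the smallest project, undo if it doesn't help" heuristic is one plausible way to realize "a transfer that reduces the disparity."

```python
def utilization(group):
    """Fraction of the group's compute capacity consumed by its projects."""
    return sum(p["load"] for p in group["projects"]) / group["capacity"]

def rebalance_once(groups):
    """Move one project from the busiest group to the idlest group
    if doing so reduces the utilization disparity; return the disparity."""
    busiest = max(groups, key=utilization)
    idlest = min(groups, key=utilization)
    disparity = utilization(busiest) - utilization(idlest)
    if not busiest["projects"]:
        return disparity
    # Try the smallest project first to avoid overshooting the target.
    project = min(busiest["projects"], key=lambda p: p["load"])
    busiest["projects"].remove(project)
    idlest["projects"].append(project)
    new_disparity = abs(utilization(busiest) - utilization(idlest))
    if new_disparity >= disparity:  # transfer made things worse; undo it
        idlest["projects"].remove(project)
        busiest["projects"].append(project)
        return disparity
    return new_disparity

groups = [
    {"capacity": 100, "projects": [{"load": 60}, {"load": 30}]},  # 90% busy
    {"capacity": 100, "projects": [{"load": 10}]},                # 10% busy
]
disparity = rebalance_once(groups)  # moves the load-30 project across
```

Moving the smaller project narrows the disparity from 0.8 to 0.2 in this toy setup.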
-
Patent number: 12260253
Abstract: A machine learning accelerator (MLA) implements a machine learning network (MLN) by using data transfer instructions that coordinate concurrent data transfers between processing elements. A compiler receives a description of a machine learning network and generates the computer program that implements the MLN. The computer program contains instructions that will be run on PEs of the MLA. The PEs are connected by data transfer paths that are known to the compiler. The computations performed by the PEs may require data stored at other PEs. The compiler coordinates the data transfers to avoid conflicts and increase parallelism.
Type: Grant
Filed: January 23, 2023
Date of Patent: March 25, 2025
Assignee: SiMa Technologies, Inc.
Inventor: Gwangho Kim
-
Patent number: 12260255
Abstract: Intelligent process management is provided. A start time is determined for an additional process to be run on a worker node within a duration of a sleep state of a task of a process already running on the worker node by adding a first defined buffer time to a determined start time of the sleep state of the task. A backfill time is determined for the additional process by subtracting a second defined buffer time from a determined end time of the sleep state of the task. A scheduling plan is generated for the additional process based on the start time and the backfill time corresponding to the additional process. The scheduling plan is executed to run the additional process on the worker node according to the start time and the backfill time corresponding to the additional process.
Type: Grant
Filed: September 29, 2022
Date of Patent: March 25, 2025
Assignee: International Business Machines Corporation
Inventors: Jing Jing Wei, Yue Wang, Shu Jun Tang, Yang Kang, Yi Fan Wu, Qi Han Zheng, Jia Lin Wang
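The scheduling arithmetic in this abstract reduces to two additions around the sleep window. A minimal sketch, with invented times and buffer values:

```python
def plan_backfill(sleep_start, sleep_end, start_buffer, end_buffer):
    """Return (start_time, backfill_time) for an additional process
    scheduled inside another task's sleep window, per the abstract:
    start = sleep start + first buffer, backfill = sleep end - second buffer."""
    start_time = sleep_start + start_buffer
    backfill_time = sleep_end - end_buffer
    if backfill_time <= start_time:
        return None  # window too small to safely run anything
    return (start_time, backfill_time)

# Running task sleeps from t=100s to t=400s; buffers of 10s and 20s.
plan = plan_backfill(sleep_start=100, sleep_end=400,
                     start_buffer=10, end_buffer=20)
```

Here the additional process would be scheduled to run from t=110 to t=380, leaving both buffers intact.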
-
Patent number: 12255818
Abstract: Provided herein are various enhancements for low downtime migration of virtualized software services across different instantiations, which may include real-time migration over different cloud/server providers and platforms, physical locations, network locations, and across different network elements. Examples herein include handling of migration of data and state for virtualized software services, migration of ingress and egress traffic for the software services, and migration of other various operations aspects applicable to virtualized software services. In many instances, a client node retains the same network addressing used to reach the virtualized software services even as the virtualized software services move to different network locations and physical locations.
Type: Grant
Filed: October 25, 2024
Date of Patent: March 18, 2025
Assignee: Loophole Labs, Inc.
Inventors: Shivansh Vij, Alex Sørlie
-
Patent number: 12248818
Abstract: The present application discloses a method, system, and computer system for starting up and maintaining a cluster in a warmed up state, and/or allocating clusters from a warmed up state. The method includes instantiating a set of virtual machines, wherein instantiating the set of virtual machines includes setting a temporary security credential for each virtual machine of the set of virtual machines, receiving a virtual machine allocation request associated with a workspace, a customer, or a tenant, and, in response to the virtual machine allocation request, allocating a virtual machine, wherein allocating the virtual machine comprises replacing the temporary security credential with a security credential associated with the workspace, the customer, or the tenant.
Type: Grant
Filed: October 29, 2021
Date of Patent: March 11, 2025
Assignee: Databricks, Inc.
Inventors: Yandong Mao, Aaron Daniel Davidson
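The warm-pool flow here can be sketched in a few lines: pre-start VMs with throwaway credentials, then swap in the tenant credential at allocation time. The `WarmPool` class and credential strings below are hypothetical stand-ins, not the patented system:

```python
import secrets

class WarmPool:
    def __init__(self, size):
        # Instantiate the VM set up front, each with a temporary credential.
        self.idle = [{"id": i, "credential": "temp-" + secrets.token_hex(4)}
                     for i in range(size)]

    def allocate(self, tenant):
        """Hand a warmed VM to a tenant, replacing the temporary
        security credential with a tenant-specific one."""
        if not self.idle:
            raise RuntimeError("warm pool exhausted")
        vm = self.idle.pop()
        vm["credential"] = f"tenant-{tenant}-credential"
        return vm

pool = WarmPool(size=3)
vm = pool.allocate("acme")  # allocation request for tenant "acme"
```

The point of the pattern is that the expensive step (instantiation) happens before any request arrives; allocation is just a credential swap.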
-
Patent number: 12242511
Abstract: A method and apparatus for managing a set of storage resources for a set of queries is described. In an exemplary embodiment, a method provisions processing resources of an execution platform and provisions storage resources of a storage platform. The execution platform uses the storage platform, which is shared with the execution platform, to process the set of queries. The method changes a number of the storage resources provisioned for the storage platform based on a storage capacity utilization by the set of queries of the storage resources. The method changes the number of the storage resources independently of a change of the processing resources in the execution platform. The method processes the set of queries using the changed number of the storage resources provisioned for the storage platform.
Type: Grant
Filed: February 7, 2023
Date of Patent: March 4, 2025
Assignee: Snowflake Inc.
Inventors: Benoit Dageville, Thierry Cruanes, Marcin Zukowski
-
Patent number: 12244571
Abstract: A method, system, and computer program product are disclosed. The method includes generating simulation instances and particle site identifiers from a particle-based simulation. The method also includes providing an order metric for the particle-based simulation. Information is embedded in the particle-based simulation by mapping local order values of the order metric to characters of an input message.
Type: Grant
Filed: April 7, 2022
Date of Patent: March 4, 2025
Assignee: International Business Machines Corporation
Inventors: Fausto Martelli, Malgorzata Jadwiga Zimon
-
Patent number: 12235774
Abstract: Devices and techniques for parking threads in a barrel processor for managing cache eviction requests are described herein. A barrel processor includes eviction circuitry and is configured to perform operations to: (a) detect a thread that includes a memory access operation, the thread entering a memory request pipeline of the barrel processor; (b) determine that a data cache line has to be evicted from a data cache for the thread to perform the memory access operation; (c) copy the thread into a park queue; (d) evict a data cache line from the data cache; (e) identify an empty cycle in the memory request pipeline; (f) schedule the thread to execute during the empty cycle; and (g) remove the thread from the park queue.
Type: Grant
Filed: February 16, 2024
Date of Patent: February 25, 2025
Assignee: Micron Technology, Inc.
Inventor: Christopher Baronne
-
Patent number: 12236272
Abstract: Resource access control modules that are part of an operating system kernel and data structures visible in both user space and kernel space provide for user space-based configuration of computing system resource limits, accounting of resource usage, and enforcement of resource usage limits. Computing system resource limits can be set on an application, customer, or other basis, and usage limits can be placed on various system resources, such as files, ports, I/O devices, memory, and processing unit bandwidth. Resource usage accounting and resource limit enforcement can be implemented without the use of in-kernel control groups. The resource access control modules can be extended Berkeley Packet Filter (eBPF) Linux Security Module (LSM) programs linked to LSM hooks in the Linux operating system kernel.
Type: Grant
Filed: November 11, 2021
Date of Patent: February 25, 2025
Assignee: Intel Corporation
Inventor: Mikko Ylinen
-
Patent number: 12236290
Abstract: The disclosure relates to a method, apparatus and device for sharing microservice application data. The method includes: managing, through data registration management, memory data registration information that is to be loaded by microservice application clusters; determining, according to the memory data registration information, memory data that are required by the microservice application clusters; partitioning and distributing the memory data to a plurality of memory computation service nodes in the microservice application clusters, and deploying the plurality of memory computation service nodes into a corresponding microservice application cluster at a proximal end; and loading the memory data in a preset manner in the plurality of memory computation service nodes, and sharing a corresponding memory computation service node in real time under the condition that the memory data change.
Type: Grant
Filed: May 16, 2024
Date of Patent: February 25, 2025
Assignee: INSPUR GENERSOFT CO., LTD.
Inventors: Daisen Wei, Weibo Zheng, Yucheng Li, Xiangguo Zhou, Lixin Sun
-
Patent number: 12236262
Abstract: Features are extracted and/or derived from a software package (e.g., a binary executable, etc.) which are input into a machine learning model to determine an estimated peak memory usage required to analyze the software package. A number of memory resource units required for the determined peak memory usage is then determined. If the number of available memory resource units is less than the determined number of required memory resource units, then the software package can be queued in a backoff queue. The determined number of required memory units to analyze the software package can be allocated when a number of available memory resource units equals or exceeds the determined number of required memory resource units (whether or not the software package has been queued). The software package can then be analyzed using the allocated memory units. Information characterizing this analysis can be provided to a consuming application or process.
Type: Grant
Filed: October 2, 2024
Date of Patent: February 25, 2025
Assignee: Binarly Inc
Inventors: Alexander Matrosov, Sam Lloyd Thomas, Yegor Vasilenko, Lukas Seidel
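The admission logic described here (peak-memory estimate, whole resource units, backoff queue when units are short) can be sketched as below. The 512 MB unit size and the package records are invented for illustration, and a real system would get `peak_mb` from the ML model rather than a literal:

```python
import math
from collections import deque

UNIT_MB = 512  # hypothetical size of one memory resource unit

def required_units(estimated_peak_mb):
    """Convert an estimated peak memory usage into whole resource units."""
    return math.ceil(estimated_peak_mb / UNIT_MB)

def admit(package, available_units, backoff_queue):
    """Allocate units for a package, or queue it when too few are free.
    Returns the number of units allocated (0 if the package was queued)."""
    need = required_units(package["peak_mb"])
    if need > available_units:
        backoff_queue.append(package)  # retried when enough units free up
        return 0
    return need

queue = deque()
big = {"name": "fw.bin", "peak_mb": 3000}    # needs 6 units
small = {"name": "app.bin", "peak_mb": 600}  # needs 2 units
granted_big = admit(big, available_units=4, backoff_queue=queue)
granted_small = admit(small, available_units=4, backoff_queue=queue)
```

With only 4 units free, the large package is queued while the small one is admitted immediately.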
-
Patent number: 12229588
Abstract: Migrating workloads to a preferred environment, including: predicting, for each of a plurality of environments, a performance load on each of a plurality of environments that would result from placing one or more of a plurality of workloads on the environment; determining a preferred environment for each of the plurality of workloads by determining a placement of each of the plurality of workloads that results in a best fit for the plurality of workloads; and deploying each of the plurality of workloads in the corresponding preferred environment.
Type: Grant
Filed: November 30, 2021
Date of Patent: February 18, 2025
Assignee: PURE STORAGE
Inventors: Robert Barker, Jr., Farhan Abrol
-
Patent number: 12229596
Abstract: A method of storing electronic data performed by a terminal apparatus communicable with an information processing terminal is provided. The method includes: receiving, during a use of a first resource, a notification indicating that reservation of a second resource selected by a user is completed, from the information processing terminal; and in response to receiving the notification indicating that the reservation of the second resource is completed, starting a storing process of storing electronic data output by an electronic device during the use of the first resource.
Type: Grant
Filed: July 1, 2021
Date of Patent: February 18, 2025
Assignee: Ricoh Company, Ltd.
Inventor: Ken Norota
-
Patent number: 12229045
Abstract: In some examples, a sensor service receives an indication of interest from a client for sensor data of a first sensor of the plurality of sensors, and allocates buffers in the memory for the plurality of sensors. The sensor service provides a first buffer to a sensor connector that is to receive the sensor data from the first sensor, and receives, from the sensor connector, an indication that the first buffer in the memory has been written with the sensor data from the first sensor. Based on the indication of interest from the client, the sensor service notifies the client that the first buffer is available for reading by the client from the memory.
Type: Grant
Filed: September 25, 2023
Date of Patent: February 18, 2025
Assignee: BlackBerry Limited
Inventors: Michael Jonathan Mueller, Noel Dylan Dillabough
-
Patent number: 12232061
Abstract: Certain aspects of the present disclosure provide techniques for sidelink synchronization in a network. A method that may be performed by a remote user equipment (UE) includes determining at least one synchronization priority associated with synchronization signals for synchronizing to a network, determining relay capability information associated with multiple relay UEs, selecting one relay UE of the multiple relay UEs, based on the synchronization priority and relay capability, and synchronizing to the network using at least one synchronization signal received from the selected one relay UE.
Type: Grant
Filed: August 20, 2021
Date of Patent: February 18, 2025
Assignee: QUALCOMM Incorporated
Inventors: Kaidong Wang, Jelena Damnjanovic, Sony Akkarakaran, Junyi Li, Tao Luo
-
Patent number: 12217090
Abstract: An algorithm execution management system of a provider network may receive a request from a user for executing an algorithm using different types of computing resources, including classical computing resources and quantum computing resources. The request may indicate a container that includes the algorithm code and dependencies such as libraries for executing the algorithm. The algorithm execution management system may first determine that the quantum computing resources are available to execute the algorithm, and then cause the classical computing resources to be provisioned. The algorithm execution management system may cause at least one portion of the algorithm to be executed at the classical computing resources using the container indicated by the user, and at least another portion of the algorithm to be executed at the quantum computing resources. The quantum task of the algorithm may be provided a priority during execution of the algorithm for using the quantum computing resources.
Type: Grant
Filed: November 12, 2021
Date of Patent: February 4, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Milan Krneta, Eric M Kessler, Christian Bruun Madsen
-
Patent number: 12216628
Abstract: A system to identify optimal cloud resources for executing workloads. The system deduplicates historical client queries based on a workload selection configuration to determine a grouping of historical client queries. The system generates a workload based on at least a portion of the grouping of historical client queries. The system repeatedly executes a test run of the workload using resources of a cloud environment to determine whether there is a performance difference in the test run. The system, in response to determining that there is no performance difference, identifies one or more sets of decreased resources of the cloud environment. The system re-executes the test run using the one or more sets of decreased resources of the cloud environment to determine whether there is a performance difference in the test run that is attributed to the one or more sets of decreased resources of the cloud environment.
Type: Grant
Filed: September 20, 2023
Date of Patent: February 4, 2025
Assignee: Snowflake Inc.
Inventors: Allison Lee, Shrainik Jain, Qiuye Jin, Stratis Viglas, Jiaqi Yan
-
Patent number: 12217086
Abstract: Techniques are disclosed for chain schedule management for machine learning model-based processing in a computing environment. For example, a method receives a machine learning model-based request and determines a scheduling decision for execution of the machine learning model-based request. Determination of the scheduling decision comprises utilizing a set of one or more scheduling algorithms and comparing results of at least a portion of the set of one or more scheduling algorithms to identify execution environments of a computing environment in which the machine learning model-based request is to be executed. The identified execution environments may then be managed to execute the machine learning model-based request.
Type: Grant
Filed: February 25, 2022
Date of Patent: February 4, 2025
Assignee: Dell Products L.P.
Inventor: Victor Fong
-
Patent number: 12217838
Abstract: There are provided a method and an apparatus for distributing physical examination information, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: obtaining physical examination information and information of a plurality of distribution objects; inputting the physical examination information and the information of the plurality of distribution objects into an information matching model obtained by pre-training to obtain a matching degree between the physical examination information and the plurality of distribution objects; and determining a target distribution object from the plurality of distribution objects according to the matching degree between the physical examination information and each of the plurality of distribution objects, and distributing the physical examination information to the target distribution object.
Type: Grant
Filed: December 25, 2020
Date of Patent: February 4, 2025
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Zhenzhong Zhang
-
Patent number: 12212547
Abstract: Embodiments of the present disclosure provide a method, a system and a non-transitory computer-readable medium to securely pass a message. The method includes executing, by a processing device, a floating persistent volumes service (FPVS) to allocate and attach a persistent volume (PV) to a first node in a mesh network to pass a payload in the PV to the first node; and sending a first message to the first node to inform the first node to read data from the payload in the PV.
Type: Grant
Filed: January 21, 2022
Date of Patent: January 28, 2025
Assignee: Red Hat, Inc.
Inventors: Leigh Griffin, Pierre-Yves Chibon
-
Patent number: 12204948
Abstract: A database entry may be stored in a container in a database table corresponding with a partition key. The partition key may be determined by applying one or more partition rules to one or more data values associated with the database entry. The database entry may be an instance of one of a plurality of data object definitions associated with database entries in the database. Each of the data object definitions may identify a respective one or more data fields included within an instance of the data object definition.
Type: Grant
Filed: September 8, 2023
Date of Patent: January 21, 2025
Assignee: Salesforce, Inc.
Inventor: Rohitashva Mathur
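Rule-based partition keys of the kind this abstract describes can be illustrated in a few lines: each rule extracts a part of the key from the entry's data values, and the joined key selects the container. The rules and field names below are invented, not taken from the patent:

```python
def partition_key(entry, rules):
    """Apply each partition rule in order to the entry's data values
    and join the extracted parts into a single partition key."""
    parts = [rule(entry) for rule in rules]
    return "/".join(parts)

# Two hypothetical rules: partition by region, then by creation year.
rules = [
    lambda e: e["region"],
    lambda e: e["created"][:4],
]

entry = {"region": "emea", "created": "2023-04-01", "amount": 12}
key = partition_key(entry, rules)

# The key selects which container in the table holds the entry.
containers = {}
containers.setdefault(key, []).append(entry)
```

Entries whose data values yield the same key land in the same container, which is what makes the scheme useful for pruning scans to a single partition.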
-
Patent number: 12197952
Abstract: The disclosure relates to a method and apparatus including setting a memory swap size limit, the limit being lower than a memory swap size defining a maximum size of a part of said memory resources used for swap, obtaining a score for at least one running program, a high score corresponding to a low priority level, obtaining monitoring information representative of a monitored activity of the program during a time period and of a learnt user's habit of use of the program, including a number of times the program gained the focus within the time period. The disclosure also includes deriving a score delta from the monitoring information, applying a decrement value to the score delta at each focus gained by the program, adjusting the score by adding the delta, and terminating execution when the memory swap size limit is reached and the adjusted score reaches a memory swap size limit threshold.
Type: Grant
Filed: October 13, 2020
Date of Patent: January 14, 2025
Assignee: Thomson Licensing
Inventors: Bruno Le Garjan, Sebastien Crunchant, Thierry Quere
-
Patent number: 12197453
Abstract: A method for performing a parallelized heapsort operation may include updating, by a first worker thread, a first buffer while a second worker thread updates a second buffer in parallel. The first worker thread may update the first buffer by adding, to the first buffer, elements from a first partition of a dataset. The second worker thread may update the second buffer by adding, to the second buffer, elements from a second partition of the dataset. Upon the first buffer reaching a threshold size, the first worker thread may acquire a lock for the first worker thread to update a heap based on the first buffer while the second worker thread is prevented from updating the heap based on the second buffer. A result of a top k query comprising a k quantity of smallest elements from the dataset may be generated based on the heap.
Type: Grant
Filed: August 22, 2023
Date of Patent: January 14, 2025
Assignee: SAP SE
Inventors: Alexander Gellner, Paul Willems
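The buffered top-k idea in this abstract can be condensed into a short sketch: each worker fills a private buffer from its partition and only takes the shared lock when the buffer reaches the threshold, so lock contention is amortized over many elements. Thread count, k, the threshold, and the dataset are all illustrative choices, not values from the patent:

```python
import heapq
import threading

K = 5           # top-k: keep the k smallest elements
THRESHOLD = 8   # flush a buffer into the heap once it reaches this size
lock = threading.Lock()
heap = []       # max-heap of the k smallest values, stored negated

def flush(buffer):
    """Merge a worker's buffer into the shared heap under the lock."""
    with lock:  # other workers are prevented from updating the heap
        for value in buffer:
            if len(heap) < K:
                heapq.heappush(heap, -value)
            elif value < -heap[0]:          # smaller than current k-th smallest
                heapq.heapreplace(heap, -value)
    buffer.clear()

def worker(partition):
    buffer = []
    for value in partition:
        buffer.append(value)
        if len(buffer) >= THRESHOLD:
            flush(buffer)
    flush(buffer)  # drain whatever is left at the end

data = list(range(100, 0, -1))           # dataset: 100 down to 1
partitions = [data[0::2], data[1::2]]    # two partitions, one per worker
threads = [threading.Thread(target=worker, args=(p,)) for p in partitions]
for t in threads:
    t.start()
for t in threads:
    t.join()
top_k = sorted(-v for v in heap)         # the k smallest elements
```

Because every heap update happens under the lock, the result is deterministic regardless of how the two workers interleave.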
-
Patent number: 12197953
Abstract: Apparatuses, systems, methods, and program products are disclosed for techniques for distributed computing and storage. An apparatus includes a processor and a memory that includes code that is executable to receive a request to perform a processing task, transmit at least a portion of the processing task to a plurality of user node devices, receive results of the at least a portion of the processing task from at least one of the plurality of user node devices, and transmit the received results.
Type: Grant
Filed: February 8, 2024
Date of Patent: January 14, 2025
Assignee: ASEARIS DATA SYSTEMS, INC.
Inventors: Erich Pletsch, Matt Morris
-
Patent number: 12189625
Abstract: A multi-cluster computing system which includes a query result caching system is presented. The multi-cluster computing system may include a data processing service and client devices communicatively coupled over a network. The data processing service may include a control layer and a data layer. The control layer may be configured to receive and process requests from the client devices and manage resources in the data layer. The data layer may be configured to include instances of clusters of computing resources for executing jobs. The data layer may include a data storage system, which further includes a remote query result cache store. The query result cache store may include a cloud storage query result cache which stores data associated with results of previously executed requests. As such, when a cluster encounters a previously executed request, the cluster may efficiently retrieve the cached result of the request from the in-memory query result cache or the cloud storage query result cache.
Type: Grant
Filed: July 14, 2023
Date of Patent: January 7, 2025
Assignee: Databricks, Inc.
Inventors: Bogdan Ionut Ghit, Saksham Garg, Christian Stuart, Christopher Stevens
-
Patent number: 12192051
Abstract: Some embodiments of the invention provide a method for implementing an edge device that handles data traffic between a logical network and an external network. The method monitors resource usage of a node pool that includes multiple nodes that each executes a respective set of pods. Each of the pods is for performing a respective set of data message processing operations for at least one of multiple logical routers. The method determines that a particular node in the node pool has insufficient resources for the particular node's respective set of pods to adequately perform their respective sets of data message processing operations. Based on the determination, the method automatically provides additional resources to the node pool by instantiating at least one additional node in the node pool.
Type: Grant
Filed: July 23, 2021
Date of Patent: January 7, 2025
Assignee: VMware LLC
Inventors: Yong Wang, Cheng-Chun Tu, Sreeram Kumar Ravinoothala, Yu Ying
-
Patent number: 12190157
Abstract: Systems, methods, and apparatuses relating to circuitry to implement scalable port-binding for asymmetric execution ports and allocation widths of a processor are described.
Type: Grant
Filed: September 26, 2020
Date of Patent: January 7, 2025
Assignee: Intel Corporation
Inventors: Daeho Seo, Vikash Agarwal, John Esper, Khary Alexander, Asavari Paranjape, Jonathan Combs
-
Patent number: 12190154
Abstract: Controlling allocation of resources in network function virtualization. Data defining a pool of available physical resources is maintained. Data defining one or more resource allocation rules is identified. An application request is received. Physical resources from the pool are allocated to virtual resources to implement the application request, on the basis of the maintained data, the identified data and the received application request.
Type: Grant
Filed: December 17, 2023
Date of Patent: January 7, 2025
Assignee: SUSE LLC
Inventors: Ignacio Aldama, Ruben Sevilla Giron, Javier Garcia-Lopez
-
Patent number: 12189621
Abstract: A system for enhanced data pre-aggregation is provided. In one embodiment, a method is provided that includes receiving data formatted in a key/subkey format and distributing a data batch of the data to a plurality of processing threads. Each processing thread performs operations of: performing a first pass on the data batch to determine subkey rollup data; performing a second pass on the data batch to determine key rollup data; and storing the subkey rollup data and the key rollup data into data blocks. The method also includes outputting the data blocks to form a pre-aggregated data cube.
Type: Grant
Filed: February 9, 2023
Date of Patent: January 7, 2025
Assignee: Planful, Inc.
Inventors: Tarun Adupa, Abdul Hamed Mohammed, Sanjay Vyas
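The two-pass rollup this abstract describes is easy to show on a single batch: the first pass totals values per (key, subkey), and the second pass rolls those subkey totals up per key. The sample records (years and quarters) are invented for illustration:

```python
from collections import defaultdict

batch = [
    {"key": "2024", "subkey": "Q1", "value": 10},
    {"key": "2024", "subkey": "Q1", "value": 5},
    {"key": "2024", "subkey": "Q2", "value": 7},
    {"key": "2025", "subkey": "Q1", "value": 3},
]

# Pass 1: subkey rollup - total per (key, subkey) pair.
subkey_rollup = defaultdict(int)
for row in batch:
    subkey_rollup[(row["key"], row["subkey"])] += row["value"]

# Pass 2: key rollup - total per key, built from the subkey totals
# rather than by re-scanning the raw rows.
key_rollup = defaultdict(int)
for (key, _subkey), total in subkey_rollup.items():
    key_rollup[key] += total
```

Since the second pass works on the already-reduced subkey totals, it touches far fewer records than the raw batch, which is the point of pre-aggregating before building the cube.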
-
Patent number: 12182618
Abstract: In one embodiment, a processor includes a power controller having a resource allocation circuit. The resource allocation circuit may: receive a power budget for a first core and at least one second core and scale the power budget based at least in part on at least one energy performance preference value to determine a scaled power budget; determine a first maximum operating point for the first core and a second maximum operating point for the at least one second core based at least in part on the scaled power budget; determine a first efficiency value for the first core based at least in part on the first maximum operating point for the first core and a second efficiency value for the at least one second core based at least in part on the second maximum operating point for the at least one second core; and report a hardware state change to an operating system scheduler based on the first efficiency value and the second efficiency value. Other embodiments are described and claimed.
Type: Grant
Filed: May 24, 2023
Date of Patent: December 31, 2024
Assignee: Intel Corporation
Inventors: Praveen Kumar Gupta, Avinash N. Ananthakrishnan, Eugene Gorbatov, Stephen H. Gunther
-
Patent number: 12182045
Abstract: A semiconductor device capable of preventing a sharp variation in current consumption in neural network processing is provided. A dummy circuit outputs dummy data to at least one or more of n number of MAC circuits and causes the at least one or more of n number of MAC circuits to perform a dummy calculation and to output dummy output data. An output-side DMA controller transfers pieces of normal output data from the n number of MAC circuits to a memory, by use of n number of channels, respectively, and does not transfer the dummy output data to the memory. In this semiconductor device, the at least one or more of n number of MAC circuits perform the dummy calculation in a period from a timing at which the output-side DMA controller ends data transfer to the memory to a timing at which the input-side DMA controller starts data transfer from the memory.
Type: Grant
Filed: January 10, 2023
Date of Patent: December 31, 2024
Assignee: RENESAS ELECTRONICS CORPORATION
Inventors: Kazuaki Terashima, Atsushi Nakamura, Rajesh Ghimire
-
Patent number: 12182746
Abstract: A task scheduling system that can be used to improve task assignment for multiple satellites, and thereby improve resource allocation in the execution of a task. In some implementations, configuration data for one or more satellites is obtained. Multiple objectives corresponding to a task to be performed using the satellites, and resource parameters associated with executing the task to be performed using the satellites, are identified. A score for each objective included in the multiple objectives is computed by the terrestrial scheduler based on the resource parameters and the configuration data for the one or more satellites. The multiple objectives are assigned to one or more of the satellites. Instructions are provided to the one or more satellites that cause the one or more satellites to execute the task according to the assignment of the objectives to the one or more satellites.
Type: Grant
Filed: June 26, 2023
Date of Patent: December 31, 2024
Assignee: HawkEye 360, Inc.
Inventors: T. Charles Clancy, Robert W. McGwier, Timothy James O'Shea, Nicholas Aaron McCarthy
-
Patent number: 12182625
Abstract: An apparatus can include a control board operatively coupled to modular compute boards and to resource boards by (1) a first connection associated with control information and not data, and (2) a second connection associated with data and not control information. The control board can determine a computation load and a physical resource requirement for a time period. The control board can send, to a modular compute board and via the first connection, a signal indicating an allocation of that modular compute board during the time period. The control board can send, from the control board to a resource board, a signal indicating an allocation of that resource board to the modular compute board such that that resource board allocates at least a portion of its resources during the time period based on at least one of the computation load or the physical resource requirement.
Type: Grant
Filed: May 12, 2023
Date of Patent: December 31, 2024
Assignee: Management Services Group, Inc.
Inventors: Thomas Scott Morgan, Steven Yates
-
Patent number: 12175294
Abstract: Methods and apparatus to manage workload domains in virtual server racks are disclosed. An example apparatus includes processor circuitry to, in response to detecting that a number of available physical racks satisfies a threshold number of physical racks, apply a first resource allocation technique by reserving requested resources by exhausting first available resources of a first physical rack before using second available resources of a second physical rack; in response to detecting that the number of available physical racks does not satisfy the threshold number of physical racks, apply a second resource allocation technique by reserving the requested resources using a portion of the first available resources without exhausting the first available resources and using a portion of the second available resources without exhausting the second available resources; and execute one or more workload domains associated with a number of requested resources.
Type: Grant
Filed: September 30, 2021
Date of Patent: December 24, 2024
Assignee: VMware LLC
Inventors: Prafull Kumar, Jason Anthony Lochhead, Konstantin Ivanov Spirov
-
Patent number: 12169490
Abstract: Methods, systems, and computer programs are presented for providing a cluster view method of a database to perform compaction and clustering of database objects, such as a database materialized view. A cluster view system identifies a materialized view including data from one or more base tables, a portion of the data of the materialized view including stale data. The cluster view system performs an integrated task within a maintenance operation on a database, the integrated task including compacting the materialized view, the maintenance operation including clustering the materialized view, and stores the compacted and clustered materialized view in the database.
Type: Grant
Filed: February 27, 2023
Date of Patent: December 17, 2024
Assignee: Snowflake Inc.
Inventors: Varun Ganesh, Saiyang Gou, Prasanna Rajaperumal, Wenhao Song, Libo Wang, Jiaqi Yan
-
Patent number: 12169510
Abstract: A system and a method are disclosed for receiving, from a source of a plurality of candidate sources, a payload comprising content and metadata. The system selects a destination to which to route the payload based on the source and the content, and generates an entry at the destination based on the content. The system inputs the metadata into a classification engine, and receives, as output from the classification engine, one or more classifications for the payload. The system applies a metadata tag to the entry, the metadata tag indicating the one or more classifications. The system receives a search request from a client device specifying at least one of the one or more classifications, and, in response to receiving the search request, provides the entry to the client device based on a matching classification.
Type: Grant
Filed: July 20, 2023
Date of Patent: December 17, 2024
Assignee: Tekion Corp
Inventors: Satyavrat Mudgil, Anant Sitaram
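The route-then-classify-then-search flow can be sketched end to end. Everything here is assumed for illustration: the routing table keyed on (source, content type), the threshold-based stand-in for the classification engine, and the in-memory store.

```python
# Sketch of the flow: route a payload by source and content, tag the stored
# entry with classifications derived from its metadata, then serve searches
# that filter on a matching classification.

ROUTES = {("crm", "invoice"): "billing_db", ("crm", "note"): "notes_db"}

def classify(metadata):
    # Stand-in for the classification engine: tag by a metadata field.
    return ["urgent"] if metadata.get("priority", 0) > 5 else ["routine"]

def ingest(source, content_type, content, metadata, store):
    destination = ROUTES[(source, content_type)]       # route by source/content
    entry = {"content": content, "tags": classify(metadata)}
    store.setdefault(destination, []).append(entry)    # entry at destination
    return entry

def search(store, destination, tag):
    return [e for e in store.get(destination, []) if tag in e["tags"]]

store = {}
ingest("crm", "invoice", "INV-001", {"priority": 9}, store)
ingest("crm", "note", "call back", {"priority": 1}, store)
hits = search(store, "billing_db", "urgent")
```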
-
Patent number: 12166732
Abstract: Disclosed embodiments provide a framework for implementing automated bots configured to automatically and in real-time process messages exchanged with a user to determine whether to present an opt-in offer for supplemental communications. An agent bot processes ongoing messages exchanged in real-time during a first communications session as these messages are exchanged to determine whether to present an opt-in authorization request for supplemental communications. If the user approves the request, contact information associated with the user is used to facilitate a second communications session through which the user is prompted to provide an opt-in confirmation. The opt-in confirmation and the approval of the opt-in authorization request are provided to allow for transmission of the supplemental communications to the user.
Type: Grant
Filed: October 2, 2023
Date of Patent: December 10, 2024
Assignee: LIVEPERSON, INC.
Inventors: Ponsivakumar Palraj, Kuntal Mehta
-
Patent number: 12164966
Abstract: A system and method of dynamic task allocation and warehouse scaling. The method includes receiving a request to process a task. The method includes monitoring a plurality of execution nodes of a datastore to determine a plurality of central processing unit (CPU) utilizations. Each CPU utilization of the plurality of CPU utilizations is associated with a respective execution node of the plurality of execution nodes. The method includes identifying, by a processing device based on the plurality of CPU utilizations, a particular execution node associated with a maximum CPU utilization to process the task. The method includes allocating the task to the particular execution node.
Type: Grant
Filed: July 12, 2023
Date of Patent: December 10, 2024
Assignee: Snowflake Inc.
Inventors: Ganeshan Ramachandran Iyer, Raghav Ramachandran, Yang Wang
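The node-selection step reduces to a single argmax over monitored utilizations. A minimal sketch, with node names and utilization values assumed for illustration (note the abstract selects the node with the *maximum* utilization):

```python
# Sketch: monitor per-node CPU utilization, then allocate the task to the
# execution node with the maximum observed utilization, per the abstract.

def pick_execution_node(cpu_utilizations):
    """Return the node whose observed CPU utilization is highest."""
    return max(cpu_utilizations, key=cpu_utilizations.get)

utilizations = {"node-a": 0.35, "node-b": 0.82, "node-c": 0.60}
chosen = pick_execution_node(utilizations)
```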
-
Patent number: 12164928
Abstract: A system booting method for a computer system having a plurality of central processing units (CPUs) and a booting unit is disclosed. The system booting method includes determining, by the booting unit, a booting mode of the computer system; transmitting a booting signal, which is related to the booting mode, to the plurality of CPUs of the computer system; and entering a multi-CPU booting mode or entering an independent booting mode of the plurality of CPUs according to the booting signal.
Type: Grant
Filed: January 25, 2022
Date of Patent: December 10, 2024
Assignee: Wiwynn Corporation
Inventors: Yun-Hsuan Lee, Yu-Shu Kao, Chi-Chun Yuan, Huai-Li Huang
-
Patent number: 12159159
Abstract: The method involves calculating workload usage models from multiple data sources for IoT backbone infrastructure platforms used in device-to-cloud communication. Based on these models, a simulator uses virtual connected devices to predict the machine size, number of machines, and storage and network resource options required for the IoT backbone. Benchmark data from the validated sets is then fed to a machine learning algorithm, which recommends optimal configurations for cloud-based IoT backend platforms, including machine sizes, number of machines, storage and network options, and costs, across cloud providers such as AWS, GCP, and Azure.
Type: Grant
Filed: October 3, 2023
Date of Patent: December 3, 2024
Inventor: Raghunath Anisingaraju
-
Patent number: 12158757
Abstract: Provided is a robotic refuse container system, including: a first robotic refuse container, including: a chassis; a set of wheels; a rechargeable battery; a processor; a refuse container; a plurality of sensors; and a medium storing instructions that when executed by the processor effectuates operations including: collecting sensor data; determining a movement path of the first robotic refuse container from a first location to a second location; and pairing the first robotic refuse container with an application of a communication device; and the application of the communication device, configured to: receive at least one input designating at least a schedule, an instruction to navigate the first robotic refuse container to a particular location, and a second movement path of the first robotic refuse container; and display a status; wherein the first robotic refuse container remains parked at the first location until receiving an instruction to execute a particular action.
Type: Grant
Filed: September 9, 2021
Date of Patent: December 3, 2024
Assignee: AI Incorporated
Inventor: Ali Ebrahimi Afrouzi
-
Patent number: 12158965
Abstract: Provided is a design method for sharing a profile in a container environment, including: extracting a sensitive context defined as information related to system-based access control or a sandboxing policy and an insensitive context defined as information unrelated to security for a profile provided by a developer; extracting the sensitive context and the insensitive context for the profile provided by a host; fetching a max configuration for the sensitive and insensitive contexts from each image layer of the developer; and generating a final profile that is applied to deploy the container by merging the host profile with the max configuration fetched from the developer profile. Accordingly, it is possible to provide an optimal environment to developers and hosts by generating the final profile with a hierarchical model using the host profile and the developer profile.
Type: Grant
Filed: July 29, 2022
Date of Patent: December 3, 2024
Assignee: FOUNDATION OF SOONGSIL UNIVERSITY-INDUSTRY COOPERATION
Inventors: Soohwan Jung, Ngoc-Tu Chau, Thien-Phuc Doan, Songi Gwak
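The merge can be sketched with sets of capabilities. The interpretation below is an assumption, not the patented policy: the "max configuration" is taken as the union across image layers, sensitive contexts are restricted to what the host also permits, and insensitive contexts pass through freely.

```python
# Sketch: fetch the max configuration across the developer's image layers,
# then merge it with the host profile to produce the final profile.

def max_configuration(layers):
    """Union the sensitive/insensitive contexts across all image layers."""
    merged = {"sensitive": set(), "insensitive": set()}
    for layer in layers:
        merged["sensitive"] |= layer.get("sensitive", set())
        merged["insensitive"] |= layer.get("insensitive", set())
    return merged

def final_profile(host, developer_layers):
    dev = max_configuration(developer_layers)
    return {
        # Sensitive contexts are security policy: only allow what the
        # host profile also permits (intersection).
        "sensitive": host["sensitive"] & dev["sensitive"],
        # Insensitive contexts are unrelated to security: take everything.
        "insensitive": host["insensitive"] | dev["insensitive"],
    }

host = {"sensitive": {"chown", "kill"}, "insensitive": {"locale"}}
layers = [
    {"sensitive": {"chown"}, "insensitive": {"tz"}},
    {"sensitive": {"ptrace"}, "insensitive": set()},
]
profile = final_profile(host, layers)
```

Here the developer's layers request `chown` and `ptrace`, but only `chown` survives into the final profile because the host never granted `ptrace`.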
-
Patent number: 12153964
Abstract: A configurable logic platform may include a physical interconnect for connecting to a processing system, first and second reconfigurable logic regions, a configuration port for applying configuration data to the first and second reconfigurable logic regions, and a reconfiguration logic function accessible via transactions of the physical interconnect, the reconfiguration logic function providing restricted access to the configuration port from the physical interconnect. The platform may include a first interface function providing an interface to the first reconfigurable logic region and a second interface function providing an interface to the second reconfigurable logic region. The first and second interface functions may allow information to be transmitted over the physical interconnect and prevent the respective reconfigurable logic region from directly accessing the physical interconnect.
Type: Grant
Filed: December 22, 2023
Date of Patent: November 26, 2024
Assignee: ThroughPuter, Inc.
Inventor: Mark Henrik Sandstrom
-
Patent number: 12155717
Abstract: A system model is established to characterize mobile devices, edge servers, tasks, and nodes. A node offloading rule is established, under which a mobile device can offload nodes to an edge server or execute them locally. A timeline model is established to record the arrival events of all tasks and the execution completion events of the nodes. An online multi-workflow scheduling policy based on reinforcement learning is established: the state space and action space of the scheduling problem are defined, and a reward function for the scheduling problem is designed. An algorithm based on policy gradients is designed to solve the online multi-workflow scheduling problem and implement the scheduling policy. Offloading decisions and resource allocation are performed based on features extracted by a graph convolutional neural network. The current workflow and the state of the server can be analyzed in real time, thereby reducing the complexity and average completion time of all workflows.
Type: Grant
Filed: July 20, 2023
Date of Patent: November 26, 2024
Assignee: Hangzhou Dianzi University
Inventors: Yuyu Yin, Binbin Huang, Zixin Huang
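The offloading decision and its reward signal can be illustrated with a toy model. This is a heavy simplification of the abstract's reinforcement-learning approach: the cost model (transfer time plus edge compute versus local compute), the speed parameters, and the greedy stand-in for the learned policy are all assumptions.

```python
# Toy sketch of the offloading decision: reward is negative completion
# time, so a policy that maximizes reward minimizes average completion
# time of the workflow nodes.

def node_cost(size, offload, local_speed=1.0, edge_speed=4.0, link_rate=2.0):
    """Completion-time cost of one workflow node."""
    if offload:
        return size / link_rate + size / edge_speed  # transfer + edge compute
    return size / local_speed                        # local compute only

def decide(size):
    """Greedy stand-in for the learned policy: pick the higher-reward action."""
    actions = {False: -node_cost(size, False), True: -node_cost(size, True)}
    return max(actions, key=actions.get)

choice = decide(size=8.0)
```

For this node, offloading costs 8/2 + 8/4 = 6 time units against 8 units locally, so the greedy policy offloads; the patented approach learns such decisions from graph-convolutional features instead of computing them in closed form.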
-
Patent number: 12155718
Abstract: An example method of distributed load balancing in a virtualized computing system includes: configuring, at a logical load balancer, a traffic detector to detect traffic to a virtual internet protocol address (VIP) of an application having a plurality of instances; detecting, at the traffic detector, a first request to the VIP from a client executing in a virtual machine (VM) supported by a hypervisor executing on a first host; sending, by a configuration distributor of the logical load balancer in response to the detecting, a load balancer configuration to a configuration receiver of a local load balancer executing in the hypervisor for configuring the local load balancer to perform load balancing for the VIP at the hypervisor using the load balancer configuration.
Type: Grant
Filed: March 17, 2023
Date of Patent: November 26, 2024
Assignee: VMware LLC
Inventors: DongPing Chen, Jingchun Jiang, Bo Lin, Xinyang Liu, Donghai Han, Xiao Liang, Yi Zeng
-
Patent number: 12147823
Abstract: An apparatus for providing a safety-critical operating environment, comprising a host circuit having a processor and a memory containing instructions configuring the processor to operate a first partition within a virtual environment, by instantiating a hypervisor, generating a virtualization layer supervised by the hypervisor, and operating the first partition in the virtual environment using the virtualization layer; receive a configuration request from the first partition; create a second partition within the virtual environment based on the configuration request by allocating processor time and a memory space for the second partition using the hypervisor based on a partition policy; integrate a software module into the virtual environment by instantiating, within the second partition, a software image into a container having a non-preemptable container runtime; and verify a compliance of the integrated software module at the first partition.
Type: Grant
Filed: December 22, 2023
Date of Patent: November 19, 2024
Assignee: Parry Labs, LLC
Inventors: David Walsh, Charles Adams
-
Patent number: 12149408
Abstract: According to one embodiment, a method, computer system, and computer program product for managing application deployment among edge devices is provided. The embodiment may include identifying respective computing characteristics of all edge devices of a network. The embodiment may include categorizing the edge devices into one or more categories based on identified respective computing characteristics. The embodiment may include classifying a type of a computing task to be deployed to one or more of the edge devices. The embodiment may include mapping the computing task to a category of the one or more categories. The embodiment may include calculating a respective computing score for each edge device of the category. The embodiment may include ranking edge devices of the category based on their respective computing scores. The embodiment may include deploying the computing task to a top-ranked edge device of the category.
Type: Grant
Filed: February 22, 2023
Date of Patent: November 19, 2024
Assignee: International Business Machines Corporation
Inventors: Su Liu, John A Walicki, Neil Delima, David Jason Hunt
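The categorize → score → rank → deploy pipeline maps directly onto a few functions. The category names, the scoring weights, and the device attributes below are illustrative assumptions, not the patent's actual scoring formula.

```python
# Sketch of the deployment pipeline: categorize devices, score those in the
# task's category, rank by score, and deploy to the top-ranked device.

def categorize(device):
    return "gpu" if device["has_gpu"] else "cpu"

def score(device):
    # Hypothetical weighted score over computing characteristics.
    return 0.6 * device["cpu_ghz"] + 0.4 * device["ram_gb"]

def deploy(task_category, devices):
    candidates = [d for d in devices if categorize(d) == task_category]
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[0]["name"]  # deploy to the top-ranked edge device

devices = [
    {"name": "cam-1", "has_gpu": True, "cpu_ghz": 1.5, "ram_gb": 4},
    {"name": "hub-1", "has_gpu": False, "cpu_ghz": 3.0, "ram_gb": 16},
    {"name": "cam-2", "has_gpu": True, "cpu_ghz": 2.0, "ram_gb": 8},
]
target = deploy("gpu", devices)
```

A GPU-type task never considers `hub-1` despite its higher raw score, because category mapping filters candidates before ranking.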
-
Patent number: 12147890
Abstract: A neural network computing device and a cache management method thereof are provided. The neural network computing device includes a computing circuit, a cache circuit and a main memory. The computing circuit performs a neural network calculation including a first layer calculation and a second layer calculation. After the computing circuit completes the first layer calculation and generates a first calculation result required for the second layer calculation, the cache circuit retains the first calculation result in the cache circuit until the second layer calculation is completed. After the second layer calculation is completed, the cache circuit invalidates the first calculation result retained in the cache circuit to prevent the first calculation result from being written into the main memory.
Type: Grant
Filed: August 11, 2020
Date of Patent: November 19, 2024
Assignee: GlenFly Technology Co., Ltd.
Inventors: Deming Gu, Wei Zhang, Yuanfeng Wang, Guixiang He
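The retain-then-invalidate policy can be shown with a simplified software model of the cache circuit. The `LayerCache` class, its counters, and the list-based "layer outputs" are assumptions standing in for the hardware described in the abstract.

```python
# Sketch: the producer layer's result is pinned in the cache until the
# consumer layer finishes, then invalidated (dropped) rather than written
# back to main memory, since the inter-layer result is dead at that point.

class LayerCache:
    def __init__(self):
        self.lines = {}           # key -> value, held on-chip
        self.writebacks = 0       # writes that reached main memory

    def retain(self, key, value):
        self.lines[key] = value   # pin the producer layer's result

    def invalidate(self, key):
        # Drop the line without a write-back: no main-memory traffic.
        self.lines.pop(key, None)

cache = LayerCache()
layer1_out = [1, 2, 3]                               # first layer completes
cache.retain("layer1", layer1_out)
layer2_out = [x * 2 for x in cache.lines["layer1"]]  # second layer consumes
cache.invalidate("layer1")                           # no write-back occurs
```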
-
Patent number: 12141648
Abstract: The disclosure includes a fixed retail scanner including a data reader, comprising a main board including one or more processors including a system processor, one or more camera modules, and an artificial intelligence (AI) accelerator. The system processor is configured to transmit image data received from the one or more camera modules responsive to one or more event triggers detected by the system processor, and wherein the AI accelerator is configured to perform analysis based on an AI engine local to the AI accelerator in response to the event trigger. A remote server may also be operably coupled to the fixed retail scanner through the multi-port network switch, the remote server having a remote AI engine stored therein, wherein the local AI engine within the fixed retail scanner is a simplified AI model relative to the remote AI engine within the remote server.
Type: Grant
Filed: December 21, 2022
Date of Patent: November 12, 2024
Assignee: Datalogic IP Tech S.R.L.
Inventors: Brett Howard, Aric Zandhuisen, Stefano Santi, Matt Monte, Keith Rogers, Alexander McQueen, Alan Guess
-
Patent number: 12141572
Abstract: Embodiments relate to a method, a device, and a computer program product for upgrading a virtual system. The method includes: monitoring usage of system resources by the virtual system to acquire resource usage data indicating a system resource usage state of the virtual system, the virtual system using cloud services that provide the system resources. The method further includes: in response to acquiring attribute sets of a set of candidate cloud services, determining, based on the resource usage data, whether the virtual system needs to be upgraded; in response to determining that the virtual system needs to be upgraded, determining, based on the attribute sets, that the virtual system can be upgraded; and recommending, in response to determining that the virtual system can be upgraded, a candidate cloud service from the set of candidate cloud services based on the resource usage data and the attribute sets.
Type: Grant
Filed: July 27, 2022
Date of Patent: November 12, 2024
Assignee: DELL PRODUCTS L.P.
Inventors: Simin Wang, Bing Liu
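The monitor → decide → recommend flow can be sketched as two predicates. The 0.8 utilization threshold, the candidate attribute names (`capacity`, `cost`), and the "cheapest viable candidate" selection rule are illustrative assumptions rather than the patented criteria.

```python
# Sketch: decide whether an upgrade is needed from resource usage data,
# then recommend a candidate cloud service from its attribute set.

def needs_upgrade(usage, threshold=0.8):
    """Upgrade when any monitored resource exceeds the threshold."""
    return any(v > threshold for v in usage.values())

def recommend(usage, candidates):
    """Pick the cheapest candidate whose capacity covers observed usage."""
    viable = [
        c for c in candidates
        if all(c["capacity"][res] >= used for res, used in usage.items())
    ]
    return min(viable, key=lambda c: c["cost"])["name"] if viable else None

usage = {"cpu": 0.9, "mem": 0.7}   # fractions of the current allocation
candidates = [
    {"name": "small", "capacity": {"cpu": 0.8, "mem": 1.0}, "cost": 5},
    {"name": "large", "capacity": {"cpu": 2.0, "mem": 2.0}, "cost": 9},
]
choice = recommend(usage, candidates) if needs_upgrade(usage) else None
```

The `small` candidate is rejected because its CPU capacity cannot cover the observed usage, so the method recommends `large` even though it costs more.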