Patent Applications Published on February 1, 2024
-
Publication number: 20240036898
Abstract: Some embodiments of the invention provide a method for offloading one or more data message processing services from a machine executing on a host computer. The method is performed at a virtual network interface card (VNIC) that executes on the host computer and is connected to the machine. The method receives, through a communications channel between the machine and the VNIC, (1) configuration data associated with processing data messages belonging to a particular data message flow associated with the machine, and (2) a set of service rules defined for the particular data message flow. The method determines that a first data message received at the VNIC belongs to the particular data message flow and matches at least one service rule in the set of service rules. The method performs, on the first data message, a service specified by the at least one service rule.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Peng Li, Guolin Yang, Ronak Doshi, Boon Seong Ang, Wenyi Jiang
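The flow-matching step this abstract describes can be sketched as a lookup keyed by flow identity. This is a minimal, hypothetical Python illustration, not the patent's implementation; the `FlowKey` fields and service names are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """Illustrative flow identity; real flows would use the full 5-tuple."""
    src_ip: str
    dst_ip: str
    dst_port: int

@dataclass
class ServiceRule:
    flow: FlowKey
    service: str  # e.g. "firewall" or "encrypt" -- illustrative names

class Vnic:
    def __init__(self):
        self.rules: dict[FlowKey, list[str]] = {}

    def configure_flow(self, flow: FlowKey, rules: list[ServiceRule]) -> None:
        # Step 1: receive per-flow configuration data and service rules
        # through the communications channel with the machine.
        self.rules[flow] = [r.service for r in rules if r.flow == flow]

    def process(self, flow: FlowKey) -> list[str]:
        # Steps 2-3: determine that a message belongs to the configured
        # flow and return the services the matching rules specify.
        return self.rules.get(flow, [])
```

A message on an unconfigured flow simply matches no rules and gets no offloaded services.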
-
Publication number: 20240036899
Abstract: A first device may include a pod and a processor. The processor may be configured to: receive a request, from a second device, to transfer a content item to the second device; and determine whether the content item can be transferred from the pod to the second device using a content caching container (CCC). When the processor determines that the content item cannot be transferred from the pod, the processor may be further configured to: send a reply, to the second device, indicating that the content item cannot be transferred from the pod to the second device; and enable caching the content item at the pod. When the processor determines that the content item can be transferred from the pod, the processor may be further configured to transfer the content item via a first CCC from the pod to the second device.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Jeemil Shah, Vijayan D. Nambiar, Musa Kazim Guven
-
Publication number: 20240036900
Abstract: A system for optimizing anomaly detection determines, based on a confidence score, user clustering information that indicates a cluster to which a user belongs, such that if the confidence score is more than a threshold score, the user clustering information indicates that the user belongs to a first cluster. Otherwise, the user clustering information indicates that the user belongs to a second cluster. The system determines, based on user activities in a virtual environment, user outlier information that indicates whether the user is associated with an unexpected activity. The system determines virtual resource routing information that comprises routings of virtual resources between the avatar and the other avatars within the virtual environment. The system updates the confidence score based at least in part upon at least one of the user clustering information, the user outlier information, or the virtual resource routing information.
Type: Application
Filed: July 29, 2022
Publication date: February 1, 2024
Inventors: Rama Krishnam Raju Rudraraju, Om Purushotham Akarapu
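The threshold-based clustering and the score update described above can be sketched in a few lines. This is a speculative Python illustration; the weighted-blend update rule and the clamping to [0, 1] are assumptions, since the abstract does not specify how the three signals combine.

```python
def assign_cluster(confidence: float, threshold: float) -> str:
    # Abstract's rule: strictly more than the threshold -> first cluster,
    # otherwise -> second cluster.
    return "first" if confidence > threshold else "second"

def update_confidence(confidence: float,
                      signals: dict[str, float],
                      weights: dict[str, float]) -> float:
    # Hypothetical update: blend the clustering / outlier / routing
    # signals into the score with per-signal weights, then clamp.
    delta = sum(weights[k] * v for k, v in signals.items())
    return min(1.0, max(0.0, confidence + delta))
```

An outlier signal would carry a negative value here, pulling the confidence score down and eventually moving the user into the second cluster.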
-
Publication number: 20240036901
Abstract: Implementations of the disclosure provide a method including calculating, by a processing device, a time required to have a container image ready for use; determining whether the time satisfies a threshold criterion; and responsive to determining that the time satisfies the threshold criterion, performing a synchronization operation that stores the container image in a persistent storage.
Type: Application
Filed: July 26, 2022
Publication date: February 1, 2024
Inventor: Giuseppe Scrivano
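The decision this abstract describes reduces to a threshold check followed by a conditional store. A minimal sketch, assuming "satisfies the threshold criterion" means the readiness time exceeds the threshold (the abstract leaves the comparison direction open); the function and parameter names are illustrative.

```python
def maybe_persist(image: str, ready_seconds: float,
                  threshold_seconds: float,
                  persistent_store: set[str]) -> bool:
    """If preparing the image takes longer than the threshold, store it
    persistently so later uses skip the expensive preparation."""
    if ready_seconds > threshold_seconds:
        persistent_store.add(image)  # the "synchronization operation"
        return True
    return False
```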
-
Publication number: 20240036902
Abstract: According to examples, an apparatus may include a processor that may send, to a measurements manager (MM), a first measurement for the processor, cause a hardware and/or a software to send a second measurement to the MM, and cause a virtual machine (VM) to send a third measurement to the MM. The processor may also cause the MM to accumulate the first measurement, the second measurement, and the third measurement and cause the MM to output the accumulated measurements from the MM for attestation of the processor, the hardware and/or the software, the VM, or a combination thereof.
Type: Application
Filed: July 26, 2022
Publication date: February 1, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Prashant DEWAN, Abhilasha BHARGAV-SPANTZEL
-
Publication number: 20240036903
Abstract: The present disclosure provides new and innovative systems and methods for managing nodes using polymorphic unikernels. In an example, a method includes generating, by a polymorphic unikernel service (PUS) system having a processor, a virtual machine. A generic unikernel may be created, retrieved, and/or embedded within the virtual machine. The PUS system may receive, from an Internet of Things (IoT) device (e.g., one of a plurality of nodes communicatively linked to the PUS system), a configuration file indicating a configuration of the IoT device. The PUS system may modify the generic unikernel to generate a modified unikernel based on the configuration of the IoT device. Furthermore, the PUS system may deploy the virtual machine on the IoT device. The deployed virtual machine may be embedded with the modified unikernel.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Luigi Mario Zuccarelli, Leigh Griffin
-
Publication number: 20240036904
Abstract: Some embodiments of the invention provide a method for offloading one or more data message processing services from a machine executing on a host computer. The method is performed at a virtual network interface card (VNIC) that executes within a set of virtualization software executing on the host computer and that is connected to the machine. The method uses a set of configuration data received from the machine to perform the set of data message processing services for a first set of data messages belonging to a particular data message flow associated with the machine. The method determines that a physical network interface card (PNIC) connected to the host computer is available to perform the set of data message processing services for a subsequent second set of data messages belonging to the particular data message flow. The method directs the PNIC to perform the set of data message processing services for subsequent data messages belonging to the particular data message flow.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Peng Li, Guolin Yang, Ronak Doshi, Boon Seong Ang, Wenyi Jiang
-
Publication number: 20240036905
Abstract: A migration controller of a network service management apparatus requests, upon detecting the necessity of replacing redundant hardware running on a network system with other hardware, an orchestration unit to disable the redundant hardware. Further, the migration controller issues a request to a virtualized infrastructure manager to delete a virtual machine on the redundant hardware, which causes the network system to perform a failure recovery process that redeploys the virtual machine to the hardware after replacement.
Type: Application
Filed: February 12, 2021
Publication date: February 1, 2024
Applicant: Rakuten Symphony Singapore Pte. Ltd.
Inventors: Bharath RATHINAM, Sheshan DE ZOYSA, Rahul ATRI
-
Publication number: 20240036906
Abstract: A web application server 10 receives measurement data collected by a device, sets a first virtual device corresponding to the measurement data, and, when a calculation apparatus that performs a predetermined calculation receives the measurement data and acquires first calculated data generated from the measurement data, sets a second virtual device corresponding to the first calculated data.
Type: Application
Filed: July 25, 2023
Publication date: February 1, 2024
Inventors: Hiroo URABE, Yusaku YOSHIDA
-
Publication number: 20240036907
Abstract: A method of executing an application, performed by an electronic device including a host system and a virtual machine system, includes: obtaining, through the host system, a first execution request for a first guest application installed in the virtual machine system; transmitting, from the host system to the virtual machine system, the first execution request and a generation request for a first virtual display corresponding to the first guest application; outputting a first execution result of the first guest application on the first virtual display that is generated based on the generation request; transmitting, from the virtual machine system to the host system, a first captured image of the first virtual display; and outputting, through the host system, the first captured image on a display of the electronic device.
Type: Application
Filed: July 28, 2023
Publication date: February 1, 2024
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jinwoo Shin
-
Publication number: 20240036908
Abstract: A method for supporting memory deduplication for unikernel images includes aligning, by a memory aligner entity, memory pages of unikernel images such that a consistent memory alignment is generated across the unikernel images. A memory deduplication identifier entity generates a unique page identifier for a plurality of memory pages of the unikernel images. The memory deduplication identifier entity matches page identifiers of memory pages for a unikernel image, which is to be loaded into a physical memory, with page identifiers of memory pages that have already been loaded into the physical memory, and provides matching information about the matching to a page merger entity. The page merger entity performs page merging based on the matching information provided by the memory deduplication identifier entity.
Type: Application
Filed: April 21, 2021
Publication date: February 1, 2024
Inventors: Felipe HUICI, Giuseppe SIRACUSANO, Davide SANVITO
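The identify-match-merge pipeline above can be sketched with content hashes standing in for the "unique page identifiers". This is a hedged illustration, not the patent's mechanism: using SHA-256 over page contents and a plain dict as "physical memory" are assumptions for the example.

```python
import hashlib

PAGE_SIZE = 4096  # assumed page granularity

def page_id(page: bytes) -> str:
    # A content hash serves as the unique page identifier.
    return hashlib.sha256(page).hexdigest()

def dedup_load(image_pages: list[bytes],
               loaded: dict[str, bytes]) -> tuple[int, int]:
    """Load a unikernel image's pages, merging any page whose identifier
    matches one already in 'physical memory'. Returns (merged, new)."""
    merged = new = 0
    for page in image_pages:
        pid = page_id(page)
        if pid in loaded:
            merged += 1          # page merger: share the existing copy
        else:
            loaded[pid] = page   # first occurrence enters physical memory
            new += 1
    return merged, new
```

The consistent alignment step in the abstract matters because two identical pages only hash equal if their contents start at the same offsets across images.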
-
Publication number: 20240036909
Abstract: The present application discloses a method for modifying an internal configuration of a virtual machine, a system, and a device. The method is applied to a virtual machine in which a proxy service is installed; the proxy service is configured to, after it starts up, send a datum request to a preset IP address via a virtual network card corresponding to the virtual machine. The method includes: when there is a target virtual network card sending a datum request to the preset IP address, determining, according to a predetermined correspondence between virtual network cards and virtual machines, a target virtual machine corresponding to the target virtual network card; and obtaining, from a database, target configuration data corresponding to the target virtual machine.
Type: Application
Filed: January 26, 2022
Publication date: February 1, 2024
Inventors: Yan XIE, Weifeng LIU, Xuliang GUO
-
Publication number: 20240036910
Abstract: The current document is directed to a meta-level management system (“MMS”) that aggregates information and functionalities provided by multiple underlying management systems in addition to providing additional information and management functionalities. In one implementation, the MMS creates and maintains a single inventory-and-configuration-management database (“ICMDB”), implemented using a graph database, to store a comprehensive inventory of managed entities known to, and managed by, the multiple underlying management systems. Each managed entity is associated with an entity identifier and is represented in the ICMDB by a node. Managed entities that are managed by two or more of the multiple underlying management systems are represented by nodes that include references to one or more namespaces.
Type: Application
Filed: May 17, 2023
Publication date: February 1, 2024
Applicant: VMware, Inc.
Inventors: Nicholas Mark Grant Stephen, Santoshkumar Kavadimatti, Saurabh Kedia
-
Publication number: 20240036911
Abstract: Disclosed are a computing device and an operating method thereof. A method of operating a computing system including a host system and a virtual machine system includes: receiving, by the virtual machine system, a request for executing a virtual machine application; scheduling, by a CPU scheduler of the virtual machine system, the virtual machine application for which execution is requested to be primarily executed; providing a result of the scheduling to a kernel; transmitting a request for confirming a resources preemption right for the virtual machine application to a CPU scheduler of the host system; and determining, by the CPU scheduler of the host system, a resources preemption right of the virtual machine application by referring to a host scheduling list, and providing information about the determined resources preemption right to the CPU scheduler of the virtual machine system.
Type: Application
Filed: July 11, 2023
Publication date: February 1, 2024
Inventors: Backki KIM, Bongwon SEO
-
Publication number: 20240036912
Abstract: A system for facilitating workload portability is provided. The system includes a target server instantiated at a target platform and configured to store, at the target platform, one or more snapshots of a workload executing at a source platform. The snapshots are captured at each defined time-interval and correspond to an incremental change in the workload in raw state. The stored snapshots include boot files and data files. The target server updates, based on a trigger pertaining to workload portability of the workload, the boot files and the data files with configuration supported by the target platform. The target server executes the updated boot files and data files at the target platform. The execution of the workload at the target platform is identical to the execution thereof at the source platform.
Type: Application
Filed: July 27, 2023
Publication date: February 1, 2024
Inventors: Yogesh Anyapanawar, Sourav Kumar Patjoshi, Rishabh Kemni
-
METHODS AND SYSTEMS FOR AUTOMATING DEPLOYMENT OF APPLICATIONS IN A MULTI-TENANT DATABASE ENVIRONMENT
Publication number: 20240036913
Abstract: In accordance with embodiments disclosed herein, there are provided mechanisms and methods for automating deployment of applications in a multi-tenant database environment. For example, in one embodiment, mechanisms include managing a plurality of machines operating as a machine farm within a datacenter by executing an agent provisioning script at a control hub, instructing the plurality of machines to download and instantiate a lightweight agent; pushing a plurality of URL (Uniform Resource Locator) references from the control hub to the instantiated lightweight agent on each of the plurality of machines specifying one or more applications to be provisioned and one or more dependencies for each of the applications; and loading, via the lightweight agent at each of the plurality of machines, the one or more applications and the one or more dependencies for each of the one or more applications into memory of each respective machine.
Type: Application
Filed: October 13, 2023
Publication date: February 1, 2024
Applicant: Salesforce, Inc.
Inventors: Pallav Kothari, Phillip Oliver Metting van Rijn
-
Publication number: 20240036914
Abstract: In some embodiments, a method includes receiving zonal topology information related to a zonal topology of a plurality of zones; utilizing the zonal topology information to perform a level strength assessment of each level of a plurality of levels associated with the zonal topology of the plurality of zones; and based on the level strength assessment of each level of the plurality of levels, scaling a target number of resources to at least a first level of the plurality of levels of the zonal topology. In some embodiments of the method, the level strength assessment includes performing a level-by-level breadth analysis of each level of the plurality of levels of the zonal topology.
Type: Application
Filed: August 1, 2022
Publication date: February 1, 2024
Applicant: Visa International Service Association
Inventor: Varadharajan Raghavendran
-
Publication number: 20240036915
Abstract: Apparatuses, systems, and techniques to execute CUDA programs. In at least one embodiment, an application programming interface is performed to determine a scheduling policy of one or more blocks of one or more threads.
Type: Application
Filed: September 28, 2022
Publication date: February 1, 2024
Inventors: Ze Long, Kyrylo Perelygin, Harold Carter Edwards, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Ronny Meir Krashinsky, Girish Bhaskarrao Bharambe
-
Publication number: 20240036916
Abstract: Apparatuses, systems, and techniques to execute CUDA programs. In at least one embodiment, an application programming interface is performed to indicate a maximum number of blocks of threads capable of being scheduled in parallel.
Type: Application
Filed: September 28, 2022
Publication date: February 1, 2024
Inventors: Ze Long, Kyrylo Perelygin, Harold Carter Edwards, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Ronny Meir Krashinsky, Girish Bhaskarrao Bharambe
-
Publication number: 20240036917
Abstract: Apparatuses, systems, and techniques to execute CUDA programs. In at least one embodiment, an application programming interface is performed to indicate a maximum number of blocks of threads to be scheduled in parallel.
Type: Application
Filed: September 28, 2022
Publication date: February 1, 2024
Inventors: Ze Long, Kyrylo Perelygin, Harold Carter Edwards, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Ronny Meir Krashinsky, Girish Bhaskarrao Bharambe
-
Publication number: 20240036918
Abstract: Apparatuses, systems, and techniques to execute CUDA programs. In at least one embodiment, an application programming interface is performed to cause a kernel to be generated to cause two or more blocks of two or more threads to be scheduled in parallel.
Type: Application
Filed: September 28, 2022
Publication date: February 1, 2024
Inventors: Ze Long, Kyrylo Perelygin, Harold Carter Edwards, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Ronny Meir Krashinsky, Girish Bhaskarrao Bharambe
-
Publication number: 20240036919
Abstract: A method and a processor are provided, the processor comprising a command processing unit to receive, from a host processor, a sequence of commands to be executed, and to generate a plurality of tasks based on the sequence of commands. The processor also comprises a plurality of compute units, each having a first processing module for executing tasks of a first task type, a second processing module for executing tasks of a second task type different from the first task type, and a local cache shared by at least the first processing module and the second processing module. The command processing unit issues the plurality of tasks to at least one of the plurality of compute units, and at least one of the plurality of compute units processes at least one of the plurality of tasks.
Type: Application
Filed: July 26, 2023
Publication date: February 1, 2024
Applicant: Arm Limited
Inventors: Alexander Eugene Chalfin, John Wakefield Brothers, III, Rune Holm, Samuel James Edward Martin
-
Publication number: 20240036920
Abstract: The present disclosure generally relates to resource scheduling systems, and more particularly to a method and a system for priority-based resource scheduling with load balancing. The method includes receiving, from client devices, client requests to execute tasks on servers associated with one or more Virtual Machines (VMs). Upon receiving the client requests, the method determines usage-related information of each server. The method then prioritizes the received client requests based on request parameters associated with the client requests, and assigns computing resources in the servers to execute tasks for the client devices using a dynamic programming technique. The method dynamically monitors the usage-related information of the servers and, based on the usage-related information, migrates tasks from a first server to a second server using a round-robin technique and/or a graph theory technique.
Type: Application
Filed: July 28, 2023
Publication date: February 1, 2024
Inventor: KAMALRAJ CHANDRASEKARAN
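The prioritization step above maps naturally onto a priority queue. A minimal sketch, not the patent's algorithm: a lower number stands for higher priority, and the mapping from request parameters to a numeric priority is an assumption for the example.

```python
import heapq

def schedule(requests: list[tuple[int, str]]) -> list[str]:
    """Order client requests by priority (lower number = higher
    priority). Each request is a (priority, task_name) pair."""
    heap = list(requests)
    heapq.heapify(heap)
    order = []
    while heap:
        _, task = heapq.heappop(heap)
        order.append(task)
    return order
```

The dynamic-programming assignment and round-robin migration the abstract mentions would sit downstream of this ordering, consuming tasks in priority order.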
-
Publication number: 20240036921
Abstract: Methods, systems, and apparatuses for graph stream processing are disclosed. One apparatus includes a cascade of graph streaming processors, wherein each of the graph streaming processors includes a processor array and a graph streaming processor scheduler. The cascade of graph streaming processors further includes a plurality of shared command buffers, wherein each shared command buffer includes a buffer address, a write pointer, and a read pointer. For each of the plurality of shared command buffers, a graph streaming processor writes commands to the shared command buffer as indicated by the write pointer of the shared command buffer, and the graph streaming processor reads commands from the shared command buffer as indicated by the read pointer, wherein at least one graph streaming processor scheduler operates to manage the write pointer and the read pointer to avoid overwriting unused commands of the shared command buffer.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Applicant: Blaize, Inc.
Inventors: Venkata Ganapathi Puppala, Sarvendra Govindammagari, Lokesh Agarwal, Satyaki Koneru
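The write-pointer/read-pointer discipline described above is a classic ring buffer. A minimal Python sketch under that reading, not the patent's hardware design: the scheduler's role is modeled by `push` refusing to overwrite commands that have not yet been read.

```python
class CommandBuffer:
    """Fixed-size shared command buffer with wrap-around write and read
    pointers; writes that would clobber unread commands are rejected."""
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.write = 0   # next slot to write
        self.read = 0    # next slot to read
        self.count = 0   # unread commands currently in the buffer

    def push(self, cmd) -> bool:
        if self.count == len(self.buf):
            return False  # would overwrite an unused (unread) command
        self.buf[self.write] = cmd
        self.write = (self.write + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None
        cmd = self.buf[self.read]
        self.read = (self.read + 1) % len(self.buf)
        self.count -= 1
        return cmd
```

Reads free slots, so a producer blocked by a full buffer can make progress as soon as the consumer drains a command.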
-
Publication number: 20240036922
Abstract: A method for distributed asynchronous dependency-based content processing includes: receiving a multi-media digital content and a task to be performed on the multi-media digital content; determining types of the multi-media digital content; generating a workflow graph for each type of the content, each workflow graph including data dependency conditions of the task; generating a task message for each workflow graph; broadcasting the task messages to a respective task queue of an initial state of a workflow manager; responding to the broadcast task messages by a respective processing node; processing the task based on the respective workflow graph in the broadcast task message, and including a result of the processing for each workflow graph in a result message; broadcasting the result messages by the respective processing node; accumulating the broadcast result messages responsive to respective workflow graphs; and outputting the accumulated results to a user or an external system.
Type: Application
Filed: August 1, 2022
Publication date: February 1, 2024
Inventor: Jonathan Wintrode
-
Publication number: 20240036923
Abstract: Aspects of the present disclosure relate to an apparatus comprising a plurality of processing elements having a spatial layout, and control circuitry to assign workloads to said plurality of processing elements. The control circuitry is configured to, based on a timing parameter, determine one or more active processing elements to deactivate; determine, based on the spatial layout, one or more inactive processing elements to activate; and deactivate said one or more active processing elements and activate said one or more inactive processing elements.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Inventors: Rishav ROY, Supreet JELOKA, Shidhartha DAS, Rahul MATHUR
-
Publication number: 20240036924
Abstract: A method for managing cloud resource dependencies is described. The method may include receiving a resource configuration of a first resource. The method may include identifying a dependency of a first stage of a first resource on a second resource and performing a topological sort of a plurality of resources, based at least in part on the dependency of the first stage of the first resource. The method may include constructing a dependency graph including the plurality of resources, including the first stage of the first resource in a subordinate rank and the second resource in a superior rank, corresponding to the topological sort. The method may include generating an execution queue including the second resource in a priority execution position in the execution queue. The method may include executing the plurality of resources according to the execution queue.
Type: Application
Filed: October 12, 2023
Publication date: February 1, 2024
Applicant: Oracle International Corporation
Inventors: Abishek Murali Mohan, Alaa Shaker
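The topological sort the abstract relies on can be sketched with Kahn's algorithm: resources with no unmet dependencies (the "superior rank") enter the execution queue ahead of the stages that depend on them. This is an illustrative sketch, not Oracle's implementation; the resource names are made up.

```python
from collections import deque

def execution_queue(deps: dict[str, set[str]]) -> list[str]:
    """Kahn's algorithm: deps maps each resource to the set of
    resources it depends on; returns a valid execution order."""
    indeg = {r: len(ds) for r, ds in deps.items()}
    dependents: dict[str, list[str]] = {r: [] for r in deps}
    for r, ds in deps.items():
        for d in ds:
            dependents[d].append(r)
    queue = deque(sorted(r for r, n in indeg.items() if n == 0))
    order = []
    while queue:
        r = queue.popleft()
        order.append(r)
        for child in dependents[r]:
            indeg[child] -= 1
            if indeg[child] == 0:
                queue.append(child)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order
```

A cycle means no execution queue exists, which the function surfaces as an error rather than looping forever.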
-
Publication number: 20240036925
Abstract: A Logically Composed System (LCS) Smart Data Accelerator Interface (SDXI) data plane configuration system includes a resource management system coupled to an orchestrator device that is coupled to a plurality of resource devices. The resource management system discovers a first SDXI node in the plurality of resource devices, with the first SDXI node configured to process SDXI information and including a first memory subsystem that is configured to provide an SDXI memory space. The resource management system also identifies first memory subsystem capabilit(ies) of the first memory subsystem included in the first SDXI node and, when the resource management system subsequently receives an LCS request, it composes an LCS that includes an SDXI data plane provided by the first SDXI node based on capabilities requirement(s) identified in the LCS request being satisfied by the first memory subsystem capabilit(ies) of the first memory subsystem included in the first SDXI node.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Shyamkumar T. Iyer, Srinivas Giri Raju Gowda
-
Publication number: 20240036926
Abstract: Provided is a resource allocation method, an electronic device and a storage medium, relating to the field of computer technology, and in particular to fields of resource management, task allocation and the like in computer technology. The resource allocation method includes: creating a pod for a target task; acquiring Graphics Processing Unit (GPU) resource requirement information of the target task; acquiring available node information of a target cluster and available GPU resource information of the target cluster; and allocating, based on the available node information and the available GPU resource information, first and second target nodes satisfying the GPU resource requirement information to the pod, where the first target node is a node where a target GPU resource allocated to the pod is located, and the second target node is a node where the pod allocated to the pod is located.
Type: Application
Filed: March 1, 2023
Publication date: February 1, 2024
Inventor: Yeda Fan
-
Publication number: 20240036927
Abstract: A method for processing a workflow in combination with a task, a task processing method, and related devices are provided. While the workflow is being reported, a task control is created in a preset area in response to a task addition instruction. A target task to be added may be inserted into the task control for display. In this way, the workflow with the added target task can be reported to a target user.
Type: Application
Filed: July 21, 2023
Publication date: February 1, 2024
Inventors: Jiangfeng MA, Wei HUANG, Xinlei GUO, Qi WEN, Shengxin QIU
-
Publication number: 20240036928
Abstract: Embodiments described herein reduce resource insufficiency of a resource source despite inconsistent resource accumulation at the resource source. For example, a request frequency may be determined to define times at which the resource source is predicted to be sufficient despite the inconsistent accumulation or influx. In one use case, with respect to a distributed computing environment having computing resource source(s)/pool(s), a requesting system may identify a machine learning model trained to generate predictions for a resource source at which inconsistent resource accumulation occurs. The system may obtain accumulation data that describes accumulation events at which resources were made available at the resource source.
Type: Application
Filed: July 26, 2022
Publication date: February 1, 2024
Applicant: Capital One Services, LLC
Inventors: Abhay DONTHI, Tania CRUZ MORALES, Jason ZWIERZYNSKI, Joshua EDWARDS, Jennifer KWOK, Sara Rose BRODSKY
-
Publication number: 20240036929
Abstract: Computing systems, for example, multi-tenant systems deploy software artifacts in datacenters created in a cloud platform. The system receives multiple version maps. Each version map provides version information for a particular context associated with the datacenter. The context may specify a target environment, a target datacenter entity, or a target action to be performed on the cloud platform. The system generates an aggregate pipeline comprising a hierarchy of pipelines. The system generates an aggregate version map associating datacenter entities of the datacenter with versions of software artifacts targeted for deployment on the datacenter entities and versions of pipelines. The system executes the aggregate pipeline in conjunction with the aggregate version map to perform requested operations on the datacenter configured on the cloud platform, for example, provisioning resources or deploying services.
Type: Application
Filed: July 29, 2022
Publication date: February 1, 2024
Inventors: Christopher Steven Moyes, Zemann Phoesop Sheen, Srinivas Dhruvakumar, Mayakrishnan Chakkarapani
-
Publication number: 20240036930
Abstract: Various example embodiments relate to methods, devices, and/or systems for executing a plurality of tasks in a computer operating environment. The method comprises monitoring a remaining execution time of a current task executing on the processing circuitry, and performing, by the processing circuitry, pre-emption of the current task based on the remaining execution time of the current task and a desired execution threshold time, the desired execution threshold time being one of a desired value based on one or more configuration parameters of the current task or a dynamic value determined based on information related to ongoing activities in the computer operating environment.
Type: Application
Filed: July 25, 2023
Publication date: February 1, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventors: Sudharshan Rao B, Tushar VRIND, Venkata Raju INDUKURI
-
Publication number: 20240036931
Abstract: A method for managing the transfer of a live containerized stateful process automation application from a source node to a target node of a process control system includes obtaining data relating to execution of the application at the source node and deriving from the data an application execution profile; obtaining an evaluation of available computing resources at the target node; determining feasibility of the transfer by comparing the available computing resources to the application execution profile; and in response to the transfer being determined to be feasible, initiating the transfer of the application from the source node to the target node.
Type: Application
Filed: October 13, 2023
Publication date: February 1, 2024
Applicant: ABB Schweiz AG
Inventors: Pablo Rodriguez, Heiko Koziolek, Andreas Burger, Julius Rueckert
-
Publication number: 20240036932
Abstract: Disclosed herein is a graphics processor that comprises a programmable execution unit operable to execute programs to perform graphics processing operations. The graphics processor further comprises a dedicated machine learning processing circuit operable to perform processing operations for machine learning processing tasks. The machine learning processing circuit is in communication with the programmable execution unit internally to the graphics processor. In this way, the graphics processor can be configured such that machine learning processing tasks can be performed by the programmable execution unit, the machine learning processing circuit, or a combination of both, with the different units being able to message each other accordingly to control the processing.
Type: Application
Filed: July 26, 2023
Publication date: February 1, 2024
Applicant: Arm Limited
Inventors: Daren Croxford, Sharjeel Saeed, Isidoros Sideris
-
Publication number: 20240036933Abstract: A system includes a subsystem, a database, a memory, and a processor. The subsystem includes a computational resource associated with a resource usage and having a capacity corresponding to a maximum resource usage value. The database stores training data that includes historical resource usages and historical events. The memory stores a machine learning algorithm that is trained, based on the training data, to predict, based on the occurrence of an event, that a future value of the resource usage at a future time will be greater than the maximum value. The processor detects that the event has occurred. In response, the processor applies the machine learning algorithm to predict that the future value of the resource usage will be greater than the maximum value. Prior to the future time, the processor increases the capacity of the computational resource to accommodate the future value of the resource usage.Type: ApplicationFiled: September 28, 2023Publication date: February 1, 2024Inventor: Naga Vamsi Krishna Akkapeddi
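The event-triggered scaling flow above can be sketched as a single handler. Here `predict_usage` stands in for the trained machine learning model and `grow` for the capacity-provisioning call; both are hypothetical placeholders:

```python
def handle_event(event, predict_usage, capacity, grow):
    """On detecting an event, predict the future resource usage it implies;
    if that prediction exceeds the current maximum capacity, increase the
    capacity ahead of the future time so the usage can be accommodated."""
    predicted = predict_usage(event)
    if predicted > capacity:
        capacity = grow(predicted)  # provision enough for the predicted peak
    return capacity
```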
-
Publication number: 20240036934Abstract: Techniques discussed herein relate to provisioning one or more virtual cloud-computing edge devices at a physical cloud-computing edge device. A manifest may be generated/utilized to specify various attributes of the virtual cloud-computing edge devices to be executed at a physical cloud-computing edge device. A first set of resources corresponding to a first virtual cloud-computing edge device may be obtained from memory of a centralized cloud-environment and provisioned at the first virtual cloud-computing edge device. Similar operations may be performed with respect to a second virtual cloud-computing edge device. The techniques described herein split the physical edge device into multiple virtual device resources that can be utilized in combination or separately to extend the functionality and versatility of the physical edge device.Type: ApplicationFiled: July 27, 2022Publication date: February 1, 2024Applicant: Oracle International CorporationInventors: Naren Shivashankar Vasanad, Pradeep Kumar Vijay
-
Publication number: 20240036935Abstract: An LCS SDXI resource ownership system includes a resource system having an orchestrator device coupled to resource devices and a resource management system. An SDXI controller subsystem is provided by the resource management system and/or the orchestrator device, and operates to use the first resource system to provide an LCS with an SDXI data plane provided by an SDXI node included in the resource devices, and create an SDXI configuration space for the LCS. The SDXI controller subsystem then receives a unique LCS identifier from the LCS via the SDXI configuration space, and links the SDXI node to the LCS in an SDXI resource database using the unique LCS identifier. The SDXI controller subsystem then migrates the LCS to a second resource system, and the LCS performs operations using the SDXI node following migration to the second resource system based on the linking of the LCS to the SDXI node.Type: ApplicationFiled: July 28, 2022Publication date: February 1, 2024Inventors: Srinivas Giri Raju Gowda, Shyamkumar T. Iyer
-
Publication number: 20240036936Abstract: A system and method for providing cloud virtualization (SV) is disclosed. According to one embodiment, a system includes a transactional cloud manager and a compute cluster connected to the transactional cloud manager. The compute cluster includes a system monitor and a control manager in a host. A virtual machine runs on the host, wherein the virtual machine has a VM system monitor and a VM control manager. The transactional cloud manager creates virtual machine clusters on the host.Type: ApplicationFiled: October 6, 2023Publication date: February 1, 2024Inventor: Sreekumar Nair
-
Publication number: 20240036937Abstract: Disclosed are aspects of workload selection and placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some aspects, workloads are assigned to virtual graphics processing unit (vGPU)-enabled graphics processing units (GPUs). A number of vGPU placement neural networks are trained to maximize a composite efficiency metric based on workload data and GPU data for the plurality of vGPU placement models. A combined neural network selector is generated using the vGPU placement neural networks, and utilized to assign a workload to a vGPU-enabled GPU.Type: ApplicationFiled: October 9, 2023Publication date: February 1, 2024Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu
-
Publication number: 20240036938Abstract: Systems and methods are provided for a modular switch system that comprises disaggregated components, plugins, and managers that enable flexibility to adjust the dynamic configuration of a switch system. This can create modularity and customizability at different times of the lifecycle of the currently configured switch system.Type: ApplicationFiled: July 28, 2022Publication date: February 1, 2024Inventors: DEJAN S. MILOJICIC, DUNCAN ROWETH, DEREK SCHUMACHER
-
Publication number: 20240036939Abstract: Disclosed herein are system, method, and computer program product embodiments for performing deterministic execution of background jobs in a load-balanced system. An embodiment operates by receiving, at a work server in a load-balanced system, job submission code from a client connected to the work server, wherein the job submission code performs a background job for the client. The embodiment then executes, at the work server, the job submission code. The execution of the job submission code obtains a name of the work server executing the job submission code, maps the name of the work server to a logical server name, and submits the background job for background processing using a job processing function that executes the background job on the logical server name.Type: ApplicationFiled: February 28, 2023Publication date: February 1, 2024Inventors: Alexander Ocher, Sreenivasulu Gelle
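The name-mapping step above is what makes the submission deterministic: whichever physical work server the client happens to land on, the background job is submitted against a stable logical server name. A minimal sketch, with a hypothetical hostname-to-logical-name table:

```python
import socket

# Hypothetical mapping from physical work-server hostnames to stable logical
# server names; all physical servers behind one load balancer map to the same
# logical name, so the submission target does not depend on load balancing.
LOGICAL_NAMES = {"worker-a-7f3": "logical-1", "worker-b-9c1": "logical-1"}

def submit_background_job(job, submit, hostname=None):
    host = hostname or socket.gethostname()   # name of the executing work server
    logical = LOGICAL_NAMES.get(host, host)   # map it to its logical server name
    return submit(job, server=logical)        # submit against the logical name
```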
-
Publication number: 20240036940Abstract: Methods, systems, and devices for performing an acceleration process by offloading an operation. The system includes a hardware offloading engine that includes a hardware accelerator for performing the acceleration process. The hardware accelerator has a processor configured to receive a hardware offloading command, the hardware offloading command including an operation code, an input pointer, and an output pointer, in which at least one of the input pointer or the output pointer includes a unified data pointer that includes one or more bits of memory for identifying the source location for the input data or the destination location of the output data, parse the operation code, the input pointer, and the output pointer from the hardware offloading command, retrieve the input data based on the input pointer, and execute an offloaded operation on the input data based on the operation code.Type: ApplicationFiled: October 12, 2023Publication date: February 1, 2024Inventors: Chul LEE, Hui ZHANG, Shan XIAO, Bo LI, Ping ZHOU, Fei LIU
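The parsing step above — splitting an offload command into operation code, input pointer, and output pointer, where the unified pointers carry location bits — can be sketched as follows. The 24-byte layout and the 8-bit location field are assumptions for illustration; the abstract does not disclose the real bit layout:

```python
import struct

# Hypothetical 24-byte command: an 8-byte operation code followed by two
# 8-byte unified data pointers (little-endian). The top 8 bits of each
# pointer identify the memory location (e.g. 0 = host DRAM, 1 = device
# memory); the low 56 bits are the address.
LOCATION_SHIFT = 56
ADDR_MASK = (1 << LOCATION_SHIFT) - 1

def parse_offload_command(buf: bytes):
    opcode, in_ptr, out_ptr = struct.unpack("<QQQ", buf)
    return {
        "opcode": opcode,
        "input": (in_ptr >> LOCATION_SHIFT, in_ptr & ADDR_MASK),
        "output": (out_ptr >> LOCATION_SHIFT, out_ptr & ADDR_MASK),
    }
```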
-
Publication number: 20240036941Abstract: A vehicle-mounted computer includes physical resources, including a processor with a plurality of three or more cores and a physical device having a register, and generates a plurality of three or more virtual devices by allocating the physical resources through time division. The computer determines whether or not the cores are operating, and if one core is determined not to be operating, specifies the migration-destination core based on the amount of change in a register value of the physical device used when migrating the virtual device that was operating on the one core to the other cores, and migrates that virtual device.Type: ApplicationFiled: December 6, 2021Publication date: February 1, 2024Inventors: Tadahiro TAKAZAWA, Koji YASUDA
-
Publication number: 20240036942Abstract: An information processing method and apparatus, a device, and a storage medium are provided. The method includes: obtaining description information of a computing power task or description information of a service; and performing a first operation according to the description information of the computing power task or the description information of the service. The first operation includes at least one of the following: determining a first request; and sending the first request; where the first request is used to request computing power requirement information of the computing power task or computing power requirement information of the service; and the first request includes the description information of the computing power task or the description information of the service.Type: ApplicationFiled: June 27, 2023Publication date: February 1, 2024Applicant: VIVO MOBILE COMMUNICATION CO., LTD.Inventors: Huazhang LV, Xiaowan KE, Wei BAO
-
Publication number: 20240036943Abstract: A vehicle system includes a master controller, a plurality of control modules, and a vehicle network communicatively coupling the master controller and the control modules. The master controller is programmed to receive a set of dependencies between a plurality of applications, receive resource limitations for the control modules, and allocate the applications to the control modules based on the dependencies and the resource limitations.Type: ApplicationFiled: July 26, 2022Publication date: February 1, 2024Applicant: Ford Global Technologies, LLCInventors: Kexun Chen, Abdullah Ali Husain
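The allocation step above — placing applications onto control modules subject to dependencies and per-module resource limitations — could be realized by many strategies; one simple greedy sketch (all names and the co-location heuristic are hypothetical, not the claimed method):

```python
def allocate(applications, dependencies, capacities):
    """Greedy placement: prefer co-locating an application with the modules
    already hosting its dependencies when remaining capacity allows,
    otherwise fall back to the first module with room."""
    placement, remaining = {}, dict(capacities)
    for app, cost in applications.items():
        preferred = [placement[d] for d in dependencies.get(app, []) if d in placement]
        candidates = preferred + [m for m in remaining if m not in preferred]
        for module in candidates:
            if remaining[module] >= cost:
                placement[app] = module       # assign app to this module
                remaining[module] -= cost     # consume its resource budget
                break
    return placement
```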
-
Publication number: 20240036944Abstract: Apparatuses, systems, and techniques to execute CUDA programs. In at least one embodiment, an application programming interface is performed to indicate whether one or more threads within two or more blocks of threads have performed a barrier instruction.Type: ApplicationFiled: September 28, 2022Publication date: February 1, 2024Inventors: Ze Long, Kyrylo Perelygin, Harold Carter Edwards, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Ronny Meir Krashinsky, Girish Bhaskarrao Bharambe
-
Publication number: 20240036945Abstract: Apparatuses, systems, and techniques to execute CUDA programs. In at least one embodiment, an application programming interface is performed to cause performance of one or more threads within a group of blocks of threads to stop at least until all threads within the group of blocks have performed a barrier instruction.Type: ApplicationFiled: September 28, 2022Publication date: February 1, 2024Inventors: Ze Long, Kyrylo Perelygin, Harold Carter Edwards, Gokul Ramaswamy Hirisave Chandra Shekhara, Jaydeep Marathe, Ronny Meir Krashinsky, Girish Bhaskarrao Bharambe
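The semantics described in the two preceding abstracts — threads stopping until every thread across a group of blocks has performed a barrier instruction — can be illustrated with an ordinary CPU-thread analogy using Python's `threading.Barrier`. This is only an analogy for the synchronization behavior, not CUDA code:

```python
import threading

def run_group_barrier(n_threads=4):
    """Every worker stops at the barrier until all of them have performed it,
    mirroring a barrier across a whole group of thread blocks."""
    barrier = threading.Barrier(n_threads)
    order = []
    def worker(i):
        order.append("before")   # work preceding the barrier
        barrier.wait()           # stop here until all threads have arrived
        order.append("after")    # work after the whole group synchronized
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order
```

Because no thread passes `barrier.wait()` until all have reached it, every "before" entry is recorded ahead of every "after" entry.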
-
Publication number: 20240036946Abstract: Embodiments may facilitate event processing for an ABAP platform. A business object data store may include a RAP model, including a behavior definition, for a business object. A framework may automatically transform the behavior definition of the RAP model into a producer event via an event binding and a cloud event standardized format. Information about the producer event may then be passed to an ABAP application associated with a pre-configured destination at an enterprise business technology platform. In some embodiments, a standalone API enterprise hub data store may contain an event specification. An ABAP development tenant of a business technology platform may automatically parse the event specification and translate the parsed information into high-level programming language structures that reflect an event type at runtime. An event consumption model may then be generated based on the event type.Type: ApplicationFiled: July 26, 2022Publication date: February 1, 2024Inventors: Martin MUELLER, Andre PANY, Thomas EHRET, Raphael DIBBERN, Jonas BRAUN, Roland TRAPP, Ihlas BASHA, Nadine BAUMGAERTEL, Vanessa RAU, Silvana STRAUS, Tatjana PFEIFER, Jens ROESSLER, Roman BELOSLUDTSEV, Arne RANTZEN, Jes Sie CHEAH
-
Publication number: 20240036947Abstract: A method and system for using a configuration-based framework for testing an application programming interface (API) are provided. The method includes receiving identification information about one or more APIs to be tested; defining, based on the identification information, at least two API endpoints and one or more dependencies to be tested; retrieving an authentication model to be used for accessing the APIs; generating a testing plan based on the API endpoints, the dependencies, and the authentication model; executing a test of the APIs based on the testing plan; and displaying at least one result of the executed test on a graphical user interface (GUI).Type: ApplicationFiled: September 12, 2022Publication date: February 1, 2024Applicant: JPMorgan Chase Bank, N.A.Inventors: Satya GHATTU, Prasad GUNDETI, Yousuf NIZAM
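The plan-generation step above combines endpoints, their dependencies, and the authentication model. One natural sketch orders the endpoint tests so every dependency is exercised first (a depth-first topological ordering); the function and field names are hypothetical:

```python
def generate_test_plan(endpoints, dependencies, auth_model):
    """Build an ordered testing plan: each endpoint's dependencies are
    scheduled before the endpoint itself, and every step carries the
    retrieved authentication model."""
    plan, seen = [], set()
    def visit(endpoint):
        if endpoint in seen:
            return
        seen.add(endpoint)
        for dep in dependencies.get(endpoint, []):
            visit(dep)                          # test dependencies first
        plan.append({"endpoint": endpoint, "auth": auth_model})
    for endpoint in endpoints:
        visit(endpoint)
    return plan
```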