Patent Applications Published on October 31, 2024
-
USING VIRTUAL MACHINE PRIVILEGE LEVELS TO CONTROL WRITE ACCESS TO KERNEL MEMORY IN A VIRTUAL MACHINE
Publication number: 20240362049
Abstract: Write access to kernel memory in a virtual machine (VM) can be controlled using virtual machine privilege levels (VMPLs). In one example, a guest kernel can detect an attempt by a device driver to perform a write operation using a first virtual central processing unit (vCPU) with a first VMPL. The write operation can correspond to a particular kernel memory address for the guest kernel, and the first VMPL may have fewer permissions than a second VMPL. In response to detecting the write operation, the guest kernel can exit to a hypervisor associated with the guest kernel based on the first VMPL. In response, the hypervisor can launch a second vCPU with the second VMPL. The second vCPU can determine that a range of kernel memory for the guest kernel does not comprise the particular kernel memory address. In response, the device driver, using the first vCPU, can execute the write operation.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Inventor: Bandan Das
-
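The VMPL-gated write flow in 20240362049 can be sketched as a toy model: writes into a guarded kernel range trigger a check at the more-privileged level before the driver's write proceeds. This is an illustrative sketch, not SEV-SNP code; the address constants and function names are assumptions.

```python
# Toy model of a VMPL-gated kernel write check (illustrative names/addresses).

PROTECTED = {0x1000, 0x1008}            # guarded kernel addresses (assumed)
KERNEL_SPACE = range(0x1000, 0x3000)    # guest kernel address range (assumed)

def vmpl0_allows(addr):
    """Check run by the second vCPU at the more-privileged VMPL."""
    return addr not in PROTECTED

def driver_write(addr, value, memory):
    """Write path for the device driver on the less-privileged vCPU."""
    if addr in KERNEL_SPACE:            # guest kernel detects the attempt
        if not vmpl0_allows(addr):      # exit to hypervisor -> VMPL0 check
            return False                # protected address: write rejected
    memory[addr] = value                # otherwise the driver's write proceeds
    return True
```

The key property mirrored here is that the less-privileged vCPU never decides for itself whether a guarded address is writable; that decision is made at the higher privilege level.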
Publication number: 20240362050
Abstract: The disclosure provides a method for handling heterogeneous input/output (I/O) load for containers running in a virtualized environment. The method generally includes receiving, from an application running in a container, an I/O indicating to write data to a persistent volume backed by a virtual disk file in storage, determining a maximum number of in-flight write I/Os allowed for the persistent volume based on a share of a total write I/O bandwidth assigned to the virtual disk file and allocated to the persistent volume, determining a current number of in-flight write I/Os for the persistent volume, and determining whether the current number of in-flight write I/Os for the persistent volume is greater than or equal to the maximum number of in-flight write I/Os allowed for the persistent volume to determine whether the received I/O is to be rejected or processed.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Inventor: Kashish Bhatia
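The admission-control rule in 20240362050 (reject a write when in-flight writes reach the volume's share of the disk file's budget) reduces to a small counter check. A minimal sketch, with made-up slot counts and share values:

```python
class PersistentVolume:
    """Sketch of per-volume in-flight write admission control.
    disk_write_slots and share are illustrative, not from the patent."""

    def __init__(self, disk_write_slots, share):
        # Max in-flight writes = this volume's share of the virtual
        # disk file's total write I/O budget.
        self.max_in_flight = int(disk_write_slots * share)
        self.in_flight = 0

    def submit_write(self):
        # Reject when current in-flight count has reached the maximum.
        if self.in_flight >= self.max_in_flight:
            return "rejected"
        self.in_flight += 1
        return "processed"

    def complete_write(self):
        self.in_flight -= 1
```

For example, a volume granted 2% of a 100-slot budget admits two concurrent writes and rejects a third until one completes.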
-
Publication number: 20240362051
Abstract: One example method includes creating a virtual machine clone of a physical endpoint device, pushing a change script and a test script to the virtual machine clone, causing the change script to be executed on the virtual machine clone, causing the test script to be executed on the virtual machine clone, and correcting any problems identified by execution of the test script, and pushing the change script to the physical endpoint device, and causing the change script to be executed on the physical endpoint device.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: Ahmed Elsayed Elshafey, Ahmed Mohamed Hamed Zahran, Sarah Tarek Ebeid AbdelAzeem
-
Publication number: 20240362052
Abstract: In one set of embodiments, a computer system can receive a plurality of requests for placing a plurality of clients on a plurality of graphics processing units (GPUs), where each request includes a profile specifying a number of GPU compute slices and a number of GPU memory slices requested by a corresponding client. The computer system can further formulate an integer linear programming (ILP) problem based on the requests and a maximum number of GPU compute and memory slices supported by each GPU. The computer system can then generate a solution for the ILP problem and place the plurality of clients on the plurality of GPUs in accordance with the solution.
Type: Application
Filed: April 27, 2023
Publication date: October 31, 2024
Inventors: Uday Pundalik Kurkure, Lan Vu, Hari Sivaraman
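The placement problem in 20240362052 is a two-dimensional packing under per-GPU compute-slice and memory-slice caps. The patent formulates it as an ILP; as a stand-in, a first-fit heuristic illustrates the constraint structure (all request values here are invented):

```python
def place_clients(requests, num_gpus, max_compute, max_memory):
    """First-fit placement honoring per-GPU compute/memory slice caps.
    requests: {client: (compute_slices, memory_slices)}.
    A heuristic sketch -- the patent uses an ILP solver instead."""
    used = [[0, 0] for _ in range(num_gpus)]   # [compute, memory] per GPU
    placement = {}
    for client, (c, m) in requests.items():
        for g in range(num_gpus):
            if used[g][0] + c <= max_compute and used[g][1] + m <= max_memory:
                used[g][0] += c
                used[g][1] += m
                placement[client] = g
                break
        else:
            return None                        # infeasible for this heuristic
    return placement
```

An ILP solution can beat first-fit on tight instances (first-fit may report infeasible where a solver finds a packing), which is presumably why the patent formulates the exact problem.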
-
Publication number: 20240362053
Abstract: Techniques are provided for virtual machine hosting and serverless disaster recovery. A virtual machine is hosted by a first hypervisor that may be located on-premise. Snapshots of virtual machine disks of the virtual machine are backed up to a cloud storage environment. The snapshots are used to on-demand host a new instance of the virtual machine within a destination environment such as within the cloud storage environment through a second hypervisor. The new instance of the virtual machine is hosted for various reasons such as part of a disaster recovery operation if the virtual machine fails, load balancing of I/O operations, migration to a different hosting environment (e.g., a cheaper or more performant environment), development testing, etc.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Dnyaneshwar Nagorao Pawar, Sumith Makam, Roopesh Chuggani, Vineeth Kumar Chalil Karinta, Tijin George
-
Publication number: 20240362054
Abstract: Techniques are provided for virtual machine hosting and disaster recovery across virtual machine hosting environments, such as hypervisors, supporting different virtual machine formats. A virtual machine is hosted by a first hypervisor that supports a first virtual machine format. Snapshots capturing virtual machine disks of the virtual machine are created and backed up to a cloud storage environment. The snapshots are used to restore the virtual machine as a destination virtual machine hosted by a second hypervisor supporting a second virtual machine format different than the first virtual machine format. As part of restoring the destination virtual machine, virtual machine disks of the virtual machine are reformatted according to the second virtual machine format supported by the second hypervisor.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Dnyaneshwar Nagorao Pawar, Sumith Makam, Roopesh Chuggani, Vineeth Kumar Chalil Karinta, Tijin George
-
Publication number: 20240362055
Abstract: In some implementations, a data materialization platform may perform a data migration process between a core cloud environment and an edge cloud environment. The data materialization platform may identify, in association with the data migration process, attribute values stored in a data repository of the core cloud environment, wherein the attribute values are to be used as inputs for a machine learning model that is to be executed by the edge cloud environment. The data materialization platform may analyze the attribute values to identify ranges, associated with a subset of the attribute values, for which outputs of the machine learning model are estimated to be approximately the same output. The data materialization platform may deduplicate the subset of the attribute values from the data repository of the core cloud environment to generate deduplicated attribute values that are associated with median range values.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Grzegorz Piotr SZCZEPANIK, Kushal S. PATEL, Sarvesh S. PATEL, Lukasz Jakub PALUS
-
Publication number: 20240362056
Abstract: Systems and methods for sharing a namespace of an ephemeral storage device by multiple consumers are provided. In an example, an NVMe driver of a virtual storage system deployed within a compute instance of a cloud environment facilitates sharing of the namespace by exposing an API through which the multiple consumers access an ephemeral storage device associated with the compute instance. During initialization processing performed by each consumer, for example, during boot processing of the virtual storage system, the consumers may share the namespace by reserving for their own use respective partitions within the namespace via the API and thereafter restrict their usage of the namespace to their respective partitions, thereby retaining the functionality provided by the multiple consumers when the host on which the compute instance is deployed has fewer ephemeral storage devices than consumers that rely on the availability of vNVRAM backed by ephemeral storage.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Applicant: NetApp, Inc.
Inventors: Joseph Brown, JR., Javier Tsuyoshi Takimoto, Sangramsinh Pandurang Pawar, Michael Scott Ryan
-
Publication number: 20240362057
Abstract: A system on chip includes a first core cluster configured to execute a first virtual machine loaded into a memory, the first core cluster including a plurality of first cores, and a second core cluster configured to execute a second virtual machine loaded into the memory, the second core cluster including a plurality of second cores, wherein a first core of the plurality of first cores is configured to execute the first virtual machine at a first exception level (EL) to generate a first temperature value request, execute a hypervisor loaded into the memory at a second exception level different from the first exception level to receive a first temperature value from temperature management circuitry, in response to the first temperature value request, and execute the first virtual machine at the first exception level to check the received first temperature value, and wherein a second core of the plurality of second cores is configured to execute the second virtual machine at the first exception level to gene
Type: Application
Filed: September 18, 2023
Publication date: October 31, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventors: Chung Woo PARK, Bo Youn PARK, Soung Kwan KIMN
-
Publication number: 20240362058
Abstract: Various methods, apparatuses/systems, and media for executing virtual collaboration are disclosed. A processor implements a virtual collaboration platform for providing a virtual branch experience to a plurality of users via seamless digitally enabled communication channels by leveraging features across a plurality of vendors; establishes a communication link between the virtual collaboration module and the digitally enabled communication channels via a communication interface; implements a first algorithm in a manner such that the virtual collaboration module is cloud native and supporting both multi-cloud and hybrid cloud models; implements a second algorithm in a manner such that the virtual collaboration module is configured to support an application seamlessly regardless of whether the application is deployed onto a desktop, or a mobile application and support multiple browsers; and executes real-time virtual collaboration including status of application health, metrics, alerts, and resource management.
Type: Application
Filed: April 25, 2024
Publication date: October 31, 2024
Applicant: JPMorgan Chase Bank, N.A.
Inventors: Shimona SHARMA, Bhavin J SHUKLA, Hemathri BALAKRISHNAN, Prashant SHAH, Sanjay R OZA, Romesh J NISSANGARATCHIE
-
Publication number: 20240362059
Abstract: Examples described herein relate to at least one memory comprising instructions stored thereon and at least one processor. In some examples, the at least one processor is to: based on boot of a device, determine a first virtual identifier value for a first virtual function (VF) associated with the device based on a first seed value and assign the first virtual identifier value to the first VF and based on occurrence of a reset, determine the same first virtual identifier value for the first VF based on the first seed value and assign the first virtual identifier value to the first VF.
Type: Application
Filed: July 12, 2024
Publication date: October 31, 2024
Inventor: Jesse BRANDEBURG
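The stability property in 20240362059 (the same VF gets the same virtual identifier after boot and after reset) follows from deriving the identifier purely from a persistent seed and the VF's index. A minimal sketch, with an assumed hash-based derivation:

```python
import hashlib

def virtual_id(seed, vf_index):
    """Derive a stable virtual identifier for a VF from a per-device seed.
    The SHA-256 derivation here is an assumption for illustration; the
    point is only that the function is deterministic in (seed, vf_index)."""
    digest = hashlib.sha256(f"{seed}:{vf_index}".encode()).hexdigest()
    return digest[:12]
```

Because the derivation has no time- or boot-dependent input, re-running it after a reset reproduces the identifier exactly, while distinct VF indices still map to distinct identifiers.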
-
Publication number: 20240362060
Abstract: A banking pipeline for processing various transactions for multiple financial instruments is disclosed herein. The pipeline may have three distinct interpreters, a transaction interpreter, a rollup interpreter, and a rules interpreter. Different aspects of the transaction may be performed on separate interpreters and each interpreter may perform its aspect of the transaction before the next interpreter begins.
Type: Application
Filed: May 3, 2024
Publication date: October 31, 2024
Inventors: Matthew FELLOWS, Annette CHEN, Leandra IRVINE
-
Publication number: 20240362061
Abstract: In an example, a method includes adding a first request from a first requestor to a queue for a shared resource, where the first request has a first priority. The method includes providing the first request to the shared resource from the queue. The method includes processing the first request at the shared resource. The method includes adding a second request from a second requestor to the queue for the shared resource, where the second request has a second priority that is higher than the first priority. The method includes preempting the processing of the first request and notifying the first requestor of the preemption, where notifying the first requestor of the preemption includes providing the first requestor with a duration of availability for the shared resource. The method includes providing the second request to the shared resource from the queue and processing the second request at the shared resource.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Peter Wongeun CHUNG, Brian QUACH, Jackson FARLEY, Jonathan NAFZIGER
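The distinctive step in 20240362061 is that a preempted requestor is told how long until the resource is available again, not just that it lost the resource. A toy sketch of that notification path (the availability-hint mechanics and return values are assumptions):

```python
class SharedResource:
    """Sketch of priority-based preemption with availability notification."""

    def __init__(self):
        self.active = None            # (priority, requestor) being processed
        self.notifications = []       # (preempted requestor, availability hint)

    def request(self, requestor, priority, availability_hint):
        if self.active is None:
            self.active = (priority, requestor)
            return "processing"
        if priority <= self.active[0]:
            return "queued"           # not higher priority: wait in queue
        # Higher-priority request arrived: preempt the active request and
        # notify its requestor of when the resource becomes available.
        self.notifications.append((self.active[1], availability_hint))
        self.active = (priority, requestor)
        return "processing"
```

The availability hint lets the preempted requestor decide whether to wait or to seek another resource, rather than retrying blindly.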
-
Publication number: 20240362062
Abstract: A system may include a memory and a processor in communication with the memory. The processor may be configured to perform operations. The operations may include obtaining social interaction data for a user and monitoring a system for activity of the user. The operations may include analyzing the activity and the social interaction data to obtain an analysis. The operations may include performing statistical linear regression on the activity and the social interaction data to obtain statistical linear regression data. The operations may include deriving a task clustering model based on the analysis and the statistical linear regression data.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Diane Chalmers, Kelley Anders
-
Publication number: 20240362063
Abstract: A method for performing logging of modifications of a database includes, for each backend process of a plurality of backend processes simultaneously, writing a respective log entry to a write-ahead log buffer, submitting a respective commit request requesting the respective log entry be committed to a write-ahead log, and sleeping the respective backend process. The method also includes writing, using a dedicated writing process and direct asynchronous input/output, one or more of the respective log entries in the write-ahead log buffer to the write-ahead log. The dedicated writing process is different from each respective backend process of the plurality of backend processes. The method also includes updating a log sequence number pointer based on the respective log sequence numbers of the one or more of the respective log entries and waking, based on the log sequence number pointer, one or more of the respective backend processes.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Applicant: Google LLC
Inventors: Yingjie He, Yi Ding
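The flow in 20240362063 is a group-commit pattern: backends append to a shared buffer and sleep on a log sequence number (LSN); a single dedicated writer flushes the buffer, advances the flushed-LSN pointer, and wakes every backend whose entry is now durable. A synchronous sketch (real implementations use threads and async I/O; the class and method names are invented):

```python
class WalBuffer:
    """Single-threaded sketch of group commit with a dedicated writer."""

    def __init__(self):
        self.buffer = []        # pending log entries
        self.log = []           # the durable write-ahead log
        self.flushed_lsn = 0    # log sequence number pointer
        self.sleepers = {}      # backend -> LSN it is waiting on

    def backend_commit(self, backend, entry):
        """A backend writes its entry, requests commit, and sleeps."""
        self.buffer.append(entry)
        lsn = len(self.log) + len(self.buffer)   # position once flushed
        self.sleepers[backend] = lsn
        return lsn

    def writer_flush(self):
        """Dedicated writer: flush the buffer, advance the LSN pointer,
        and wake every backend whose entry is now durable."""
        self.log.extend(self.buffer)
        self.buffer.clear()
        self.flushed_lsn = len(self.log)
        woken = [b for b, lsn in self.sleepers.items()
                 if lsn <= self.flushed_lsn]
        for b in woken:
            del self.sleepers[b]
        return woken
```

One flush thereby amortizes the I/O cost across every backend that committed since the previous flush, which is the usual motivation for group commit.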
-
Publication number: 20240362064
Abstract: A system, method, and computer-readable medium for performing a data center management and monitoring operation. The data center management and monitoring operation includes: receiving data center asset data for a plurality of data center assets, the data center asset data comprising data associated with servicing a particular workload during a particular interval of time by a particular data center asset; monitoring electrical power consumed by a particular data center asset; and, performing an electrical power consumption analysis operation, the electrical power consumption analysis operation ascertaining an amount of electrical power consumed by the particular data center asset when servicing the particular workload during the particular interval of time. In certain embodiments, the power consumption analysis operation computes carbon emission generated by the particular workload.
Type: Application
Filed: April 27, 2023
Publication date: October 31, 2024
Applicant: Dell Products L.P.
Inventors: Shivendra Katiyar, Nikhil Sivanandaswami
-
Publication number: 20240362065
Abstract: Techniques for controlling resource deployments in a cloud partition of a cloud environment are disclosed. A cloud service provider (CSP) operates the cloud environment where its customers can specify constraints on deployments to their respective partitions (i.e., regions or realms). A partition-specific deployment constraint is a rule that constrains the changes/updates that can be made to one or more specific partitions. A partition-specific deployment constraint applies to at least one partition but may apply to multiple partitions. For example, a partition-specific deployment constraint may apply to one or more regions in a realm. A partition-specific deployment constraint is evaluated at deployment time using the most recent state, or a curated subset thereof, for at least one specific partition. A global deployment orchestrator conditions a deployment, at least in part, on whether the deployment satisfies the partition-specific constraint(s) in the target partition.
Type: Application
Filed: April 26, 2024
Publication date: October 31, 2024
Applicant: Oracle International Corporation
Inventors: Jason Bolla, Daniel M. Vogel
-
Publication number: 20240362066
Abstract: Systems and methods for scheduler architectures that can enable a reconfigurable architecture to execute multiple functions in accordance with embodiments of the invention are described.
Type: Application
Filed: July 20, 2022
Publication date: October 31, 2024
Applicant: The Regents of the University of California
Inventors: Sumeet Singh Nagi, Dejan Markovic
-
Publication number: 20240362067
Abstract: A digital content processing method and a digital content processing apparatus, an electronic device, a storage medium and a product are provided. The digital content processing method includes: acquiring a definition of a one-time processing step and a definition of a one-time processing process; decomposing and orchestrating a digital content processing process according to the acquired definitions; and processing digital content according to the decomposed and orchestrated digital content processing process. The decomposition and orchestration includes merging multiple identical one-time processing steps in at least one one-time processing process into one processing node.
Type: Application
Filed: March 8, 2023
Publication date: October 31, 2024
Inventors: Zhijie Chen, Feng Zhou
-
Publication number: 20240362068
Abstract: Various embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for improving allocation of limited resources by generating an optimization function for generating an optimal amount of resources to allocate to data objects in each of one or more data object cohorts based on nonlinear causal effect predictions and generating an optimal parameter occurrence set based on the determined optimal amount of resources. Nonlinear causal effects of selected amounts of type-varied resources assigned to specific data objects are predicted on an outcome of interest associated with the data objects.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: Breanndan O CONCHUIR, Conor John WALDRON, Michael J. MCCARTHY, Kevin A. HEATH
-
Publication number: 20240362069
Abstract: Provided in the present application are a resource allocation method and apparatus. The resource allocation method comprises: acquiring resource usage data corresponding to workloads; according to the resource usage data, constructing a long-period resource feature map corresponding to a long-period load type, and a short-period resource feature map corresponding to a short-period load type; according to the short-period resource feature map, allocating, from resources to be allocated, short-period resources for the short-period load type; and according to the long-period resource feature map, allocating, from the short-period resources, long-period resources for the long-period load type.
Type: Application
Filed: October 8, 2022
Publication date: October 31, 2024
Inventors: Fansong ZENG, Menghai WANG, Tao LI, Tao HUANG
-
Publication number: 20240362070
Abstract: A resource allocation system receives user requests for system resources over time. The system records factors such as number of resource requests, total usage time, etc., and preempts earlier requests in favor of later requests based on those factors. The resource allocation system tracks metrics in real-time and establishes dynamic preemption thresholds based on usage over time. Preemption thresholds may be specific to individual users.
Type: Application
Filed: April 27, 2023
Publication date: October 31, 2024
Inventor: Benjamin Peiffer
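The per-user dynamic thresholds in 20240362070 can be modeled with a small tracker: the more a user has consumed, the easier their current hold becomes to preempt. The policy formula below is an invented illustration, not the patent's actual rule:

```python
class PreemptionTracker:
    """Tracks per-user usage and derives a per-user preemption threshold.
    The threshold formula is illustrative only."""

    def __init__(self, base_threshold):
        self.base = base_threshold
        self.usage = {}                 # user -> accumulated usage time

    def record_usage(self, user, t):
        self.usage[user] = self.usage.get(user, 0) + t

    def threshold(self, user):
        # Heavy users get a lower threshold (easier to preempt):
        # drop the base by 1 for every 10 units of accumulated usage.
        return max(1, self.base - self.usage.get(user, 0) // 10)

    def should_preempt(self, holder, held_time):
        """Preempt the current holder once it has held the resource for
        at least its (dynamic, user-specific) threshold."""
        return held_time >= self.threshold(holder)
```

A new user thus keeps a resource longer before later requests can displace them, while a heavy user is displaced sooner.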
-
Publication number: 20240362071
Abstract: Various embodiments of the present disclosure provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for developing, managing, and executing Python scripts in a containerized, cloud-based (e.g., serverless) multi-tenant manner. Certain embodiments include a Python Studio allowing users to develop and manage scripts and/or a Flex Python system allowing execution of scripts.
Type: Application
Filed: April 30, 2023
Publication date: October 31, 2024
Inventor: Julio P. Roque
-
Publication number: 20240362072
Abstract: Systems, apparatuses, methods, and computer program products are provided. For example, a method provided herein may include generating a configuration specification. In some embodiments, the configuration specification is associated with a plurality of computing instances. In some embodiments, the method may include receiving a status trigger from one of the plurality of computing instances. In some embodiments, the method may include parsing the configuration specification to determine that the status trigger was received from a first computing instance of the plurality of computing instances. In some embodiments, the method may include parsing the configuration specification to determine a first interface component of a plurality of interface components associated with the status trigger. In some embodiments, the method may include causing a user interface to be displayed, the user interface comprising the plurality of interface components.
Type: Application
Filed: April 27, 2023
Publication date: October 31, 2024
Inventors: Ankit SINGH, Timothy SNEED, Lakshminarayana PAILA
-
Publication number: 20240362073
Abstract: A computer implemented method includes monitoring resource utilization for multiple programs running on a user device. A current user interaction with the programs is detected and a usage contextual profile representing user interaction with the programs is derived. The monitored resource utilization is compared to a performance threshold and one of the multiple programs is distributed for execution elsewhere in response to the comparing to optimize user experience on the user device in accordance with the usage contextual profile.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Axel Ramirez Flores, Rod D Waltermann, George O Diatzikis
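The decision step in 20240362073 (compare utilization to a threshold, then pick a program to run elsewhere guided by user interaction) can be sketched as a pure function. The "offload the heaviest non-interactive program" policy is an invented stand-in for the patent's usage-contextual-profile logic:

```python
def offload_decision(resource_usage, threshold, active_program):
    """Pick one program to distribute elsewhere when total utilization
    breaches the threshold, never offloading the program the user is
    currently interacting with. Illustrative policy only.
    resource_usage: {program: utilization}."""
    if sum(resource_usage.values()) <= threshold:
        return None                      # under budget: keep everything local
    candidates = [p for p in resource_usage if p != active_program]
    # Offload the heaviest non-interactive program.
    return max(candidates, key=lambda p: resource_usage[p])
```

Excluding the actively used program is the part that "optimizes user experience": the user-facing program stays local while background load is moved.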
-
Publication number: 20240362074
Abstract: Methods and systems for performing workloads are disclosed. The workloads may be distributed across any number of processing elements for performance. The processing elements may be supported by communication elements. The operation of the communication elements may impact the rate at which the processing elements are able to complete the workloads. The operation of the communication elements may be dynamically configured to speed completion of workloads.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: JOHN A. LOCKMAN, III, DHARMESH M. PATEL
-
Publication number: 20240362075
Abstract: An automated system for allocation of resources in a cluster configured to run a search engine is disclosed. At least one master node includes a processing system. The processing system is configured to analyze the cluster based on measurements of different parameters. The results of the analysis can be used to allocate or reallocate the shards, allocate or reconfigure the workload portions assigned to the shards, and allocate or reconfigure the shards selectively to maintain high performance. Periodic analyses can predict future behavior, and reconciliations toward a target allocation can occur regularly to maximize system efficiency and performance.
Type: Application
Filed: February 6, 2024
Publication date: October 31, 2024
Applicant: ELASTICSEARCH B.V.
Inventors: David C. Turner, Henning Andersen, Ievgen Degtiarenko, Francisco Fernández Castaño
-
Publication number: 20240362076
Abstract: The present disclosure relates to a system and method for adjusting control sensitivity based on optimal search. According to the present disclosure, stable control convergence can be achieved, without excessively shortening the lifespan of a target device, by moving the target device only when a control gain is larger than or equal to a preset reference value during continuous control of the target device. The system adjusts the sensitivity of the continuous control based on optimal search.
Type: Application
Filed: April 12, 2024
Publication date: October 31, 2024
Inventors: Jwa Young MAENG, Hyun Sik KIM, Jun Woo YOO, Sang Gun NA
-
Publication number: 20240362077
Abstract: Methods and systems for performing workloads are disclosed. To perform the workloads, operations may be performed by compute complexes. The compute complexes may perform some types of operations inefficiently. To accelerate completion of the workloads, operations to be performed by the compute complexes may be analyzed by other hardware components in a manner that is transparent to the compute complexes. Operations that may be performed more quickly by the other hardware components may be automatically and transparently offloaded.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: JOHN A. LOCKMAN, III, DHARMESH M. PATEL
-
Publication number: 20240362078
Abstract: A computing device determines that an executing first operator process is to be upgraded to a second operator process in an upgrade process, wherein the first operator process maintains on a cluster of compute nodes a desired identified state of an application. The computing device, prior to initiating the second operator process, determines that the upgrade process will cause an initiation of a new container of the application to replace an existing container of the application. The computing device determines that an upgrade mode associated with the first operator process is a rolling upgrade mode, wherein the existing container and the new container will execute concurrently for a period of time. The computing device makes a determination whether computing resources needed to execute the existing container and the new container concurrently are available. The computing device takes an upgrade request action based on the determination.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Brian Gallagher, Michael Browne
-
Publication number: 20240362079
Abstract: A method and a system for optimizing at least one resource requirement on a distributed processing platform are disclosed. The method includes receiving at least one text file based on a user input and generating an output file based on the at least one text file. Next, the method includes identifying at least one operator in the output file and applying at least one rule to the at least one operator. Next, the method includes scanning the at least one rule that is applied to the at least one operator. Next, the method includes recommending at least one change in the at least one rule. Thereafter, the method includes generating at least one updated output file based on the recommendation of the at least one change in the at least one rule.
Type: Application
Filed: June 12, 2023
Publication date: October 31, 2024
Applicant: JPMorgan Chase Bank, N.A.
Inventors: Swastik MOHANTY, Nilesh SINHA, Archita ARORA, Hemant ROUT, Srileka VIJAYAKUMAR, Rakesh Kumar KASHYAP
-
Publication number: 20240362080
Abstract: A method includes determining, by a processing device, an intended level of power consumption associated with a network function; allocating, in view of the intended level of power consumption, a network device to the network function; allocating, in view of the intended level of power consumption, a processor to the network function; and designating the processor to handle interrupts from the network function via the network device.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Inventors: Huamin Chen, Yuval Lifshitz, Douglas Smithy
-
Publication number: 20240362081
Abstract: Techniques are disclosed for using a multi-tenant framework for microservices in a microservices-based application to implement a tenant-aware distributed cache. The microservices-based application can include at least one microservice that incorporates the multi-tenant framework. The multi-tenant framework includes software components configured to provide multi-tenant functionality for the microservice. A microservice may receive a first request associated with a tenant and comprising tenant context data. A first software component of the multi-tenant framework can extract the tenant context data from the request. The microservice can receive a second request comprising request data. A second software component of the multi-tenant framework can use the tenant context data to store the request data in a cache.
Type: Application
Filed: July 10, 2024
Publication date: October 31, 2024
Applicant: Oracle International Corporation
Inventors: Arif Iqbal, Dhiraj D. Thakkar, Ananya Chatterjee
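The core of a tenant-aware cache, as described in 20240362081, is that every cache operation is keyed by tenant context as well as by the logical key, so entries from different tenants can never collide. A minimal in-memory sketch (the distributed and framework aspects are omitted):

```python
class TenantCache:
    """Cache keyed by (tenant, key) so tenants never see each other's data.
    An in-memory stand-in for the distributed cache in the abstract."""

    def __init__(self):
        self.store = {}

    def put(self, tenant, key, value):
        # Tenant context extracted from the request becomes part of the key.
        self.store[(tenant, key)] = value

    def get(self, tenant, key):
        return self.store.get((tenant, key))
```

Two tenants can use the same logical key ("profile", say) and still read back only their own value, which is the isolation property the multi-tenant framework enforces.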
-
Publication number: 20240362082
Abstract: Example collective communication methods and apparatus are described. In one example method, a computing cluster includes a first node and a second node, where the first node includes a first processor and a second processor, the second node includes a third processor, and the second processor is connected to the third processor. In this example, the first processor determines that a processor that is in the first node and that is connected to the third processor in the second node is the second processor, and sends first data to the second processor. Correspondingly, the second processor receives the first data from the first processor, and transmits the first data to the third processor in the second node.
Type: Application
Filed: July 11, 2024
Publication date: October 31, 2024
Inventors: Tianchi HU, Shengyu SHEN, Wenkai LING
-
Publication number: 20240362083
Abstract: An embedded system running method includes: allocating, according to a resource dynamic allocation rule, a group of services to be allocated to corresponding operating systems in an embedded system, wherein the embedded system includes a first operating system and a second operating system, and a response speed of the first operating system is higher than a response speed of the second operating system; determining resource allocation results corresponding to the group of services to be allocated, where the resource allocation results are used for indicating, among processing resources of the processor, a processing resource corresponding to each of the group of services to be allocated; and allocating the processing resources of a processor to the first operating system and the second operating system according to an operating system allocation result and the resource allocation result corresponding to each of the group of services to be allocated.
Type: Application
Filed: April 28, 2023
Publication date: October 31, 2024
Applicant: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Endong WANG, Jiaming HUANG, Baoyang LIU, Chaofan CHEN, Wenkai MA
-
Publication number: 20240362084Abstract: An apparatus to facilitate thread synchronization is disclosed. The apparatus comprises one or more processors to execute a producer thread to generate a plurality of commands, execute a consumer thread to process the plurality of commands and synchronize the producer thread with the consumer thread, including updating a producer fence value upon generation of in-order commands, updating a consumer fence value upon processing of the in-order commands and performing a synchronization operation based on the consumer fence value, wherein the producer fence value and the consumer fence value each correspond to an order position of an in-order command.Type: ApplicationFiled: May 29, 2024Publication date: October 31, 2024Applicant: Intel CorporationInventors: STAV GURTOVOY, MATEUSZ MARIA PRZYBYLSKI, MICHAEL APODACA, MANJUNATH DS
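The fence scheme described here — producer and consumer threads each publishing the order position of the last in-order command they handled, with a synchronization operation gated on the consumer fence — can be sketched with a condition variable. Class and method names are invented for illustration; the patent targets GPU command streams, not Python threads.

```python
import threading

class FencePair:
    """Producer/consumer fences, each holding the order position of the
    last in-order command generated or processed."""

    def __init__(self):
        self.producer_fence = 0
        self.consumer_fence = 0
        self._cv = threading.Condition()

    def on_generated(self, pos):
        """Producer updates its fence after generating command `pos`."""
        with self._cv:
            self.producer_fence = pos
            self._cv.notify_all()

    def on_processed(self, pos):
        """Consumer waits until command `pos` exists, then updates its fence."""
        with self._cv:
            self._cv.wait_for(lambda: self.producer_fence >= pos)
            self.consumer_fence = pos
            self._cv.notify_all()

    def sync_to(self, pos):
        """Synchronization operation based on the consumer fence value:
        block until command `pos` has been processed."""
        with self._cv:
            self._cv.wait_for(lambda: self.consumer_fence >= pos)
```

A typical use runs `on_generated` on the producer thread, `on_processed` on the consumer thread, and `sync_to(n)` wherever the producer must observe that command `n` has completed.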
-
Publication number: 20240362085Abstract: An integrated circuit chip includes a secondary controller coupled to a deadlock solver that provides a communication interface to a primary controller. The deadlock solver stores control logic executable to detect a reset event indicator on a secondary controller that is indicative of a reset event pattern executing on the secondary controller while the secondary controller is in a reset state. Based at least in part on detection of the reset event indicator, the deadlock solver transmits a signal pattern to the secondary controller that mimics a transaction from a primary controller. The signal pattern assists the secondary controller in exiting the reset state.Type: ApplicationFiled: April 27, 2023Publication date: October 31, 2024Inventor: Yaron Baruch SHAPIRO
-
Publication number: 20240362086Abstract: Apparatuses, systems, and techniques to identify a location of one or more portions of incomplete graph code. In at least one embodiment, a location of one or more portions of incomplete graph code is identified based on, for example, CUDA or other parallel computing platform code.Type: ApplicationFiled: June 25, 2024Publication date: October 31, 2024Inventor: David Anthony Fontaine
-
Publication number: 20240362087Abstract: A document management system processes application programming interface (API) requests received from entities. The document management system processes the API requests to perform operations such as modifying a document, executing a document, or sending a set of documents to another entity. The document management system enforces API limits on API requests received from entities and processed by the document management system. The document management system allows an entity to request a modification to an API limit to a target API limit and determines whether to approve the requested modification. The document management system determines whether to approve the requested API limits based on a comparison with other entities that are similar to the entity based on past API requests received from the other entities.Type: ApplicationFiled: July 5, 2024Publication date: October 31, 2024Inventors: Joey Jia Wei Peng, Abhishek Ram Battepati, Timofei Borisovich Bolshakov
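One way to read the approval step above: compare the requested limit against what entities with similar past request volume are actually granted. The similarity band and approval threshold below are invented for illustration; the patent does not specify them.

```python
def approve_limit_request(entity_usage, requested_limit, peers):
    """Approve a target API limit by comparing against similar entities.

    `peers` maps entity id -> (avg daily API requests, granted limit).
    Entities within 2x of the requester's usage count as similar; approve
    only if the request does not exceed the highest limit among them.
    """
    similar_limits = [
        limit for usage, limit in peers.values()
        if entity_usage / 2 <= usage <= entity_usage * 2
    ]
    if not similar_limits:
        return False  # no comparable peers: defer to manual review
    return requested_limit <= max(similar_limits)
```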
-
Publication number: 20240362088Abstract: Apparatuses, systems, and techniques to modify one or more portions of incomplete graph code. In at least one embodiment, one or more portions of incomplete graph code are modified based on, for example, CUDA or other parallel computing platform code.Type: ApplicationFiled: July 9, 2024Publication date: October 31, 2024Inventor: David Anthony Fontaine
-
Publication number: 20240362089Abstract: Methods and systems for managing operation of data processing systems with limited access to an uplink pathway are disclosed. To manage the operation, a system may include a data processing system manager, a data collector, and one or more data processing systems. As the data processing system may not be able to transmit data to the data processing system manager, the data collector may provide observational data related to the operation of the data processing system to the data processing system manager. The data processing system manager may utilize a digital twin to simulate the operation of the data processing system and identify potential future occurrences of events that may impact the operation of the data processing system. The data processing system manager may identify instructions to remediate or avoid the future occurrences of the events and may provide the instructions as commands to the data processing system.Type: ApplicationFiled: April 28, 2023Publication date: October 31, 2024Inventors: OFIR EZRIELEV, TOMER KUSHNIR, MAXIM BALIN
-
Publication number: 20240362090Abstract: A computer-implemented method of providing unified event monitoring and log processing is disclosed. The method comprises receiving streaming event data comprising a plurality of event entries from a plurality of domains including a cloud manager for a cloud platform and an application running within a container on the cloud platform; processing the streaming event data into a normalized, domain-independent format; evaluating a plurality of policy rules on the streaming event data, wherein the plurality of policy rules is defined with a unified syntax; and in response to the evaluating satisfying a condition of a first rule of the plurality of policy rules, transmitting to a remote device data related to an action defined in the first rule, wherein the receiving, processing, evaluating, and transmitting for each event entry for the plurality of event entries are performed in real time.Type: ApplicationFiled: July 5, 2024Publication date: October 31, 2024Inventor: Loris Degioanni
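The pipeline above — normalize events from heterogeneous domains into one shape, then evaluate rules written in a unified syntax — might look like the following. The field names, domains, and rule representation are assumptions, not the patent's actual format.

```python
def normalize(event):
    """Map domain-specific event fields into one domain-independent shape."""
    if event["domain"] == "cloud":
        return {"source": "cloud", "kind": event["eventType"],
                "target": event["resource"]}
    if event["domain"] == "container":
        return {"source": "container", "kind": event["syscall"],
                "target": event["pod"]}
    raise ValueError("unknown domain: %r" % event["domain"])

def evaluate(rules, event):
    """Return the action of every rule whose condition the event satisfies.

    Each rule is a (condition, action) pair; conditions all run against the
    normalized form, giving the unified syntax across domains.
    """
    normalized = normalize(event)
    return [action for condition, action in rules if condition(normalized)]
```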
-
Publication number: 20240362091Abstract: Disclosed are a method, apparatus and device for sharing microservice application data. The method includes: managing, through data registration management, memory data registration information that is to be loaded by microservice application clusters; determining, according to the memory data registration information, memory data that are required by the microservice application clusters; partitioning and distributing the memory data to a plurality of memory computation service nodes in the microservice application clusters, and deploying the plurality of memory computation service nodes into a corresponding microservice application cluster at a proximal end; and loading the memory data in a preset manner in the plurality of memory computation service nodes, and sharing a corresponding memory computation service node in real time under the condition that the memory data change.Type: ApplicationFiled: May 16, 2024Publication date: October 31, 2024Applicant: INSPUR GENERSOFT CO., LTD.Inventors: Daisen WEI, Weibo ZHENG, Yucheng LI, Xiangguo ZHOU, Lixin SUN
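The partition-and-distribute step could be sketched as a stable hash partition of registered data items across memory computation service nodes, so every cluster computes the same placement independently. The hashing scheme and node naming are illustrative assumptions.

```python
import zlib

def partition(data_keys, nodes):
    """Split registered memory data keys across memory computation
    service nodes using a stable CRC32 hash."""
    shards = {node: [] for node in nodes}
    for key in data_keys:
        node = nodes[zlib.crc32(key.encode()) % len(nodes)]
        shards[node].append(key)
    return shards
```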
-
Publication number: 20240362092Abstract: A technique includes enqueueing, by a host processor of a compute node, a stream of first operations to be executed by an accelerator of the compute node. The stream is associated with a compute kernel boundary. The technique includes synchronizing a network operation to the compute kernel boundary; and offloading, by the host processor and to the accelerator, the synchronizing to the accelerator. The offloading includes enqueueing, by the host processor and to a network communication interface of the compute node, a network communication operation to be performed by the network communication interface. The offloading further includes adding, by the host processor and to the stream, a second operation to synchronize the network operation with the compute kernel boundary.Type: ApplicationFiled: April 26, 2023Publication date: October 31, 2024Inventors: Naveen N. Ravichandrasekaran, Krishna C. Kandalla, James B. White, III
-
Publication number: 20240362093Abstract: At least utilizing a custom corpus of documents to condition a large language model (LLM) when generating a response to a user query. In some implementations, a user query associated with a client device is received. An API query for an external application is generated by an LLM based on the user query. The external application has access to a custom corpus of documents comprising a plurality of documents. The external application is queried using the API query. Data representative of one or more documents in the custom corpus of documents is received from the external application in response to the API query. The LLM generates a response to the query that is conditioned on the data representing one or more of the documents in the custom corpus of documents received from the external application. The response to the user query is caused to be rendered on the client device.Type: ApplicationFiled: August 8, 2023Publication date: October 31, 2024Inventors: Hao Zhou, Jamie Hall, Xinying Song, Sahitya Potluri, Yu Du, Heng-Tze Cheng, Quoc Le, Ed H. Chi
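The retrieval-then-condition flow described here reads like standard retrieval-augmented generation. A sketch, with the external application and the LLM stubbed out as plain callables — both signatures, the prompt wording, and the document shape are assumptions:

```python
def answer(user_query, external_app, llm):
    """Condition an LLM response on documents from a custom corpus.

    `external_app` stands in for the API the LLM-generated query targets;
    `llm` is any text-in/text-out callable.
    """
    # The LLM generates the API query from the user query.
    api_query = llm("Rewrite as a search query: " + user_query)
    # The external application returns documents from the custom corpus.
    docs = external_app(api_query)
    context = "\n---\n".join(d["text"] for d in docs)
    # The response is conditioned on the retrieved documents.
    prompt = ("Answer using only the context below.\n"
              "Context:\n" + context + "\n\nQuestion: " + user_query)
    return llm(prompt)
```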
-
Publication number: 20240362094Abstract: In accordance with example implementations, a process includes receiving, by a connector that is associated with a compute node and is associated with a fabric-attached memory (FAM), an application programming interface (API) call to perform an operation that is associated with a hierarchical data format (HDF) object of an HDF file. The API call includes an HDF object identifier, which corresponds to the HDF object. The process includes, responsive to the request, based on the HDF object identifier, accessing, by the connector, mapping information that is stored in the FAM; and using, by the connector, the mapping information to identify a FAM descriptor corresponding to a first data item that is stored in the FAM and corresponds to the HDF object. The process includes, responsive to the request, serving, by the connector, the API call responsive to the identification of the FAM descriptor.Type: ApplicationFiled: April 27, 2023Publication date: October 31, 2024Inventors: Chinmay Ghosh, Sharad Singhal, Porno Shome
-
Publication number: 20240362095Abstract: A system, method, and computer-readable medium for performing a data center monitoring and management operation. The data center monitoring and management operation includes receiving data center asset data for a plurality of data center assets, the data center asset data comprising data center asset fan associated data; providing the data center asset fan associated data to a fan failure prediction model; and training the fan failure prediction model using the data center asset fan associated data.Type: ApplicationFiled: April 28, 2023Publication date: October 31, 2024Applicant: Dell Products L.P.Inventor: Deepak NagarajeGowda
-
Publication number: 20240362096Abstract: A memory stores management information where identification information of each of a plurality of devices used by an information processing apparatus, first positional information, and second positional information are associated with one another. The first positional information indicates a position of a device storage storing the plurality of devices. The second positional information indicates a storage position of each of the plurality of devices in the device storage. A processor receives failed device information including identification information of a failed device among the plurality of devices from the information processing apparatus. The processor identifies the position of the device storage storing the failed device and the storage position of the failed device in the device storage from the identification information of the failed device included in the failed device information on the basis of management information.Type: ApplicationFiled: April 11, 2024Publication date: October 31, 2024Applicant: Fujitsu LimitedInventors: Satoshi KAZAMA, Kazuhiro SUZUKI, Hiroshi ENDO
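The lookup the processor performs here amounts to resolving a failed-device report against the stored management information. A sketch with invented identifiers and record shapes:

```python
# Management information: device id -> first positional information (the
# device storage) and second positional information (the slot within it).
# All values are invented for illustration.
MANAGEMENT_INFO = {
    "ssd-0042": {"storage": "rack-3/shelf-B", "slot": 7},
    "ssd-0043": {"storage": "rack-3/shelf-B", "slot": 8},
}

def locate_failed_device(failed_device_info):
    """Resolve a failed-device report to the storage and slot holding it."""
    device_id = failed_device_info["id"]
    entry = MANAGEMENT_INFO.get(device_id)
    if entry is None:
        raise KeyError("device %s not in management information" % device_id)
    return entry["storage"], entry["slot"]
```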
-
Publication number: 20240362097Abstract: Methods and systems for managing data processing systems are disclosed. A data processing system may include and depend on the operation of hardware and/or software components. To manage the operation of the data processing system, a data processing system manager may obtain logs for components of the data processing system. Inference models may be implemented to predict likely future component failures (e.g., failure sequences) and their associated times-to-failures using information recorded in the logs. The failure sequences may be presented as an acyclic graph that associates component failures, their times-to-failure, and related actions. The probable failure sequences may be analyzed to identify sets of actions. The sets of actions may be optimized based on deviations detected during executing the sets of actions to optimize operational goals (e.g., maximizing system lifetime, minimizing system costs), and/or reduce the likelihood of the data processing system becoming impaired.Type: ApplicationFiled: April 27, 2023Publication date: October 31, 2024Inventors: DEEPAGANESH PAULRAJ, MIN GONG, ASHOK NARAYANAN POTTI, DALE WANG
-
Publication number: 20240362098Abstract: A system and method for determining a blast radius of a major incident occurring in a virtual desktop system is disclosed. The virtual desktop system has interconnected service components and provides access to virtual desktops by client devices. An event collection module collects events from the service components. An aggregation module merges the collected events in a time-ordered stream, provides context to the events in the time-ordered stream through relationships between the collected events, and generates a correlated event stream. An analysis module determines a stream of problem reports from the correlated event stream. The analysis module determines a spike in the stream of problem reports and determines the attributes of the problem reports in the spike to define the major incident. The analysis module determines a scope of the major incident and a corresponding attribute, to determine a blast radius associated with the major incident in the desktop system.Type: ApplicationFiled: July 8, 2024Publication date: October 31, 2024Inventors: Anushree Kunal Pole, Amitabh Bhuvangyan Sinha, Jimmy Chang, David T. Sulcer
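The spike-determination step might compare the current problem-report rate against a trailing baseline; the window length and threshold factor below are assumptions, since the abstract does not define how a spike is measured.

```python
def detect_spikes(report_counts, window=5, factor=3.0):
    """Return the indices where the problem-report count jumps above
    `factor` times the mean of the preceding `window` intervals."""
    spikes = []
    for i in range(window, len(report_counts)):
        baseline = sum(report_counts[i - window:i]) / window
        if baseline > 0 and report_counts[i] > factor * baseline:
            spikes.append(i)
    return spikes
```

A detected spike index would then be handed to the attribute analysis that scopes the major incident and its blast radius.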