Patent Applications Published on April 18, 2024
-
Publication number: 20240126585
Abstract: An information handling system includes a basic input/output system (BIOS), and multiple virtual machines including first and second virtual machines. The first virtual machine communicates with the BIOS and other hardware components within the information handling system. The second virtual machine is configured in a BIOS update configuration. The first virtual machine receives a hypercall from the second virtual machine. The hypercall includes a command having a command type. The first virtual machine determines whether the command type within the hypercall matches a cloud policy assigned to the second virtual machine. In response to the command type matching the cloud policy, the first virtual machine provides the command to a proper hardware component within the information handling system.
Type: Application
Filed: October 14, 2022
Publication date: April 18, 2024
Inventors: Ankit Singh, Sumanth Vidyadhara, Shrikant Hallur
-
Publication number: 20240126586
Abstract: A computer implemented method includes receiving first firmware information at a hosting environment identifying that a user has selected user-controlled firmware for user virtual machines to be hosted on the hosting environment. A copy of the user-controlled firmware is obtained and a user virtual machine is deployed that includes the user-controlled firmware. The user-controlled firmware is locked against changes by the hosting environment absent receiving permission from the user.
Type: Application
Filed: October 18, 2022
Publication date: April 18, 2024
Inventors: Gangadhara Swamy SHIVAGANGA NAGARAJU, Pushkar Vijay Chitnis, Bo Zhang, Amar Nad Rudra Gowda
-
Publication number: 20240126587
Abstract: Examples relate to an apparatus, a device, a method, a computer program (or computer-readable medium) and computer system for determining presence of a noisy neighbor virtual machine. Some aspects of the present disclosure relate to an apparatus for a computer system, the apparatus comprising interface circuitry, machine-readable instructions, and processor circuitry to execute the machine-readable instructions to obtain performance information of one or more hardware performance measurement components of the computer system, determine, based on the performance information, a deviation of a utilization of the computer system from an expected utilization of the computer system, and determine presence of a first virtual machine having a workload that impacts a performance of one or more second virtual machines based on the deviation.
Type: Application
Filed: December 22, 2023
Publication date: April 18, 2024
Inventors: Mona MINAKSHI, Shamima NAJNIN, Rajesh POORNACHANDRAN
-
Publication number: 20240126588
Abstract: Systems and methods for configuring a virtual machine provided by a remote computing system based on the availability of one or more remote computing resources and respective corresponding prices of the one or more remote computing resources are disclosed. Users are presented with an interface that allows for selection of individual remote computing resources to be included in a custom-configured virtual machine. Also, a customized corresponding price is determined for the custom-configured virtual machine based on user selections and current availability of the selected remote computing resources to be included in the custom-configured virtual machine.
Type: Application
Filed: September 22, 2023
Publication date: April 18, 2024
Applicant: Amazon Technologies, Inc.
Inventor: Rajan Panchapakesan
-
Publication number: 20240126589
Abstract: For hash token selection, a cumulative balance placement algorithm may take a list of new nodes to be added and allocate new virtual nodes to a token range to ensure that when adding M new nodes, the distance between two virtual nodes for the same new node will be at least M-1 virtual nodes. This node balancing improves the operation of the system as a whole by more efficient utilization of each node.
Type: Application
Filed: October 12, 2023
Publication date: April 18, 2024
Inventors: Bharatendra Boddu, Vinodh Sankaravadivel, Hao Qin
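The spacing guarantee in this abstract can be illustrated with a round-robin placement: cycling through the M new nodes on each pass puts exactly M-1 other virtual nodes between two virtual nodes of the same node. A minimal sketch, not the application's actual algorithm; the function name and parameters are mine.

```python
def place_vnodes(new_nodes, vnodes_per_node):
    """Round-robin over the M new nodes: each pass contributes one virtual
    node per node, so two virtual nodes of the same node always have
    exactly M - 1 virtual nodes of other nodes between them."""
    ring = []
    for _ in range(vnodes_per_node):
        ring.extend(new_nodes)
    return ring

ring = place_vnodes(["n1", "n2", "n3"], vnodes_per_node=4)  # M = 3
idx = [i for i, v in enumerate(ring) if v == "n1"]
gaps = [j - i for i, j in zip(idx, idx[1:])]  # 3 slots apart = 2 vnodes between
```

With M = 3, every pair of consecutive `n1` virtual nodes is 3 ring positions apart, i.e. separated by M-1 = 2 virtual nodes of other nodes.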
-
Publication number: 20240126590
Abstract: Techniques are described for providing a multi-cloud control plane (MCCP) in a first cloud infrastructure (included in a first cloud environment provided by a first cloud services provider) that enables services and/or resources provided in the first cloud infrastructure to be utilized by users of a second cloud environment, where the second cloud environment is different than the first cloud environment. The multi-cloud infrastructure enables a user associated with an account with a second cloud services provider to use, from the second cloud infrastructure, a first service from the set of one or more cloud services. The multi-cloud infrastructure creates a link between the account with the second cloud service provider and a tenancy created in the first cloud infrastructure for enabling using the first service by the user.
Type: Application
Filed: October 13, 2023
Publication date: April 18, 2024
Applicant: Oracle International Corporation
Inventors: Mostafa Gaber Mohammed Ead, Shobhank Sharma, Norka Beatriz Lucena Mogollon
-
Publication number: 20240126591
Abstract: Techniques are described for providing a multi-cloud control plane (MCCP) in a first cloud infrastructure (included in a first cloud environment provided by a first cloud services provider) that enables services and/or resources provided in the first cloud infrastructure to be utilized by users of a second cloud environment, where the second cloud environment is different than the first cloud environment. The multi-cloud infrastructure enables a user associated with an account with a second cloud services provider to use, from the second cloud infrastructure, a first service from the set of one or more cloud services. The multi-cloud infrastructure creates a link between the account with the second cloud service provider and a tenancy created in the first cloud infrastructure for enabling using the first service by the user.
Type: Application
Filed: October 13, 2023
Publication date: April 18, 2024
Applicant: Oracle International Corporation
Inventors: Mostafa Gaber Mohammed Ead, Shobhank Sharma, Satya Swaroop Yadalam, Norka Beatriz Lucena Mogollon, Ghazanfar Ahmed
-
Publication number: 20240126592
Abstract: Methods for executing a user input in an IoT environment by at least one IoT device. The method may include receiving a user input from a user of the IoT device to execute at least one task associated with the IoT device. The method may include determining a multimodal context of the IoT environment relevant to the at least one task associated with the IoT device based on the received user input. The method may include retrieving multimodal data of the IoT environment corresponding to the determined multimodal context. The method may include determining a task execution intensity for the task associated with the IoT device based on the retrieved multimodal data. The method may include executing the task associated with the at least one IoT device using the determined task execution intensity.
Type: Application
Filed: September 15, 2023
Publication date: April 18, 2024
Inventors: Saksham GOYAL, Sourabh TIWARI, Vinay Vasanth PATAGE
-
Publication number: 20240126593
Abstract: The present disclosure relates to user-mode interrupt request processing methods and apparatuses. In one example method, a central processing unit (CPU) in a kernel mode runs a second interrupt exception handler that does not include a kernel address to determine a user-mode interrupt handler corresponding to a user-mode interrupt request, switches to a user mode by using a first privilege level without context recovery, further runs the user-mode interrupt handler in the user mode, and then switches to the kernel mode by using a second privilege level without context storage.
Type: Application
Filed: December 27, 2023
Publication date: April 18, 2024
Inventors: Yuming WU, Shen CAO, Yutao LIU
-
Publication number: 20240126594
Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to execute an application in an isolated software environment on a controller of a vehicle; upon receiving a command from the application to actuate a component of the vehicle, prevent the command from being transmitted to the component; and upon receiving the command, transmit the command to a location in the memory.
Type: Application
Filed: October 13, 2022
Publication date: April 18, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Abdullah Ali Husain, Anushree Nagvekar, Srujan Reddy Maram, Nasser Shuaibi, Vyacheslav Zavadsky, Satish Rayarapu, Mirela Ioana Fonoage
-
Publication number: 20240126595
Abstract: A method for managing a queue including a plurality of buffers is provided and includes the following steps: linking at least one of second buffers after each of a plurality of first buffers to form a plurality of sub-queues in a linked-list manner, wherein the buffers include the plurality of first buffers and the plurality of second buffers; linking the first buffers of the sub-queues to form the queue in the linked-list manner; separating a plurality of separated buffers, including a first one of the first buffers and the second buffers linked thereafter, from the queue by breaking a link between the first one of the first buffers and a second one of the first buffers; and releasing the separated buffers. An apparatus for managing the queue and a queue management device are also provided.
Type: Application
Filed: October 13, 2022
Publication date: April 18, 2024
Inventor: Te-Lung Huang
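The claimed structure — first buffers chained into one queue, each followed by its linked second buffers, and a sub-queue detached by breaking a single link between two first buffers — can be sketched with plain singly linked nodes. A hypothetical illustration under my own naming, not the application's implementation.

```python
class Buffer:
    def __init__(self, name):
        self.name = name
        self.next = None

def build_queue(sub_queues):
    """Each sub-queue is one first buffer followed by its second buffers,
    linked in a linked-list manner; the sub-queues are then chained so the
    whole queue is a single linked list."""
    head = tail = None
    for names in sub_queues:
        bufs = [Buffer(n) for n in names]
        for a, b in zip(bufs, bufs[1:]):
            a.next = b
        if head is None:
            head = bufs[0]
        else:
            tail.next = bufs[0]
        tail = bufs[-1]
    return head

def separate_first(head, second_count):
    """Detach the first sub-queue by breaking the link between its last
    second buffer and the next first buffer; return (separated, rest)."""
    node = head
    for _ in range(second_count):
        node = node.next
    rest = node.next
    node.next = None  # break the link between the two first buffers
    return head, rest

def names(node):
    out = []
    while node:
        out.append(node.name)
        node = node.next
    return out

head = build_queue([["F1", "S1a", "S1b"], ["F2", "S2a"]])
sep, rest = separate_first(head, second_count=2)
```

After separation, `sep` holds the detached buffers (ready to be released) and `rest` is still a valid queue starting at the next first buffer.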
-
Publication number: 20240126596
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scheduling operations represented on a computation graph. One of the methods includes receiving, by a computation graph system, a request to generate a schedule for processing a computation graph; obtaining data representing the computation graph; generating a separator of the computation graph; and generating the schedule to perform the operations represented in the computation graph, wherein generating the schedule comprises: initializing the schedule with zero nodes; for each node in the separator: determining whether the node has any predecessor nodes in the computation graph, when the node has any predecessor nodes, adding the predecessor nodes to the schedule, and adding the node in the schedule, and adding to the schedule each node in each subgraph that is not a predecessor to any node in the separator on the computation graph.
Type: Application
Filed: July 18, 2023
Publication date: April 18, 2024
Inventors: Erik Nathan Vee, Manish Deepak Purohit, Joshua Ruizhi Wang, Shanmugasundaram Ravikumar, Zoya Svitkina
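The scheduling steps in this abstract — start empty, emit each separator node's predecessors and then the node, then emit the remaining nodes — can be sketched on a small DAG. A simplified sketch under my own names; the real method operates on tensor-operation graphs and is more involved.

```python
def ancestors(graph, node, seen):
    """Collect the transitive predecessors of `node` in dependency order.
    graph maps each node to the set of its direct predecessors."""
    for p in sorted(graph.get(node, ())):
        if p not in seen:
            ancestors(graph, p, seen)
            seen.append(p)

def schedule_with_separator(graph, separator):
    schedule = []                      # initialize the schedule with zero nodes
    for node in separator:
        preds = []
        ancestors(graph, node, preds)
        for p in preds:                # predecessors go in first...
            if p not in schedule:
                schedule.append(p)
        if node not in schedule:
            schedule.append(node)      # ...then the separator node itself
    for node in graph:                 # finally the remaining subgraph nodes
        if node not in schedule:
            schedule.append(node)
    return schedule

# Diamond DAG: a -> b, a -> c, and b, c -> d; separator chosen as ["c"].
g = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
order = schedule_with_separator(g, ["c"])
```

For this diamond graph the schedule comes out as `a, c, b, d`, and every node appears after all of its predecessors.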
-
Publication number: 20240126597
Abstract: The present application discloses a task scheduling method based on an improved particle swarm optimization algorithm, which includes: obtaining task data to be scheduled, encoding particles according to the task data; iterating the particles by a particle swarm optimization algorithm; in response to that the particle swarm optimization algorithm does not fall into a local optimal solution, outputting a scheduling scheme; and in response to that the particle swarm optimization algorithm falls into the local optimal solution, outputting the scheduling scheme by fusing the particle swarm optimization algorithm with a cuckoo search algorithm. The present application introduces a cuckoo search algorithm when the particle swarm optimization algorithm falls into a local optimal solution, resolving its tendency to become trapped in local optima while improving the global search capability of the algorithm.
Type: Application
Filed: May 19, 2023
Publication date: April 18, 2024
Applicant: NANJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
Inventors: Dengyin ZHANG, Maomao JI, Ying ZHAO
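The general idea — run plain PSO, and when the global best stalls, inject cuckoo-search-style Levy-flight jumps to escape the local optimum — can be sketched on a 1-D multimodal function. This is a generic PSO/cuckoo hybrid of my own construction, not the application's specific fusion; all names, coefficients, and the stagnation test are assumptions.

```python
import math
import random

def pso_with_cuckoo(f, lo, hi, n=20, iters=200, patience=10, seed=1):
    """Minimize f on [lo, hi]. Plain PSO; when the global best has not
    improved for `patience` iterations, apply a Levy-flight-style jump
    around the global best (a cuckoo-search move) to escape local optima."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = min(xs, key=f)
    stall = 0
    for _ in range(iters):
        improved = False
        for i in range(n):
            vs[i] = (0.7 * vs[i]                                  # inertia
                     + 1.5 * rng.random() * (pbest[i] - xs[i])    # cognitive
                     + 1.5 * rng.random() * (gbest - xs[i]))      # social
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest, improved = xs[i], True
        stall = 0 if improved else stall + 1
        if stall >= patience:          # stuck: cuckoo-style Levy jumps
            for i in range(n):
                step = rng.gauss(0, 1) / (abs(rng.gauss(0, 1)) ** 0.5 + 1e-9)
                cand = min(hi, max(lo, gbest + 0.1 * (hi - lo) * step))
                if f(cand) < f(xs[i]):
                    xs[i] = pbest[i] = cand
            stall = 0
    return gbest

def rastrigin(x):
    """1-D Rastrigin: many local minima, global minimum at x = 0."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

best = pso_with_cuckoo(rastrigin, lo=-5.12, hi=5.12)
```

The heavy-tailed jump is what distinguishes the cuckoo move from PSO's local velocity update: it occasionally proposes points far from the current best, which is exactly what a swarm trapped in a local optimum needs.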
-
Publication number: 20240126598
Abstract: A system includes a first device and a second device. The first device is configured to generate a target processing task based on a target object, generate a first identification code corresponding to the target object based on the target processing task, and display the first identification code; the second device is configured to obtain the first identification code through scanning, generate a task execution instruction corresponding to the target processing task based on the first identification code when first verification on the first identification code succeeds, establish a Bluetooth connection channel with the first device, and send the task execution instruction to the first device based on the Bluetooth connection channel; and the first device is further configured to receive the task execution instruction, and process the target processing task based on the task execution instruction when second verification on the task execution instruction succeeds.
Type: Application
Filed: December 21, 2023
Publication date: April 18, 2024
Inventor: Wei YUAN
-
Publication number: 20240126599
Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to manage workloads for an operating system, wherein programmable circuitry is caused to: cause a task of a workload to be executed with a first processor core configuration; cause the task to be executed with a second processor core configuration; compare a first performance metric of the execution of the task with the first processor core configuration to a second performance metric of the execution with the second processor core configuration; and cause one of the first processor core configuration or the second processor core configuration to be used based on the comparison.
Type: Application
Filed: December 26, 2023
Publication date: April 18, 2024
Inventors: Leslie Xu, Toby Opferman, David Bradley Sheffield, Mukta Singh
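The core loop here is an A/B measurement: run the task under each candidate core configuration, record a performance metric, and keep the winner. A toy sketch with hypothetical names and a stand-in metric (lower is better, e.g. elapsed time); the application targets real processor core configurations, not this simulation.

```python
def choose_core_config(run_task, configs):
    """Execute the task once under each candidate core configuration,
    record its performance metric, and return the best-scoring
    configuration name (lower metric = better) plus all measurements."""
    metrics = {name: run_task(cfg) for name, cfg in configs.items()}
    return min(metrics, key=metrics.get), metrics

# Toy performance model: metric = fixed work divided by core count,
# so the 4-core configuration wins for this (parallel-friendly) task.
configs = {"2-core": {"cores": 2}, "4-core": {"cores": 4}}
chosen, metrics = choose_core_config(lambda c: 100 / c["cores"], configs)
```

In practice the metric would come from hardware performance counters or wall-clock timing of the real task under each configuration.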
-
Publication number: 20240126600
Abstract: This application provides a service process invoking method and a related apparatus. The method includes: An application process obtains a context of a binder process; the application process obtains a handle of a service process based on the context of the binder process; the application process runs a program of the binder process based on the context of the binder process, to obtain a context of the service process based on the handle of the service process; and the application process runs a program of the service process based on the context of the service process, to respond to a binder request of the application process, where the binder request is used to request a system service provided by the service process. In embodiments of this application, actual power consumption is accurately reflected.
Type: Application
Filed: December 22, 2023
Publication date: April 18, 2024
Inventors: Yu PENG, Hongyang YANG, Xiaolong XIE
-
Publication number: 20240126601
Abstract: A system for managing non-transient memory in a cloud computing environment, comprising a plurality of data processors configured to cooperatively provide a cloud computing environment, a persistent memory pool system configured to interact with each of the plurality of data processors to identify persistent non-transient data memory devices at each of the data processors and a plurality of memory pools created by the persistent memory pool system, wherein each of the plurality of memory pools has a designated function.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Applicant: DELL PRODUCTS L.P.
Inventors: Vinod Parackal Saby, Parth Girishkumar Bera, Navdeeppal Singh, Krishnaprasad Koladi
-
Publication number: 20240126602
Abstract: A processor to execute a plurality of tasks comprising a first task and a second task. At least a part of the first task is to be executed simultaneously with at least a part of the second task. The processor comprises a handling unit to: determine an available portion of a storage available during execution of the part of the first task; determine a mapping between at least one logical address associated with data associated with the part of the second task and a corresponding at least one physical address of the storage corresponding to the available portion; and identify, based on the mapping, the at least one physical address corresponding to the at least one logical address associated with the data, for storing the data in the available portion of the storage.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Jens OLSON, John Wakefield BROTHERS, III
-
Publication number: 20240126603
Abstract: The technology described herein is directed towards reducing resource-related messages in a distributed locking system in which exclusive locks can be granted. Requests for a resource lock or range thereof received during an interval are queued, along with lock release messages. The queue is processed after the interval to update the resource state, which can result in a reduction in messages. In one example, separate lock request messages received during an interval from the same requestor for two or more consecutive resource ranges are combined, whereby a single lock grant message for the combined resource ranges is sent instead of one for each request. In another example, if in an interval a lock request for a resource/range is received before a lock release, the lock is released before the lock request message is processed. This avoids sending a lock release request message to the previous owner.
Type: Application
Filed: October 12, 2022
Publication date: April 18, 2024
Inventor: Gavin Greene
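The first example in this abstract — combining same-requestor requests for consecutive ranges queued during an interval into one grant — is essentially interval coalescing. A minimal sketch with hypothetical names; ranges are `(start, end)` inclusive, and "consecutive" means the next range starts right after the previous one ends.

```python
def combine_requests(queued):
    """Coalesce lock requests queued during an interval: requests from the
    same requestor for consecutive ranges become one combined grant, so a
    single grant message replaces several."""
    by_requestor = {}
    for requestor, rng in queued:
        by_requestor.setdefault(requestor, []).append(rng)
    grants = []
    for requestor, ranges in by_requestor.items():
        ranges.sort()
        cur_s, cur_e = ranges[0]
        for s, e in ranges[1:]:
            if s == cur_e + 1:        # consecutive: extend the pending grant
                cur_e = e
            else:                     # gap: emit what we have, start anew
                grants.append((requestor, (cur_s, cur_e)))
                cur_s, cur_e = s, e
        grants.append((requestor, (cur_s, cur_e)))
    return grants

grants = combine_requests([("A", (0, 9)), ("A", (10, 19)),
                           ("B", (0, 4)), ("A", (30, 39))])
```

Four queued requests collapse into three grant messages: requestor A's two consecutive ranges merge into `(0, 19)`, while its non-adjacent `(30, 39)` and requestor B's range stay separate.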
-
Publication number: 20240126604
Abstract: A system provisioning resources of a processing unit. The system predicts a performance impact on a workload attributable to a performance constraint of the processing unit for the workload according to a resource model, wherein the workload includes a query and the resource model characterizes attainable compute bandwidth, attainable memory bandwidth, and arithmetic intensity based on peak compute bandwidth and peak memory bandwidth of the processing unit. The system determines a resource allocation of the processing unit based on the predicted performance impact, and instructs the processing unit to allocate the resources for processing the workload based on the determined resource allocation.
Type: Application
Filed: January 30, 2023
Publication date: April 18, 2024
Inventors: Rathijit SEN, Matteo INTERLANDI, Jiashen CAO
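The three quantities named in the abstract — attainable compute bandwidth, attainable memory bandwidth, and arithmetic intensity derived from the peaks — are the ingredients of the classic roofline bound, which can be sketched in a few lines. The specific resource model in the application may differ; this is only the textbook relation, with illustrative numbers.

```python
def attainable_performance(arithmetic_intensity, peak_compute, peak_mem_bw):
    """Roofline bound: a workload is capped either by peak compute or by
    memory traffic (arithmetic intensity in ops/byte times peak memory
    bandwidth), whichever roof is lower."""
    return min(peak_compute, arithmetic_intensity * peak_mem_bw)

# Illustrative peaks: 10,000 GFLOP/s compute, 1,000 GB/s memory bandwidth.
ridge = 10_000 / 1_000  # intensity (ops/byte) where the two roofs meet
low = attainable_performance(2, peak_compute=10_000, peak_mem_bw=1_000)
high = attainable_performance(50, peak_compute=10_000, peak_mem_bw=1_000)
```

A query operator with intensity 2 ops/byte is memory-bound (2,000 GFLOP/s attainable), while one at 50 ops/byte hits the compute roof (10,000 GFLOP/s); the ridge point at 10 ops/byte is where the constraint flips, which is the kind of distinction a provisioner can act on.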
-
Publication number: 20240126605
Abstract: One example method includes defining experiences for a workload that are to be analyzed at a first machine-learning (ML) model. The experiences define an association between the workload and microservices having computing resources that execute the workload. A probability of using each of the microservices of the experiences to execute the workload is generated at a second ML model. A determination is made of which of the experiences have a probability that indicates that the experience will generate a low reward when analyzed by the first ML model. The experiences that generate the low reward are removed from the experiences to be analyzed at the first ML model. The experiences that have not been removed are analyzed at the first ML model to determine which experience includes microservices that should be used to execute the workload.
Type: Application
Filed: October 18, 2022
Publication date: April 18, 2024
Inventors: Yanexis Pupo Toledo, Micael Veríssimo de Araújo, Eduardo Vera Sousa
-
Publication number: 20240126606
Abstract: Data that is to be processed by a particular service executed by a first edge computing device in an application, is analyzed to determine characteristics of the data. An opportunity to replicate the particular service on a plurality of edge computing devices is determined based on characteristics of the data. A second edge computing device is determined to be available to execute a replicated instance of the particular service. Replication of the particular service is initiated on a plurality of edge computing devices including the second edge computing device. An output of an instance of the particular service executed on the first edge computing device and an output of the replicated instance of the particular service executed on the second edge computing device are combined to form a single output for the particular service.
Type: Application
Filed: December 27, 2023
Publication date: April 18, 2024
Inventors: Akhilesh Thyagaturu, Jonathan L. Kyle, Karthik Kumar, Francesc Guim Bernat, Mohit Kumar Garg
-
Publication number: 20240126607
Abstract: Techniques are described herein for analyzing and tuning database workloads to optimize application performance. In some embodiments, a workload analyzer identifies a captured workload that includes a set of database queries executed within a particular timeframe. The workload analyzer compares the workload within one or more other workloads executed within a previous timeframe to determine differences between the different workloads. For example, the workload analyzer may identify changes in the distributions of queries, including how many queries are unchanged, missing, and/or new. The workload analyzer may further detect changes in the performance of individual queries. The workload analyzer may determine the overall performance impact of such changes on the total workload. Based on the analysis, the workload analyzer may generate reports, alerts, tuning advice, and/or recommendations to boost performance.
Type: Application
Filed: May 11, 2023
Publication date: April 18, 2024
Applicant: Oracle International Corporation
Inventors: Gaylen Royal, Karen Michaels, Björn Bolltoft
-
Publication number: 20240126608
Abstract: A validation system for executing a method for prioritizing validation tasks, wherein the validation tasks are carried out by execution units of the validation system, wherein the execution units are divided into at least two groups and each group is assigned capabilities by the validation system and/or by a user of the validation system, so that execution units of a respective group have the capabilities of the group and, when the validation tasks are executed automatically by the validation system and/or by the user, a requirement of the respective validation task for the capability of the execution units and a priority for execution are specified and, taking into account the priorities and the capable execution units, an execution sequence is determined and the validation task is executed according to the execution sequence by the capable execution units.
Type: Application
Filed: October 16, 2023
Publication date: April 18, 2024
Applicant: dSPACE GmbH
Inventors: Thomas MISCH, Simon GORDON
-
Publication number: 20240126609
Abstract: A configurable logic platform may include a physical interconnect for connecting to a processing system, first and second reconfigurable logic regions, a configuration port for applying configuration data to the first and second reconfigurable logic regions, and a reconfiguration logic function accessible via transactions of the physical interconnect, the reconfiguration logic function providing restricted access to the configuration port from the physical interconnect. The platform may include a first interface function providing an interface to the first reconfigurable logic region and a second interface function providing an interface to the second reconfigurable logic region. The first and second interface functions may allow information to be transmitted over the physical interconnect and prevent the respective reconfigurable logic region from directly accessing the physical interconnect.
Type: Application
Filed: December 22, 2023
Publication date: April 18, 2024
Applicant: ThroughPuter, Inc.
Inventor: Mark Henrik Sandstrom
-
Publication number: 20240126610
Abstract: An apparatus and a method of processing data, an electronic device, and a storage medium are provided, which relate to a field of artificial intelligence, and in particular to fields of chip and multi-thread parallel technologies. The apparatus includes: a first target storage unit; and a processor configured to: determine an initial number of threads according to a data amount of target data and a capacity of the first target storage unit in response to determining that the data amount is less than or equal to the capacity of the first target storage unit, where the target data includes input data to be processed, weight data to be processed, and output data; and determine a first number of executable tasks according to the initial number of threads in response to determining that the initial number of threads is greater than or equal to a predetermined number of threads.
Type: Application
Filed: November 28, 2023
Publication date: April 18, 2024
Applicant: Kunlunxin Technology (Beijing) Company Limited
Inventors: Runze LI, Shiyu ZHU, Baoyu ZHOU
-
Publication number: 20240126611
Abstract: The description relates to accelerator architectures for deep learning models. One example can obtain a deep learning training script associated with a deep learning model and extract an operator graph from the training script. The example can split the operator graph into first and second portions of a heterogeneous pipeline and tune a first accelerator core for the first portion of the heterogeneous pipeline and a second accelerator core for the second portion of the heterogeneous pipeline. The example can also generate a hardware architecture that includes the first accelerator core and the second accelerator core arranged to collectively accomplish the deep learning model.
Type: Application
Filed: October 13, 2022
Publication date: April 18, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Amar PHANISHAYEE, Divya MAHAJAN, Janardhan KULKARNI, Miguel CASTRO, Muhammad ADNAN
-
Publication number: 20240126612
Abstract: A resource allocation device includes: a request receiving unit that receives a request to allocate a state management unit for sharing a state for each group composed of a plurality of user terminals within a group, to any of a plurality of DCs deployed in a distributed cloud environment; and an allocation calculation unit that calculates an allocation cost of allocating the state management unit to each DC by using an allocation index corresponding to a requirement requested by the group, and determines a DC to which the state management unit is allocated, in accordance with the allocation cost.
Type: Application
Filed: February 12, 2021
Publication date: April 18, 2024
Inventors: Ryohei SATO, Yuichi NAKATANI
-
Publication number: 20240126613
Abstract: A chip or other apparatus of an aspect includes a first accelerator and a second accelerator. The first accelerator has support for a chained accelerator operation. The first accelerator is to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data. The second accelerator also has support for the chained accelerator operation. The second accelerator is to be controlled as part of the chained accelerator operation to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data. Other apparatus, methods, systems, and machine-readable medium are disclosed.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Inventors: Saurabh GAYEN, Christopher J. HUGHES, Utkarsh Y. KAKAIYA, Alexander F. HEINECKE
-
Publication number: 20240126614
Abstract: Examples described herein provide a computer-implemented method that includes, in response to receiving a request against the workload in an environment comprising predetermined cloud-based containers, searching predetermined container runtime interface metadata across a plurality of compute nodes in the environment to locate runtime processes. The method further includes selecting, for each runtime process located, a respective applicable profiler from a set of predetermined profilers sharing a transactional database. The method further includes injecting, for each runtime process located, predetermined code libraries for each respective applicable profiler. The method further includes re-linking the predetermined code libraries for each respective applicable profiler. The method further includes executing, for each runtime process located, each respective applicable profiler to produce a set of results.
Type: Application
Filed: October 14, 2022
Publication date: April 18, 2024
Inventors: Doga Tav, Matthew de Souza, Alpha Barry, Geoffrey Tate, Nick Antonov
-
Publication number: 20240126615
Abstract: Embodiments for orchestrating execution of workloads on a distributed computing infrastructure are disclosed herein. In one example, environment data is received for compute devices in a distributed computing infrastructure. The environment data is indicative of an operating environment of the respective compute devices and a physical environment of the respective locations of the compute devices. Future operating conditions of the compute devices are predicted based on the environment data, and workloads are orchestrated for execution on the distributed computing infrastructure based on the predicted future operating conditions.
Type: Application
Filed: December 13, 2023
Publication date: April 18, 2024
Applicant: Intel Corporation
Inventors: Sundar Nadathur, Akhilesh Thyagaturu, Jonathan L. Kyle, Scott M. Baker, Woojoong Kim
-
Publication number: 20240126616
Abstract: A computation processing device includes: a convolutional computation unit that sequentially outputs convolutional computation result data; a pooling processing unit including a pooling computation circuit and a non-volatile storage circuit for pooling, in which the non-volatile storage circuit for pooling retains the convolutional computation result data or a computation result of the pooling computation circuit, as retained data, and the pooling computation circuit calculates and outputs pooling data subjected to pooling processing to a pooling region by using the retained data each time when the convolutional computation result data is input from the convolutional computation unit; and a power gating unit that blocks power supply to the non-volatile storage circuit for pooling while waiting for the input of the convolutional computation result data from the convolutional computation unit.
Type: Application
Filed: June 15, 2022
Publication date: April 18, 2024
Inventors: Osamu NOMURA, Tetsuo ENDOH, Yitao MA, Ko YOSHIKAWA
-
Publication number: 20240126617
Abstract: Embodiments of the present disclosure include techniques for machine language processing. In one embodiment, the present disclosure includes configuring functional modules on a machine learning processor to execute a plurality of machine learning (ML) operations during a plurality of time segments. During the time segments, a first portion of the ML operations execute serially and at least one other ML operation executes during at least a majority of the time of each of the time segments. Serial ML operations may be processed simultaneously with the at least one other ML operation.
Type: Application
Filed: October 14, 2022
Publication date: April 18, 2024
Inventors: Haishan ZHU, Preyas Janak SHAH, Tiyasa MITRA, Eric S. CHUNG
-
Publication number: 20240126618
Abstract: A computer-implemented method, computer program product and computing system for: defining a migration pathway for a current migration project, wherein the migration pathway includes one or more migration portions; and assigning a complexity score to each of the one or more migration portions, thus defining one or more complexity scores.
Type: Application
Filed: September 29, 2023
Publication date: April 18, 2024
Inventor: James W. Garrett
-
Publication number: 20240126619
Abstract: Managing computing workloads within a computing environment, including: identifying computing parameters of datacenter elements of each computing cluster of a computing environment; for each computing cluster of the computing environment: determining a health of the power device of the computing cluster; for each computing node of the computing cluster: determining a processing load of the computing node; determining a computing cost associated with a geo-location of the computing node; calculating, for each computing cluster, an availability of computing resources of the computing cluster based on the computing parameters of the datacenter elements of the computing cluster, the health of the power device of the computing cluster, the processing load of each computing node of the computing cluster, and the computing cost of each computing node of the computing cluster; and generating a ranking of each computing cluster based on the availability of the computing resources of the computing cluster.
Type: Application
Filed: October 12, 2022
Publication date: April 18, 2024
Inventors: RISHI MUKHERJEE, RAVISHANKAR N. KANAKAPURA, PRASOON KUMAR SINHA, RAVEENDRA BABU MADALA
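An illustrative ranking function loosely following the factors named in the abstract above: power-device health, per-node processing load, and a geo-location cost. The weighting scheme and field names are assumptions for the sake of a runnable sketch:

```python
# Score a cluster's resource availability from invented factors, then rank.
def availability(cluster):
    power = 1.0 if cluster["power_healthy"] else 0.5  # degraded power halves the score
    nodes = cluster["nodes"]
    free = sum(1.0 - n["load"] for n in nodes) / len(nodes)  # mean free capacity
    cost = sum(n["geo_cost"] for n in nodes) / len(nodes)    # mean geo-location cost
    return power * free / cost

def rank_clusters(clusters):
    return sorted(clusters, key=availability, reverse=True)

ranking = [c["name"] for c in rank_clusters([
    {"name": "east", "power_healthy": True,
     "nodes": [{"load": 0.2, "geo_cost": 1.0}, {"load": 0.4, "geo_cost": 1.0}]},
    {"name": "west", "power_healthy": False,
     "nodes": [{"load": 0.1, "geo_cost": 1.0}]},
])]
```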
-
Publication number: 20240126620
Abstract: Systems and methods are provided for automatically filtering privileged methods from unprivileged methods, and thus preventing privileged methods from being available to an unelevated consumer application executing on an information handling system. Filtering privileged methods from unprivileged methods may be performed, for example, by identifying any unprivileged method/s within an original implementation class of an elevated publisher software application that are eligible to be exposed to (e.g., shared with) an unelevated consumer software application via a named pipe, and implementing a corresponding dynamic publisher object on the elevated publisher software application and an intermediary dynamic consumer proxy class on the unelevated consumer software application to prevent the unelevated consumer software application from calling any other methods (e.g.
Type: Application
Filed: October 13, 2022
Publication date: April 18, 2024
Inventors: Daniel Thomas Daugherty, Ricardo Antonio Ruiz
-
Publication number: 20240126621
Abstract: A dashboard runtime component includes (1) a visualization component configured to render a visual representation of data items retrieved from a data source and (2) a query execution component associated with at least the visualization component. The query execution component is configured to retrieve the data items from the data source.
Type: Application
Filed: December 20, 2023
Publication date: April 18, 2024
Inventors: Skip SAULS, Medha SRIVASTAVA, Edward MENGEL, Sameer SETHI, James DIEFENDERFER
-
Publication number: 20240126622
Abstract: A set of threads of an application is identified to be executed on a platform, where the platform comprises a multi-node architecture. A set of queues of an I/O device of the platform is reserved and associated with one of a plurality of nodes in the multi-node architecture. Data is received at the I/O device, where the I/O device is included in a particular one of the plurality of nodes. Response data is generated through execution of a thread in the set of threads using a processing core and memory of the particular node, and the response data is caused to be sent on the I/O device based on inclusion of the I/O device in the particular node.
Type: Application
Filed: December 27, 2023
Publication date: April 18, 2024
Inventors: Anil Vasudevan, Sridhar Samudrala, Tushar S. Gohad, Nash A. Kleppan, Stefan T. Peters
-
Publication number: 20240126623
Abstract: Methods, systems, and computer-readable media for tracing service interactions without global transaction identifiers are disclosed. A service monitoring system receives an event message from a first service in a service-oriented system. The event message comprises one or more elements of data from a body of a service request from an upstream service. The first service initiates a sub-task associated with the service request. The service monitoring system receives one or more additional event messages from one or more additional services. The additional event message(s) comprise one or more additional elements of data from one or more additional service requests associated with one or more additional sub-tasks. The service monitoring system determines, based (at least in part) on the element(s) of data in the event message and the additional element(s) of data in the additional event message(s), that the sub-task and the additional sub-task(s) are associated with a higher-level task.
Type: Application
Filed: December 28, 2023
Publication date: April 18, 2024
Applicant: Amazon Technologies, Inc.
Inventor: Felix Elliger
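A toy software analogy of the correlation idea in the abstract above: instead of a global transaction ID, events are grouped into the same higher-level task whenever they share a (key, value) element from a request body, with union-find linking events transitively. All field names and sample data are invented:

```python
from collections import defaultdict

def correlate(events):
    # Union-find over event indices; two events are linked when they
    # carry the same request-body element.
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    elem_owner = {}  # first event seen carrying a given body element
    for i, ev in enumerate(events):
        parent[i] = i
        for elem in ev["elements"].items():
            if elem in elem_owner:
                parent[find(i)] = find(elem_owner[elem])  # union
            else:
                elem_owner[elem] = i

    tasks = defaultdict(list)
    for i, ev in enumerate(events):
        tasks[find(i)].append(ev["service"])
    return list(tasks.values())

groups = correlate([
    {"service": "checkout", "elements": {"order": "123"}},
    {"service": "billing",  "elements": {"order": "123", "sku": "x1"}},
    {"service": "shipping", "elements": {"sku": "x1"}},
    {"service": "audit",    "elements": {"order": "999"}},
])
# checkout, billing, and shipping share elements transitively; audit does not
```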
-
Publication number: 20240126624
Abstract: Techniques for automatically generating an API in response to a natural language input are provided. A method includes: receiving, by a processor set, a natural language input provided via a conversational user interface of a client device; determining, by the processor set, requirements of a new application programming interface (API) by analyzing the natural language input using natural language understanding; and automatically generating, by the processor set, the new API based on the requirements.
Type: Application
Filed: October 14, 2022
Publication date: April 18, 2024
Inventors: Laurentiu Gabriel Ghergu, Natalie Brooks Powell, Karthik Muthuraman, Marian I Tataru
-
Publication number: 20240126625
Abstract: A request to configure a connect cluster including one or more connectors for a cloud computing system is received. The request includes a desired connector state. A connector specification file is automatically generated based on the desired connector state for the connect cluster via a declarative application programming interface (API). Application resources associated with the connect cluster are automatically configured based on the specification file.
Type: Application
Filed: October 12, 2022
Publication date: April 18, 2024
Inventors: Rajesh RC, Pei Yang, Andrew Ding, Rohit Bakhshi, Lokesh Shekar, Steven Costa
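A simplified sketch of the declarative step described above: turn a desired connector state into a connector specification document. The field names echo Kafka Connect-style configuration but are assumptions here, not the patent's actual schema:

```python
import json

def generate_spec(desired_state):
    # Build a connector specification from the declared desired state.
    spec = {
        "name": desired_state["name"],
        "config": {
            "connector.class": desired_state["connector_class"],
            "tasks.max": str(desired_state.get("tasks", 1)),
            **desired_state.get("extra_config", {}),
        },
    }
    return json.dumps(spec, indent=2, sort_keys=True)

spec_json = generate_spec({"name": "orders-sink",
                           "connector_class": "S3SinkConnector",
                           "tasks": 3,
                           "extra_config": {"topics": "orders"}})
```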
-
Publication number: 20240126626
Abstract: The disclosure is directed to systems and techniques for executing a documentation application displaying a graphical user interface having a content-creation field configured to receive textual input. A link-creation window may be generated, which facilitates browsing third-party content without leaving a current application. Using the disclosed interface, a user can generate a selectable graphical object that links to third-party content from within the context of the content-creation field interface.
Type: Application
Filed: December 20, 2023
Publication date: April 18, 2024
Inventors: Jonathan George Katahanas, Abhinav Kishore, Vijay Suresh Sutrave, James Rotanson, Tong Li
-
Publication number: 20240126627
Abstract: Provided are a method and apparatus for obtaining information of a stack frame in a call stack, a device, and a medium. The method includes: obtaining a to-be-processed call stack with an abnormality during running of a program, each area element of the to-be-processed call stack including a method pointer and a corresponding instruction offset value; applying for a first memory area used to store the method pointer and the corresponding instruction offset value; applying for a second memory area, and storing an address of the first memory area into a first area element in the second memory area; applying for a third memory area, and storing an address of the second memory area into a target storage area in the third memory area; and obtaining information of each stack frame in the to-be-processed call stack based on the address of the second memory area stored in the target storage area.
Type: Application
Filed: February 28, 2022
Publication date: April 18, 2024
Inventor: Hongkai LIU
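A high-level Python analogy of the three-level indirection in the abstract above: the first area holds (method pointer, instruction offset) pairs, the second area holds the "address" of the first, and a target slot in the third area holds the "address" of the second. Object references stand in for raw memory addresses, and the sample frame data is invented:

```python
def capture_stack(frames):
    first_area = [(f["method"], f["offset"]) for f in frames]  # memory area 1
    second_area = [first_area]                 # area 2 stores the address of area 1
    third_area = {"target_slot": second_area}  # area 3 stores the address of area 2
    return third_area

def read_frames(third_area):
    # follow the chain of indirections back to the per-frame records
    second_area = third_area["target_slot"]
    return list(second_area[0])

snapshot = capture_stack([{"method": "parse", "offset": 12},
                          {"method": "main", "offset": 4}])
frames = read_frames(snapshot)
```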
-
Publication number: 20240126628
Abstract: When a CRAM error detection unit (12c) detects a one-bit soft error in a CRAM (12b), a failure management unit (16) performs control for coping with a failure in a downstream-side communication device (14), triggered by an error notification (ER1). The failure detection sensitivity of a failure detection unit (15) is temporarily increased compared to a steady state. Alternatively, an upstream-side communication device (11) retransmits an original communication signal (SG1). Alternatively, a known test signal is transmitted from the upstream-side communication device (11) before retransmission, the failure is diagnosed on the downstream side, and then the test signal is retransmitted. Alternatively, the signal is processed over two communication paths so that a signal without an error is selected and an erroneous signal is discarded. When a failure is detected, the device is restarted.
Type: Application
Filed: February 16, 2021
Publication date: April 18, 2024
Inventor: Mizuki TATENO
-
Publication number: 20240126629
Abstract: A semiconductor device includes a first clock; a second clock; a first baud rate generator generating a basic clock by using the first clock; a second baud rate generator generating a basic clock by using the second clock; and a control circuit correcting the first baud rate generator. The control circuit includes: a correction operation signal output circuit outputting a correction operation signal on the basis of the second clock of the second baud rate generator; and a correction value setting circuit outputting a correction value setting signal on the basis of the correction operation signal. The second baud rate generator counts a correction period in accordance with the correction operation signal by using the first clock on the basis of the correction value setting signal, and sets a baud rate correction value on the basis of a count result.
Type: Application
Filed: September 15, 2023
Publication date: April 18, 2024
Inventor: Zheng GONG
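As a software analogy of the correction scheme above (the patent describes hardware counters): during a correction window measured with the reference clock, count cycles of the clock to be corrected, and derive a corrected baud-rate divider from the inferred frequency. The frequencies and target baud rate are invented values:

```python
def correction_value(freq_main, freq_ref, ref_cycles, target_baud):
    window = ref_cycles / freq_ref             # correction period in seconds
    count = round(freq_main * window)          # main-clock cycles counted in the window
    measured_freq = count / window             # inferred main-clock frequency
    return round(measured_freq / target_baud)  # corrected baud-rate divider

# e.g. an 8 MHz main clock measured against a 32.768 kHz reference
div = correction_value(freq_main=8_000_000, freq_ref=32_768,
                       ref_cycles=32_768, target_baud=9600)
```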
-
Publication number: 20240126630
Abstract: An embodiment includes detecting a set of anomalies recorded during a first predefined window of time in log entries for a computer environment. The embodiment also includes generating cluster data representative of a cluster of anomalies from among the set of anomalies, where the cluster is formed using a lattice clustering algorithm that spatially distinguishes the cluster of anomalies from other anomalies in the set of anomalies. The embodiment also includes composing an explanation using log templates generated from log entries associated with the cluster of anomalies.
Type: Application
Filed: October 12, 2022
Publication date: April 18, 2024
Applicant: International Business Machines Corporation
Inventors: Seema Nagar, Mudhakar Srivatsa, Pooja Aggarwal, Joshua M Rosenkranz, Dipanwita Guhathakurta, Amitkumar Manoharrao Paradkar, Rohan R. Arora
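A rough sketch in the spirit of the abstract above: bucket anomalies into fixed-width cells of a one-dimensional lattice over time, merge adjacent non-empty cells into clusters, and compose an explanation from the log templates of each cluster. The cell width, the one-dimensional simplification of the patent's lattice clustering, and the sample data are assumptions:

```python
from collections import defaultdict

def lattice_cluster(anomalies, cell=60):
    # bucket anomalies into fixed-width time cells
    cells = defaultdict(list)
    for a in anomalies:
        cells[a["t"] // cell].append(a)
    # adjacent non-empty cells merge into one cluster
    clusters, current, prev = [], [], None
    for idx in sorted(cells):
        if prev is not None and idx != prev + 1:
            clusters.append(current)
            current = []
        current.extend(cells[idx])
        prev = idx
    if current:
        clusters.append(current)
    return clusters

def explain(cluster):
    # compose an explanation from the distinct log templates in the cluster
    templates = sorted({a["template"] for a in cluster})
    return f"{len(cluster)} anomalies matching: " + "; ".join(templates)

anoms = [{"t": 10, "template": "disk full"},
         {"t": 50, "template": "disk full"},
         {"t": 70, "template": "oom"},
         {"t": 500, "template": "timeout"}]
clusters = lattice_cluster(anoms)
summary = explain(clusters[0])
```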
-
Publication number: 20240126631
Abstract: Systems and methods for generating an enhanced error message are provided. An example method includes: receiving one or more raw error messages. The one or more raw error messages include one or more stack traces. The method further includes matching at least one raw error message of the one or more raw error messages to one or more error rules from a plurality of error rules. The one or more error rules include regular expression patterns. The method further includes parsing the at least one raw error message, based on the one or more matched error rules from the plurality of error rules; and generating one or more enhanced error messages, based on the at least one parsed raw error message. The one or more enhanced error messages include one or more natural language sentences. The method further includes embedding the one or more enhanced error messages into a website.Type: Application
Filed: October 11, 2023
Publication date: April 18, 2024
Inventors: Timothy Tamm, Richard Niemi, Ivan Charbonneau, Kevin Lynch, Shelby Vanhooser
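A minimal sketch of the rule-matching step described above: each error rule pairs a regular-expression pattern with a natural-language template, and a matched message is rewritten via the captured groups. The rules and sample messages are invented examples, not the patent's rule set:

```python
import re

# Each rule: (compiled regex with named groups, natural-language template).
ERROR_RULES = [
    (re.compile(r"NullPointerException at (?P<cls>[\w.]+)\.(?P<method>\w+)"),
     "A null value was dereferenced in {method} of class {cls}."),
    (re.compile(r"Connection refused.*port (?P<port>\d+)"),
     "The service could not be reached on port {port}; it may be down."),
]

def enhance(raw_message):
    for pattern, template in ERROR_RULES:
        m = pattern.search(raw_message)
        if m:
            # fill the natural-language template from the captured groups
            return template.format(**m.groupdict())
    return raw_message  # no rule matched; pass the raw message through

msg = enhance("java.lang.NullPointerException at com.app.Cart.total")
# → "A null value was dereferenced in total of class com.app.Cart."
```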
-
Publication number: 20240126632
Abstract: Systems and methods for automated remediation of issues arising in a data management storage system are provided. Deployed assets of a storage solution vendor may deliver telemetry data to the vendor on a regular basis. The received telemetry data may be processed by an AIOps platform to perform predictive analytics and arrive at "community wisdom" from the vendor's installed user base. In one embodiment, an insight-based approach is used to facilitate risk detection and remediation, including proactively addressing issues before they turn into more serious problems. For example, based on continuous learning from the community wisdom and making one or both of a rule set and a remediation set derived therefrom available for use by cognitive computing co-located with a customer's storage system, a risk to which the storage system is exposed may be determined and a corresponding remediation may be deployed to address or mitigate the risk.
Type: Application
Filed: December 21, 2023
Publication date: April 18, 2024
Applicant: NetApp, Inc.
Inventors: Nibu Habel, Jeffrey Scott MacFarland, John Richard Swift
-
Publication number: 20240126633
Abstract: A method for responding to a command is adapted for a storage device. The method for responding to a command includes the following steps: sequentially receiving a first command and a second command by a bridge of the storage device from a host; executing the first command and the second command to generate a status completion signal or a status error signal by the bridge; and detecting an error state of at least one of the first command and the second command to execute a response mode or an idle mode by the bridge according to the error state so as to respond to the host.
Type: Application
Filed: August 14, 2023
Publication date: April 18, 2024
Inventors: Yi Cheng TSAI, Sung-Kao LIU, Cheng-Yuan HSIAO, Po-Hao CHEN
-
Publication number: 20240126634
Abstract: A method in an illustrative embodiment of the present disclosure includes determining, utilizing a first diagnosis model deployed in a storage system, whether a cause of a fault belongs to environmental factors. The method further includes determining, responsive to determining that the cause of the fault belongs to the environmental factors, whether the fault can be solved locally in the storage system. The method further includes sending, responsive to determining that the fault cannot be solved locally in the storage system, the fault to a second diagnosis model, wherein the first diagnosis model is obtained by distilling the second diagnosis model. According to the method for fault diagnosis of the present disclosure, particular faults can be diagnosed and solved locally in a storage system, so that the workload of a customer support team of the storage system in a cloud can be reduced.
Type: Application
Filed: November 21, 2022
Publication date: April 18, 2024
Inventors: Jiacheng Ni, Jinpeng Liu, Zijia Wang, Zhen Jia