Patent Applications Published on September 12, 2024
  • Publication number: 20240303108
    Abstract: A method for assignment and prioritization of tasks for satisfying deadlines in decentralized execution of tasks is provided. The method includes: receiving inputs that relate to a set of tasks, a set of agents, a set of goals, a set of priority levels that are assignable to each task, and a partial order plan that relates to ordering dependencies for performing and completing the tasks; determining a qualification function that relates to whether a particular task is performable by a particular agent; determining an availability function that relates to a respective availability of each agent during a particular time interval; and analyzing the partial order plan, the qualification function, and the availability function in order to obtain an assignment function that relates to a proposed set of assignments of tasks to agents and a prioritization function that relates to a proposed set of assignments of tasks to priority levels.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Applicant: JPMorgan Chase Bank, N.A.
    Inventors: Sriram GOPALAKRISHNAN, Daniel BORRAJO
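The assignment idea in 20240303108 can be illustrated with a minimal greedy sketch: each task is matched to the first agent that passes the qualification function and the availability function for the task's time window. All names and data below are illustrative assumptions, not taken from the filing.

```python
def assign_tasks(tasks, agents, qualified, available):
    """tasks: list of (task_id, interval); qualified(agent, task_id) -> bool;
    available(agent, interval) -> bool. Returns {task_id: agent or None}."""
    assignment = {}
    for task_id, interval in tasks:
        assignment[task_id] = None
        for agent in agents:
            if qualified(agent, task_id) and available(agent, interval):
                assignment[task_id] = agent
                break
    return assignment

tasks = [("t1", (0, 2)), ("t2", (1, 3))]
agents = ["a1", "a2"]
quals = {("a1", "t1"), ("a2", "t1"), ("a2", "t2")}
busy = {("a1", (0, 2))}  # a1 is unavailable during t1's window

result = assign_tasks(
    tasks, agents,
    qualified=lambda a, t: (a, t) in quals,
    available=lambda a, iv: (a, iv) not in busy,
)
# t1 falls through to a2 because a1 fails the availability check
```

The filing additionally derives a prioritization function from the partial order plan; the sketch covers only the qualification/availability matching step.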
  • Publication number: 20240303109
    Abstract: Techniques are disclosed relating to a computer system identifying usage of a plurality of individual instances of a common computation task by a plurality of users of a networked service. These individual instances of the common computation task may generate a respective data set. Techniques also include creating, by the computer system, a global process to perform the common computation task. Execution of the global process may include generation of a global data set that includes at least portions of the respective data sets. Additionally, techniques include modifying, by the computer system, respective accounts of a subset of the plurality of users to use the global process in place of using a respective instance of the common computation task, as well as providing, by the computer system, the global data set generated by the global process to the respective accounts of the subset of users.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Prabin Patodia, Rajendra Bhat
  • Publication number: 20240303110
    Abstract: The present disclosure relates to a method and system for identifying a task sequence from an interaction stream. The method includes receiving an interaction stream related to one or more interactions of a user with a computing system and one or more events that occurred from those interactions. The processed interaction stream is transformed into n-grams. A plurality of potential data candidates is then identified for each of the n-grams by interpreting corresponding start markers and end markers. The method further includes transforming each of the identified potential data candidates into a corresponding potential data candidate vector, and determining a similarity score for each pair of potential data candidates by comparing the potential data candidate vectors of the corresponding pair.
    Type: Application
    Filed: July 24, 2023
    Publication date: September 12, 2024
    Applicant: EdgeVerve Systems Limited
    Inventors: ARCHANA YADAV, Amrutha BAILURI
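A hypothetical, minimal rendering of the pipeline in 20240303110: slice an event stream into n-grams, embed each candidate as a bag-of-events vector, and score pairs by cosine similarity. The function names and the example stream are illustrative only.

```python
from collections import Counter
import math

def ngrams(events, n):
    """Sliding windows of length n over the event stream."""
    return [tuple(events[i:i + n]) for i in range(len(events) - n + 1)]

def cosine(a, b):
    """Cosine similarity between two candidates via bag-of-events vectors."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[k] * vb[k] for k in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

stream = ["open", "edit", "save", "open", "edit", "save"]
grams = ngrams(stream, 3)
score = cosine(grams[0], grams[3])  # identical ("open","edit","save") windows
```

The filing's start/end-marker interpretation is not modeled here; the sketch shows only the n-gram vectorization and pairwise scoring.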
  • Publication number: 20240303111
    Abstract: Disclosed herein are an apparatus and method for offloading parallel computation tasks. The apparatus inserts requests to execute multiple parallel thread groups into at least one parallel thread group queue, wherein, when a preset order of priority exists, the requests are inserted into the at least one parallel thread group queue according to that order; executes parallel threads of the parallel thread groups using a parallel thread group execution request entry extracted from the parallel thread group queue according to the order of priority; inserts an execution result into an execution result queue when execution of the parallel threads, according to an execution sequence scheduled in execution startup routine code, is terminated; and checks the execution termination state of the parallel thread groups by checking the execution result queue.
    Type: Application
    Filed: January 18, 2024
    Publication date: September 12, 2024
    Inventors: Shin-Young AHN, Young-Ho KIM, Eun-Ji LIM, Woo-Jong HAN, Yoo-Mi PARK, Sung-Ik JUN
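The queue discipline described in 20240303111 can be sketched with a priority heap feeding an execution result queue: requests enter by priority, execute in that order, and results land in a separate queue that can be polled for termination state. Names and the toy workload are illustrative assumptions.

```python
import heapq
from collections import deque

def run_groups(requests):
    """requests: list of (priority, group_id, fn); lower number = higher priority.
    Executes groups in priority order and returns the execution result queue."""
    queue = []
    for order, (prio, gid, fn) in enumerate(requests):
        heapq.heappush(queue, (prio, order, gid, fn))  # order breaks priority ties stably
    results = deque()
    while queue:
        _, _, gid, fn = heapq.heappop(queue)
        results.append((gid, fn()))  # insert execution result on termination
    return list(results)

out = run_groups([(2, "g-low", lambda: "b"), (1, "g-high", lambda: "a")])
# "g-high" runs before "g-low" despite arriving later
```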
  • Publication number: 20240303112
    Abstract: A computational storage device includes a storage device and a computation control circuit. The computation control circuit includes a multi-core processor and is configured to generate an input/output (I/O) task according to an I/O command, generate a background task according to the I/O command, select an idle core among a plurality of cores in the multi-core processor to perform the background task, and control the storage device. The computation control circuit may include a task control module configured to select the idle core.
    Type: Application
    Filed: August 15, 2023
    Publication date: September 12, 2024
    Inventors: Yeohyeon PARK, Seungjin LEE, Changgyu LEE, Youngjae KIM, Inhyuk PARK, Soonyeal YANG, Woo Suk CHUNG
  • Publication number: 20240303113
    Abstract: Embodiments herein describe a pull-based model to dispatch tasks in an accelerator device. That is, rather than a push-based model where a connected host pushes tasks into hardware (HW) queues in the accelerator device, the embodiments herein describe a pull-based model where a command processor (CP) loads tasks into the HW queues after any data dependencies have been resolved.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Anthony GUTIERREZ, Paul BLINZER, Samuel BAYLISS, Stephen Alexander ZEKANY, Ali Arda EKER
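The pull-based dispatch contrast drawn in 20240303113 can be modeled in a few lines: instead of a host pushing tasks, a command-processor loop pulls only those tasks whose data dependencies have resolved into the hardware queue. This is a behavioral sketch under assumed names, not the filing's implementation.

```python
from collections import deque

def pull_ready(tasks, deps, completed):
    """Return the tasks whose dependencies are all completed (what the CP pulls)."""
    return [t for t in tasks if all(d in completed for d in deps.get(t, []))]

def dispatch_all(tasks, deps):
    """Drain all tasks into the HW queue, pulling each only once it is ready."""
    completed, hw_queue = set(), deque()
    pending = list(tasks)
    while pending:
        ready = pull_ready(pending, deps, completed)
        if not ready:
            raise RuntimeError("dependency cycle")
        for t in ready:
            hw_queue.append(t)   # CP loads into the HW queue only when ready
            completed.add(t)     # model instantaneous completion
            pending.remove(t)
    return list(hw_queue)

order = dispatch_all(["c", "a", "b"], {"b": ["a"], "c": ["b"]})
# tasks enter the HW queue in dependency order regardless of arrival order
```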
  • Publication number: 20240303114
    Abstract: Systems, methods, and data storage devices for dynamic allocation of capacity to namespaces are described. A data storage device may support multiple host connections to multiple namespaces allocated in its non-volatile storage medium according to a storage protocol, such as non-volatile memory express (NVMe). Each namespace may initially be allocated with an allocated capacity. For at least some of the namespaces, a portion of the allocated capacity may be allocated to a floating namespace pool. When the fill mark for one of the namespaces reaches a flexible capacity threshold, capacity from the floating namespace pool may be dynamically allocated to that namespace and removed from the floating namespace pool.
    Type: Application
    Filed: July 20, 2023
    Publication date: September 12, 2024
    Inventors: Sridhar Sabesan, Dinesh Babu, Pavan Gururaj
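The floating-pool mechanism in 20240303114 reduces to a simple rule: when a namespace's fill mark crosses a flexible capacity threshold, grant it capacity from the shared pool. The threshold, grant size, and data shapes below are illustrative assumptions.

```python
def top_up(namespaces, pool, threshold=0.8, grant=10):
    """namespaces: {name: {"used": int, "cap": int}}. When a namespace's fill
    ratio reaches the threshold, move up to `grant` units from the floating
    namespace pool into its allocated capacity."""
    for ns in namespaces.values():
        if ns["cap"] and ns["used"] / ns["cap"] >= threshold and pool > 0:
            extra = min(grant, pool)
            ns["cap"] += extra   # dynamically allocate to the namespace
            pool -= extra        # and remove from the floating pool
    return namespaces, pool

state = {"ns1": {"used": 85, "cap": 100}, "ns2": {"used": 10, "cap": 100}}
state, remaining = top_up(state, pool=15)
# ns1 (85% full) crosses the 80% threshold and is topped up; ns2 is untouched
```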
  • Publication number: 20240303115
    Abstract: The present technology relates to an information processing apparatus and a program that enable effective use of a memory area in which implementation data necessary for implementation of a service is stored in an allocation area allocated to each service. A process is performed in which an allocation area that remains allocated to a predetermined service, but from which at least a part of the implementation data necessary for implementation of each of one or more services has been deleted, is changed into a service-undetermined allocation area not allocated to any of the services, from among the allocation areas respectively allocated to the one or more services in the memory area in which the implementation data is stored. Thereafter, a service issuance process of storing the implementation data corresponding to a new service in the service-undetermined allocation area is performed.
    Type: Application
    Filed: January 13, 2022
    Publication date: September 12, 2024
    Inventor: JUNJI GOTO
  • Publication number: 20240303116
    Abstract: Provided are a processing system and method for increasing memory resources. The method includes generating, by a host node, a device memory resource request and transmitting the device memory resource request to a network manager, providing, by the network manager, memory node information and connection information to the host node in response to the memory resource request, generating, by the host node, an optical link frame corresponding to the request, connecting, by the network manager, a memory node whose memory resources are available and the host node by controlling an optical switch, and communicating, by the host node and the memory node of which memory resources are available, with each other using a light signal corresponding to the optical link frame.
    Type: Application
    Filed: March 11, 2024
    Publication date: September 12, 2024
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ji Wook YOUN, Daeub KIM, Bup Joong KIM, Chanho PARK, Jongtae SONG, Joon Ki LEE, Kyeong-Eun HAN
  • Publication number: 20240303117
    Abstract: Operations of a workload are assigned to physical resources of a physical device array. The workload includes a graph of operations to be performed on a physical device array. The graph of operations is partitioned into subgraphs. Partitioning includes at least minimizing the quantity of subgraphs and maximizing resource utilization per subgraph. A logical mapping of the subgraph to logical processing engine (PE) units is generated using features of the subgraph and tiling factors of the logical PE units. The logical mapping is assigned to physical PE units of the physical device array at least by minimizing network traffic across the physical PE units. The operations of the subgraph are performed using the physical PE units to which the logical mapping is assigned. This process enhances the computational efficiency of the array when executing the workload.
    Type: Application
    Filed: February 24, 2023
    Publication date: September 12, 2024
    Inventors: Fanny NINA PARAVECINO, Michael Eric DAVIES, Abhishek Dilip KULKARNI, Md Aamir RAIHAN, Ankit MORE, Aayush ANKIT, Torsten HOEFLER, Douglas Christopher BURGER
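The partitioning objective in 20240303117 (fewest subgraphs, high utilization per subgraph) resembles greedy packing against a per-subgraph resource budget. The sketch below is a deliberately simplified stand-in: it packs operations in graph order against a capacity, ignoring the filing's tiling factors and traffic minimization. Names and costs are invented.

```python
def partition_ops(ops, capacity):
    """ops: list of (op_id, cost). Greedily pack ops, in order, into as few
    subgraphs as possible, each holding at most `capacity` total cost."""
    subgraphs, current, load = [], [], 0
    for op, cost in ops:
        if load + cost > capacity and current:
            subgraphs.append(current)  # close the full subgraph
            current, load = [], 0
        current.append(op)
        load += cost
    if current:
        subgraphs.append(current)
    return subgraphs

parts = partition_ops([("matmul", 4), ("relu", 1), ("conv", 5), ("add", 2)], capacity=6)
# three subgraphs: the first is packed to 5/6 before "conv" forces a new one
```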
  • Publication number: 20240303118
    Abstract: Various embodiments include methods and systems for identifying vehicle computing resources with capacity that can be offered to support remote computing systems with Edge computing services. Various embodiments may include identifying an operating state of a vehicle computing system not responsible for vehicle navigation or maneuvering, selecting an available computing capacity of the computing system based on the identified operating state, and offering the available computing capacity of the computing system to perform a computing task received from a network computing device. In some embodiments, selecting an available computing capacity of the computing system may include identifying a predetermined available computing capacity of the computing system corresponding to the identified operating state in a data table stored in memory.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Kapil GULATI, Agrim BARI, Hong CHENG, Shailesh PATIL, Gene Wesley MARSH
  • Publication number: 20240303119
    Abstract: Automatic process generation and recommendation can include extracting, in real time, features from user input to a computer. The features extracted can be compared with recorded features corresponding to a prior behavior. A user-intended action can be predicted in response to a match between the features extracted and the features corresponding to the prior behavior. A sequence of processor-executable actions corresponding to the prior behavior can be generated.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Xiao Xuan Fu, Jiang Yi Liu, Wen Qi WQ Ye, Si Yu Chen, Min Cheng
  • Publication number: 20240303120
    Abstract: This invention pertains to optimizing data-analysis in distributed computing over a LAN or WAN of compute nodes. The method disclosed applies to processes that can be partitioned into tasks amenable to embarrassingly parallel compute. Reversing the traditional master-slave operation, this method introduces node-initiated task handling by synapses: scripts in daemon mode initiating requests for tasks specified by instructions in line items from a shared process list subject to atomic updating. This method realizes dynamic load balancing up to compute-limited performance in heterogeneous distributed computing, when tasks have compute demands that are not predictable or nodes vary in compute performance. A particular objective is high-throughput signal-processing in time-critical processes, common in engineering and multi-messenger astronomy.
    Type: Application
    Filed: March 12, 2023
    Publication date: September 12, 2024
    Inventor: Maurice Hendrikus Paulus van Putten
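The node-initiated model in 20240303120 can be mimicked with worker threads that each claim the next unclaimed line item from a shared list under a lock (standing in for the atomic update). The worker body, task payloads, and naming are illustrative assumptions.

```python
import threading

def worker(task_list, lock, claimed, results):
    """A 'synapse': the node itself pulls the next unclaimed line item."""
    while True:
        with lock:  # atomic claim against the shared process list
            idx = next((i for i in range(len(task_list)) if i not in claimed), None)
            if idx is None:
                return
            claimed.add(idx)
        results.append((idx, task_list[idx] ** 2))  # the 'compute' step

tasks = [1, 2, 3, 4, 5]
lock, claimed, results = threading.Lock(), set(), []
threads = [threading.Thread(target=worker, args=(tasks, lock, claimed, results))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every line item is processed exactly once, however the threads interleave
```

Load balancing falls out naturally: a fast node simply claims more line items than a slow one.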
  • Publication number: 20240303121
    Abstract: Managing the resource demand load for edge systems is significantly more complex than for other systems, such as cloud environments. A time period in which an application or task is operating based on initial demand resource load values that are provided by a customer may be inaccurate, which may expose sub-standard execution. Embodiments herein seek to significantly mitigate the potential of sub-standard execution. Embodiments collect a repository of resource demand load usage data over a time period that can be used to accurately determine the statistical moments of uncertain resource demand load. In one or more embodiments, a repository of hypervector and/or hyperspace representations may be generated and used to help with resource demand load estimation.
    Type: Application
    Filed: August 7, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
  • Publication number: 20240303122
    Abstract: Provided is an apparatus for accelerating graph neural network (GNN) pre-processing, the apparatus including a set-partitioning accelerator configured to sort each edge of an original graph stored in a coordinate list (COO) format by a node number, perform radix sorting based on a vertex identification (VID) to generate a COO array of a preset length, and perform uniform random sampling on some nodes of a given node array, a merger configured to merge the COO array of the preset length to generate one sorted COO array, a re-indexer configured to assign new consecutive VIDs respectively to the nodes selected through the uniform random sampling, and a compressed sparse row (CSR) converter configured to convert the edges sorted by the node number into a CSR format.
    Type: Application
    Filed: August 22, 2023
    Publication date: September 12, 2024
    Applicant: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Myoungsoo JUNG, Seungkwan Kang, Donghyun Gouk, Miryeong Kwon, Hyunkyu CHOI, Junhyeok Jang
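The final step of 20240303122, converting node-sorted COO edges into CSR, is a standard transformation worth spelling out: count edges per source node, then prefix-sum the counts into a row-pointer array. This is the textbook algorithm, not the filing's hardware pipeline; variable names are generic.

```python
def coo_to_csr(edges, num_nodes):
    """edges: list of (src, dst) in COO form. Returns (row_ptr, col_idx) in CSR,
    where edges of node i occupy col_idx[row_ptr[i]:row_ptr[i + 1]]."""
    edges = sorted(edges)                 # sort by source node number
    row_ptr = [0] * (num_nodes + 1)
    col_idx = []
    for src, dst in edges:
        row_ptr[src + 1] += 1             # per-row edge counts, offset by one
        col_idx.append(dst)
    for i in range(num_nodes):            # prefix-sum counts into row pointers
        row_ptr[i + 1] += row_ptr[i]
    return row_ptr, col_idx

row_ptr, col_idx = coo_to_csr([(2, 0), (0, 1), (0, 2), (1, 2)], num_nodes=3)
# node 0 has two edges (to 1 and 2), nodes 1 and 2 one each
```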
  • Publication number: 20240303123
    Abstract: Computer workloads can be managed across distributed computing clusters according to some aspects of the present disclosure. In one example, a system can receive a request from a workload manager for identifying a computing cluster to which to assign a workload. The system can determine that the workload is to be assigned to a particular computing cluster among a plurality of computing clusters based on historical information about replica deployment by the particular computing cluster. The system can then transmit a response to the workload manager for causing the workload manager to assign the workload to the particular computing cluster.
    Type: Application
    Filed: May 21, 2024
    Publication date: September 12, 2024
    Inventors: Huamin Chen, Ricardo Noriega De Soto
  • Publication number: 20240303124
    Abstract: Presented herein are embodiments to implement a temporal queueing system with class-based fair queuing and dynamic resource allocation based on a novel look-ahead capability to manage various models and workloads for utilization/efficiency improvements. Embodiments may be implemented to allocate accelerator resources based on platform-defined timeslots, and therefore significantly increase the ability of workloads to access hardware accelerator resources. Training and inference may be supported with flexible preemption and the ability to support run-to-completion for training tasks while still supporting non-run-to-completion for inference tasks. Embodiments may be implemented by an edge software operation platform through virtual accelerators to allow emulation of different types of hardware accelerators and to map to the hardware accelerators with hardware-specific procedures managed by an edge orchestrator and an edge endpoint.
    Type: Application
    Filed: July 19, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
  • Publication number: 20240303125
    Abstract: Provided is a method for creating an operation call list for artificial intelligence calculation, which is performed by one or more processors, and includes acquiring a trace from a source program including an artificial intelligence calculation, wherein the trace includes at least one of code or primitive operation associated with the source program, and creating a call list including a plurality of primitive operations based on the trace, in which the plurality of primitive operations may be included in an operation library accessible to each of a plurality of accelerators.
    Type: Application
    Filed: January 22, 2024
    Publication date: September 12, 2024
    Inventors: Gangwon Jo, Jungho Park
  • Publication number: 20240303126
    Abstract: Request processing performance of a processing network can be increased using certain systems and methods. For example, a request processing platform can receive a plurality of operation requests generated by a load simulator. The load simulator may test a target load associated with the plurality of operation requests. Based on the target load, the request processing platform can adjust an allocation of computing resources in the request processing platform. Additionally, the request processing platform can adjust a configuration of an orchestration engine based on a predetermined throughput threshold associated with the target load. The request processing platform may establish a connection to an internal service platform that can process the plurality of operation requests.
    Type: Application
    Filed: March 6, 2023
    Publication date: September 12, 2024
    Inventors: Sanjeev Kumar Jha, Tekchand Prasad, Suresh Edupuganti
  • Publication number: 20240303127
    Abstract: Managing the resource demand load for edge systems is significantly more complex than for other systems, such as cloud environments. Unlike cloud systems and other frameworks that are able to use closed-form solutions based on Poisson processes or other tractable Gaussian-based probability distributions, edge systems present complex waveforms, pareto/alpha-stable distributions, and long-range dependence. Based on elaborately designed embodiments that recognize the complexities of edge data, one can estimate scaling and multi-fractal dimensionality to determine predictive models.
    Type: Application
    Filed: August 7, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
  • Publication number: 20240303128
    Abstract: Managing the resource demand load for edge systems is significantly more complex than for other systems, such as cloud environments. A time period in which an application or task is operating based on initial demand resource load values that are provided by a customer may be inaccurate, which may expose sub-standard execution. Embodiments herein seek to significantly mitigate the potential of sub-standard execution. Embodiments collect a repository of resource demand load usage data over a time period that can be used to accurately determine the statistical moments of uncertain resource demand load. In one or more embodiments, a repository of hypervector and/or hyperspace representations may be generated and used to help with resource demand load estimation.
    Type: Application
    Filed: August 7, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
  • Publication number: 20240303129
    Abstract: Managing the resource demand load for edge systems is significantly more complex than for other systems, such as cloud environments. Embodiments herein provide edge resource demand load estimation systems and methods that inform scheduling and associated edge orchestration to ensure that edge system resource capacity is appropriately utilized. Efficient utilization allows an increased number of applications to be deployed at a reduced level of reserved resources. Also presented are embodiments of assurance mechanisms for monitoring edge resource demand load characterizations. In one or more embodiments, when an estimate or estimates are deemed to not be valid (e.g., having experienced stationary drift), updated estimates may be obtained.
    Type: Application
    Filed: August 7, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
  • Publication number: 20240303130
    Abstract: Managing the resource demand load for edge systems is significantly more complex than for other systems, such as cloud environments. Edge resource demand load scheduling systems and methods are disclosed that can ensure that edge systems operate smoothly and efficiently while balancing multiple scheduling objectives. Scheduling techniques disclosed herein may utilize heuristic rules for candidate edge system selection (e.g., utilizing ARMA/ARIMA averages and/or service level objectives) and modified best fit decreasing (mBFD) assignment/allocation techniques.
    Type: Application
    Filed: August 7, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
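The mBFD allocation named in 20240303130 builds on classic best fit decreasing. As a reference point, here is plain BFD: place each demand, largest first, on the node with the least remaining-but-sufficient capacity. The filing's modifications (ARMA/ARIMA averages, service level objectives) are not modeled; data shapes are invented.

```python
def best_fit_decreasing(demands, capacities):
    """Classic BFD as a stand-in for the filing's mBFD. Returns a mapping
    {demand_index: node_index or None} and the remaining node capacities."""
    remaining = list(capacities)
    placement = {}
    for i, d in sorted(enumerate(demands), key=lambda p: -p[1]):
        best = None
        for node, free in enumerate(remaining):
            # "best fit": the tightest node that can still host this demand
            if free >= d and (best is None or free < remaining[best]):
                best = node
        if best is None:
            placement[i] = None          # no node can host this demand
        else:
            placement[i] = best
            remaining[best] -= d
    return placement, remaining

placement, remaining = best_fit_decreasing([5, 3, 4], [6, 6])
# demands 5 and 4 land on separate nodes; demand 3 no longer fits anywhere
```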
  • Publication number: 20240303131
    Abstract: A computer-implemented method for orchestrating execution of workloads on nodes includes determining a set of requirements for resources needed for execution of the workload; determining for each compute node an availability of the resources required; establishing multiple candidate configurations having an assignment of each compute workload to at least one pair of a compute node and a working class, wherein different working classes differ at least in the degree of retention of the compute workload in memory and/or in at least one cache of the compute node after execution; computing for each candidate configuration at least one figure of merit with respect to at least one given optimization goal; and determining a candidate configuration with the best figure of merit as the optimal configuration.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: ABB Schweiz AG
    Inventors: Santonu Sarkar, Marie Christin Platenius-Mohr, Jan Christoph Schlake, Madapu Amarlingam, Nafise Eskandani, Reuben Borrison
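The final selection step of 20240303131 is a straightforward argmax over candidate configurations by figure of merit. A trivial sketch, with an invented merit function and configuration shape:

```python
def pick_optimal(configs, merit):
    """configs: candidate assignments; merit(config) -> float, higher is better.
    Returns the candidate configuration with the best figure of merit."""
    return max(configs, key=merit)

# Hypothetical candidates: the working class governs retention in memory/cache
configs = [{"node": "n1", "working_class": "retained"},
           {"node": "n2", "working_class": "evicted"}]
best = pick_optimal(configs,
                    merit=lambda c: 1.0 if c["working_class"] == "retained" else 0.5)
```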
  • Publication number: 20240303132
    Abstract: A method and a system for providing a combination of optimal and stable instances are disclosed. The method includes receiving configuration information of an application for execution; identifying parameters related to the application of a user; identifying a set of optimal instances based on the identified parameters; fetching data of historical spot instance(s) from a host platform; predicting a stability score for each of the optimal spot instances based on at least the data of the historical spot instance(s); predicting an intermediate set of optimal and stable spot instances from the at least one optimal spot instance based on the stability score of the optimal spot instances; and predicting the combination of optimal and stable instances based at least on a cost factor and on at least one of the intermediate set of optimal and stable spot instances and a set of optimal on-demand instances.
    Type: Application
    Filed: April 19, 2023
    Publication date: September 12, 2024
    Applicant: JPMorgan Chase Bank, N.A.
    Inventors: Rakesh Kumar KASHYAP, Abdul Subhan Shoukat GHOUSE, Srileka VIJAYAKUMAR, Faraz ZAIDI, Keerthi CHIVUKULA
  • Publication number: 20240303133
    Abstract: A method performed in a computing device, which includes a processor and a storage medium storing one or more programs executable by the processor, includes receiving configuration information for one of a plurality of processes for building a cloud and processing that process based on the received configuration information, wherein each of the plurality of processes includes one or more sub-processes, and wherein processing one of the processes includes determining whether to process a sub-process included in that process based on the configuration information, and processing a sub-process determined to be processed as a result of the determination.
    Type: Application
    Filed: November 9, 2023
    Publication date: September 12, 2024
    Applicant: SAMSUNG SDS CO., LTD.
    Inventors: Misook KIM, Jungchul PARK, Hyengun KIM, Sooyeon YANG, Jongsung YANG
  • Publication number: 20240303134
    Abstract: Managing the resource demand load for edge systems is significantly more complex than for other systems, such as cloud environments. Embodiments herein provide edge resource demand load estimation systems and methods that inform scheduling and associated edge orchestration to ensure that edge system resource capacity is appropriately utilized. Efficient utilization allows an increased number of applications to be deployed at a reduced level of reserved resources. Also presented are embodiments of assurance mechanisms for monitoring edge resource demand load characterizations. In one or more embodiments, when an estimate or estimates are deemed to not be valid (e.g., having experienced stationary drift), updated estimates may be obtained.
    Type: Application
    Filed: August 7, 2023
    Publication date: September 12, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: William Jeffery WHITE, Said TABET
  • Publication number: 20240303135
    Abstract: Embodiments of the present disclosure provide a data transmission method. The data transmission method is applied to an operation chip. The operation chip includes a plurality of nodes of a network on chip (NoC), and the method includes: receiving a data processing instruction of target service data, where the data processing instruction carries information about a receiving node and a processing node set; determining a relay processing node in the processing node set based on the receiving node; and transmitting the target service data from the receiving node to the relay processing node, and transmitting the target service data from the relay processing node to another processing node in the processing node set.
    Type: Application
    Filed: March 8, 2024
    Publication date: September 12, 2024
    Inventors: Huatao Zhao, Shengcheng Wang, Yunfan Li, Lide Duan
  • Publication number: 20240303136
    Abstract: Various examples are directed to systems and methods of generating a user interface in a computing system. The computing system may execute a software application comprising a plurality of source objects. A verticalization layer of the software application may receive a request to modify a terminology used by the software application to render source objects from a first terminology to a second terminology. The verticalization layer may access a verticalization object associated with the first source object and the second terminology to obtain at least one second terminology text string associated with the first source object and the second terminology. The verticalization layer may replace the at least one text string at the verticalization data structure with the at least one second terminology text string.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventor: Thomas Decker
  • Publication number: 20240303137
    Abstract: Systems and methods for content management wherein a client can submit requests to a first API, which forwards the requests to either an IMDB or a gateway to a distributed cluster-computing framework. Requests to the IMDB are serviced and responses from the IMDB are returned to the client. Requests that are forwarded to the gateway are first modified for the distributed cluster-computing framework, and are then parsed by the gateway and used to instantiate processors that generate corresponding requests to the distributed cluster-computing framework. Responsive data from the distributed cluster-computing framework is used to generate responses to the client requests, which are forwarded to the first API, which modifies them to appear as if they were generated by the IMDB. These modified responses are returned by the first API to the client.
    Type: Application
    Filed: May 15, 2024
    Publication date: September 12, 2024
    Inventors: Marc Rodriguez Sierra, Lalith Subramanian, Carles Bayes Martin
  • Publication number: 20240303138
    Abstract: Provided is a method for parallelly processing a call list associated with an artificial intelligence calculation performed by one or more processors, including acquiring an original call list including a plurality of primitive operations, determining a number of accelerators to parallelly process the original call list, creating a plurality of sub call lists based on the determined number of accelerators and the original call list, and transmitting each of the created plurality of sub call lists to each of a plurality of accelerators corresponding to the determined number.
    Type: Application
    Filed: January 25, 2024
    Publication date: September 12, 2024
    Inventors: Gangwon Jo, Jungho Park
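The splitting step of 20240303138 can be sketched as chunking the original call list into one sub call list per accelerator while preserving the order of primitive operations within each chunk. The chunking policy (near-equal contiguous slices) is an assumption; the filing does not specify one.

```python
def split_call_list(call_list, num_accelerators):
    """Chunk the original call list into one sub call list per accelerator,
    keeping primitive-operation order within each chunk."""
    n = len(call_list)
    size, extra = divmod(n, num_accelerators)
    sub_lists, start = [], 0
    for i in range(num_accelerators):
        end = start + size + (1 if i < extra else 0)  # spread the remainder
        sub_lists.append(call_list[start:end])
        start = end
    return sub_lists

subs = split_call_list(["op1", "op2", "op3", "op4", "op5"], num_accelerators=2)
# five primitive operations split across two accelerators, 3 + 2
```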
  • Publication number: 20240303139
    Abstract: The present invention provides a robust and effective solution to an entity or an organization by enabling maximization of the utilization of machine resources by optimally allocating the tasks, such as application programming interfaces (APIs), in the queue using a set of predetermined instructions. The method further enables finding the number of machines needed to fulfil a cumulative service-level agreement (SLA) of the APIs in the queue using heuristics and the set of predetermined instructions.
    Type: Application
    Filed: October 29, 2022
    Publication date: September 12, 2024
    Inventors: Ameya MUNAGEKAR, Akansha KUMAR, Kamlesh DHONDGE, Akhil Patel PATLOLLA, Rajeev GUPTA
  • Publication number: 20240303140
    Abstract: The present disclosure relates to a communication method for a Flutter Web application and a host program, and a computer device. The method includes: acquiring a communication mode between the Flutter Web application and the host program, where the communication mode includes a WebSocket communication mode and/or a JavaScript communication mode; and communicating, according to the communication mode, with the host program through a first communication component, where the first communication component is arranged in the Flutter Web application and realizes information interaction based on a preset information format. The present embodiment makes use of the Flutter framework's ability to set up a channel, so that the Flutter Web application can establish a communication channel with the host program by setting the first communication component.
    Type: Application
    Filed: November 23, 2021
    Publication date: September 12, 2024
    Inventor: Cunqing LI
  • Publication number: 20240303141
    Abstract: A first load cycle of an application is determined to have been completed. A load cycle is where the application has been loaded, executed, and then unloaded. One or more of a first load parameter associated with the first load cycle of the application, a first execution parameter associated with the first load cycle of the application, and a first unload parameter associated with the first load cycle of the application are retrieved and compared to one or more of a second load parameter associated with a second load cycle of the application, a second execution parameter associated with the second load cycle of the application, and a second unload parameter associated with the second load cycle of the application. The comparison can then be used to identify anomalies between load cycles of the application.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Applicant: MICRO FOCUS LLC
    Inventors: Douglas Max Grover, Michael F. Angelo
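The cross-cycle comparison above can be sketched as follows. The parameter names and the 20% relative-deviation threshold are illustrative assumptions, not taken from the patent:

```python
# Hedged sketch: flag load-cycle parameters whose value deviates between
# two cycles by more than a relative tolerance.

def find_anomalies(cycle_a, cycle_b, tolerance=0.2):
    """Return parameter names whose value changed by more than `tolerance`."""
    flagged = []
    for name, a in cycle_a.items():
        b = cycle_b.get(name)
        if b is None:
            continue                       # parameter absent in second cycle
        baseline = abs(a) or 1.0           # avoid division by zero
        if abs(a - b) / baseline > tolerance:
            flagged.append(name)
    return flagged

first = {"load_ms": 120, "exec_ms": 900, "unload_ms": 40}
second = {"load_ms": 125, "exec_ms": 2100, "unload_ms": 41}
anomalies = find_anomalies(first, second)   # exec time more than doubled
```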
  • Publication number: 20240303142
    Abstract: The present disclosure provides a method for controlling a distributed operation system, an apparatus for controlling a distributed operation system, a device, a medium and a program product, which relate to the field of computer application technology, and in particular to the field of distributed operation technology. A specific implementation includes: for a first container carrying a first process, determining a current fault type of a failure in the first container in response to detecting that the first process is triggered to terminate based on the failure in the first container; and reconstructing the first container and restarting the first process based on the reconstructed first container in response to determining that the current fault type is consistent with a target fault type.
    Type: Application
    Filed: June 7, 2022
    Publication date: September 12, 2024
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Shuaijian Wang, Shiyong Li, Henghua Zhang, Panpan Li, Zaibin Hu, Baotong Luo
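The restart decision above reduces to a fault-type match: rebuild the container only when the detected fault type matches a configured target type. A minimal sketch, with hypothetical fault-type names:

```python
# Hedged sketch of the fault-type-gated restart policy. The fault names
# and the target set are illustrative stand-ins.

TARGET_FAULT_TYPES = {"oom_killed", "node_eviction"}

def handle_termination(fault_type):
    """Decide whether a terminated process's container should be rebuilt."""
    if fault_type in TARGET_FAULT_TYPES:
        return "reconstruct_and_restart"
    return "leave_terminated"

decision = handle_termination("oom_killed")
```

Gating on fault type avoids restart loops for failures (e.g. deliberate shutdowns) that reconstruction cannot fix.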
  • Publication number: 20240303143
    Abstract: Systems and methods for detection of persistent faults in processing units and memory have been described. In an illustrative, non-limiting embodiment, a Machine Learning (ML) processor includes one or more registers, and a data moving circuit coupled to the one or more registers. The data moving circuit can be configured to select, based upon a first value stored in the one or more registers, an original one of a plurality of parallel handling circuits within the ML processor to obtain an original data processing result. The data moving circuit can also be configured to select, based upon a second value stored in the one or more registers, an alternative one of the plurality of parallel handling circuits to obtain an alternative data processing result that, upon comparison with the original data processing result, provides an indication of a persistent fault in the ML processor.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Paul Kimelman, Adam Fuks
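The duplicate-and-compare check above has a direct software analogy: run the same work on an "original" and an "alternative" handling unit and flag a persistent fault when their results disagree. Here plain functions stand in for the parallel hardware circuits:

```python
# Hedged sketch: software analogy for persistent-fault detection by
# re-executing the same input on an alternative handling unit.

def check_persistent_fault(original_unit, alternative_unit, data):
    """Return True when the two units disagree on the same input."""
    return original_unit(data) != alternative_unit(data)

healthy = lambda xs: sum(xs)
faulty = lambda xs: sum(xs) + 1   # models a stuck-at style defect

fault_seen = check_persistent_fault(healthy, faulty, [1, 2, 3])   # disagree
ok = check_persistent_fault(healthy, healthy, [1, 2, 3])          # agree
```

A disagreement indicates that at least one of the two units is defective; the register-selected pairing in the patent lets firmware sweep the comparison across all parallel circuits.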
  • Publication number: 20240303144
    Abstract: An electronic device includes a communication interface for, when a predetermined event occurs, transmitting a data signal generated based on data and a signal processing characteristic value to a memory system, and receiving eye diagram information corresponding to the data signal from the memory system; and a signal processing controller for controlling the signal processing characteristic value, based on an interval change value of the eye diagram information.
    Type: Application
    Filed: August 24, 2023
    Publication date: September 12, 2024
    Inventor: Eun Jae OCK
  • Publication number: 20240303145
    Abstract: An apparatus comprises a processing device configured to detect a given issue encountered on a given computing device, to identify a given cluster of computing devices to which the given computing device belongs, and to determine a similarity between the given issue encountered on the given computing device and one or more historical issues encountered on one or more other computing devices belonging to the given cluster. The processing device is also configured to select, based at least in part on the determined similarity between the given issue and the one or more historical issues, a subset of a plurality of components of the given computing device as target components for log collection. The processing device is further configured to collect logs from the target components and to perform remedial actions determined utilizing the collected logs on the given computing device to resolve the given issue.
    Type: Application
    Filed: March 20, 2023
    Publication date: September 12, 2024
    Inventor: Huijuan Fan
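The target-component selection step can be sketched with a naive similarity measure; keyword overlap is an illustrative stand-in for whatever similarity metric the patent actually uses:

```python
# Hedged sketch: pick log-collection targets from components implicated by
# historically similar issues within the device's cluster.

def similar_components(issue, history, min_overlap=2):
    """Components tied to historical issues sharing >= min_overlap keywords."""
    words = set(issue.lower().split())
    targets = set()
    for past_issue, components in history:
        if len(words & set(past_issue.lower().split())) >= min_overlap:
            targets.update(components)
    return targets

history = [
    ("disk write timeout", ["storage", "raid_controller"]),
    ("fan speed sensor failure", ["cooling"]),
]
targets = similar_components("intermittent disk timeout", history)
```

Collecting logs only from the implicated subset, rather than from every component, is what keeps the collection step tractable on large devices.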
  • Publication number: 20240303146
    Abstract: Techniques are provided for detection and mitigation of malfunctioning components in a cluster computing environment. One method comprises obtaining, by a virtual infrastructure monitor, from a cluster monitor, an indication of a malfunctioning component in a cluster computing environment; selecting a virtual infrastructure server type for a replacement virtual infrastructure server based on a type of the malfunctioning component; creating a replacement virtual infrastructure server based on the selected virtual infrastructure server type and properties of a virtual infrastructure server associated with the malfunctioning component; applying settings to the replacement virtual infrastructure server according to rules for the replacement virtual infrastructure server; deploying a replacement component on the replacement virtual infrastructure server; and providing a notification to the cluster monitor of the replacement component and credentials of the replacement component.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Alexander Shteingart, Shoham Levy, Alexander Zvansky
  • Publication number: 20240303147
    Abstract: A system, method, and computer-readable medium for performing a data center management and monitoring operation. The data center management and monitoring operation includes: receiving data center data from a plurality of data center assets within a data center, the data center data comprising event data; assigning the data center data to a vectorized input space; reducing a dimension of the vectorized input space to a latent space, the latent space providing an event model dimension; decoding the latent space to provide a vectorized decoded output space; performing a data center data analytics failure forecasting operation using the vectorized decoded output space; and, performing a data center analytics failure time estimation operation, the data center analytics failure time estimation operation generating data center analytics failure time estimation data using the data center asset failure forecasting data.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Applicant: Dell Products L.P.
    Inventors: Raja Neogi, Khayam Anjam
  • Publication number: 20240303148
    Abstract: The present disclosure relates to the management of artificial intelligence systems by identifying the root cause of reduced performance and/or failure in computing systems, and particularly relates to systems and methods for detecting drift in supervised and unsupervised machine learning (ML) models. The system retrieves a current dataset corresponding to the output of supervised ML models and unsupervised ML models. Further, the system segregates the current dataset based on the requirements of a drift detection model and applies a plurality of drift detection models to the segregated dataset to generate predictive results corresponding to the current dataset. Furthermore, the system determines errors in the predictive results by comparing the predictive results to reference values associated with the current dataset. Additionally, the system detects drift in the supervised ML models and unsupervised ML models based on the determined errors being above a threshold value.
    Type: Application
    Filed: January 19, 2023
    Publication date: September 12, 2024
    Inventors: Udaya Kamala GOSALA, Ranchal PRAKASH, George CHERIAN, Raghuram VELEGA
  • Publication number: 20240303149
    Abstract: Methods and systems for anomaly detection include encoding a time series with a time series encoder and encoding an event sequence with an event sequence encoder. A latent code is generated from outputs of the time series encoder and the event sequence encoder. The time series is reconstructed from the latent code using a time series decoder. The event sequence is reconstructed from the latent code using an event sequence decoder. An anomaly score is determined based on a reconstruction loss of the reconstructed time series and a reconstruction loss of the reconstructed event sequence. An action is performed responsive to the anomaly score.
    Type: Application
    Filed: March 8, 2024
    Publication date: September 12, 2024
    Inventors: Yuncong Chen, Haifeng Chen, LuAn Tang, Zhengzhang Chen
  • Publication number: 20240303150
    Abstract: An apparatus disclosed herein includes memory; computer readable instructions; and programmable circuitry to be programmed by the computer readable instructions to: generate a reclamation recommendation based on a subset of entities eligible for reclamation, the subset of the entities meeting a resource requirement of a failed entity; reconfigure the subset of the entities to reclaim resources of the subset of the entities based on the reclamation recommendation; and execute the failed entity using the reclaimed resources of the subset of the entities.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Devang Dipakbhai Pandya, Krishnamoorthy Balaraman, Rahul Kumar Singh, Gopal Krishna Goalla
  • Publication number: 20240303151
    Abstract: A first set of values reported by an electronic device and not reported by another electronic device over a first period of time that is prior to a firmware release to the electronic device is received. The first set of values is associated with a metric. A set of statistical properties associated with the first set of values is determined. A second set of values reported by the electronic device and not reported by another electronic device over a second period of time that is after the firmware release is received. The second set of values is associated with the metric. A set of statistical properties associated with the second set of values is determined. The set of statistical properties associated with the first set of values and the set of statistical properties associated with the second set of values is compared to detect an anomaly.
    Type: Application
    Filed: September 1, 2023
    Publication date: September 12, 2024
    Applicant: Verkada Inc.
    Inventors: Yu YANG, Hanhong GAO, Han CAO
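The before/after comparison of statistical properties can be sketched as follows. Using mean and standard deviation as the properties, and a three-sigma shift rule, are illustrative assumptions rather than details from the patent:

```python
import statistics

# Hedged sketch: flag an anomaly when the post-firmware-release mean of a
# metric shifts beyond a few pre-release standard deviations.

def firmware_anomaly(before, after, sigmas=3.0):
    """True when the post-release mean shifts beyond `sigmas` baseline stdevs."""
    mean_b = statistics.mean(before)
    stdev_b = statistics.stdev(before) or 1e-9   # guard a zero-variance baseline
    return abs(statistics.mean(after) - mean_b) > sigmas * stdev_b

pre = [50, 52, 49, 51, 50]    # metric values before the release
post = [70, 72, 69, 71, 70]   # clearly shifted after the release
shifted = firmware_anomaly(pre, post)
```

Because the values are reported by one device and not others, a detected shift points at the firmware change on that device rather than at fleet-wide conditions.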
  • Publication number: 20240303152
    Abstract: A central system coupled to a subsystem receives a fault indication, associated with a fault in one or more circuits of the subsystem, from a local fault collection and control system (FCCS) of the subsystem when a software recovery of the fault fails. Based on the received fault indication, the local FCCS and a central FCCS of the central system are masked from additional fault indications from the one or more circuits. The central system then signals the reset of the one or more circuits of the subsystem after the masking of the additional fault indications, wherein the one or more circuits are reset based on the signaling and the additional fault indications are masked from one or more of the local FCCS and the central FCCS during the reset.
    Type: Application
    Filed: May 2, 2023
    Publication date: September 12, 2024
    Inventors: Hemant Nautiyal, Shruti Singla, Rohan Poudel, Shreya Singh, Sandeep Kumar Arya, Bipin Gupta
  • Publication number: 20240303153
    Abstract: An error correction circuit includes a clock delay circuit configured to receive an input clock, delay the input clock by a desired time period to generate a delayed clock, and output one of the input clock and the delayed clock as an output clock in response to a select signal; an error detection circuit configured to receive the output clock and input data, generate output data and latch data based on the output clock and the input data, and detect a margin error based on the output data and the latch data; and a control circuit configured to correct the detected margin error, the correcting of the margin error including adjusting a level of the select signal based on whether the margin error has been detected.
    Type: Application
    Filed: September 22, 2023
    Publication date: September 12, 2024
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jeong Hoan PARK, Yeon Soo KWON, Hancheon YUN, Jungyu LEE, Jaeseung JEONG
  • Publication number: 20240303154
    Abstract: A Functional Safety Counter Module is provided, comprising input circuitry, test circuitry, a first microcontroller including a first hardware counter, a second hardware counter, and a first storage device that stores a first firmware algorithm code to execute a counter pattern test in order to detect a short/open input signal and/or a failure in the counting capability of the first microcontroller, and a second microcontroller including a third hardware counter, a fourth hardware counter, and a second storage device that stores a second firmware algorithm code. The first and second firmware algorithm codes are configured to resynchronize and restore, respectively, a first counter or a second counter after the counter pattern test, and are configured to detect an offset and adjust during a resynchronization process to account for the offset; to resynchronize successfully, two separate resynchronization algorithm codes are used depending on an input frequency of the counter signals input to the four hardware counters.
    Type: Application
    Filed: March 11, 2021
    Publication date: September 12, 2024
    Applicant: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Jeffrey Howe, William Keith Bryant, Steven Parfitt, Steven M. Hausman, Thomas Brian Hartley
  • Publication number: 20240303155
    Abstract: Provided is a fault tolerance method which is performed by one or more processors, and which includes receiving an application execute command, executing a main process of an application in response to the execute command, receiving, by a split execution module, information on a plurality of devices associated with the execution of the application from an orchestrator, executing, by the split execution module, a sub-process for each of the plurality of devices using the information on the plurality of devices, and performing, by the split execution module, fault tolerance associated with the execution of the application using an idle device, if a failure occurs in at least some of the plurality of devices.
    Type: Application
    Filed: March 6, 2024
    Publication date: September 12, 2024
    Inventors: Gangwon Jo, Jungho Park
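The idle-device failover step above can be sketched as a reassignment of the failed device's sub-process to a spare from an idle pool. Device and sub-process names are hypothetical:

```python
# Hedged sketch: move a sub-process off a failed device onto an idle one.

def failover(assignments, idle_devices, failed_device):
    """Move work off a failed device onto an idle one, if any is available."""
    if failed_device not in assignments.values() or not idle_devices:
        return assignments                    # nothing to do, or no spare
    spare = idle_devices.pop()
    return {proc: (spare if dev == failed_device else dev)
            for proc, dev in assignments.items()}

plan = {"sub0": "gpu0", "sub1": "gpu1"}
recovered = failover(plan, ["gpu2"], failed_device="gpu1")
```

In the patent the split execution module also re-executes the moved sub-process; the sketch covers only the reassignment itself.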
  • Publication number: 20240303156
    Abstract: Data storage circuitry has entries to store data according to a data storage technology supporting non-destructive reads, each entry associated with an error checking code (ECC) and age indication. Scrubbing circuitry performs a patrol scrubbing cycle to visit each entry of the data storage circuitry within a scrubbing period. On a given visit to a given entry, the scrubbing operation comprises determining, based on the age indication associated with the given entry, whether a check-not-required period has elapsed for the given entry, and if so performing an error check on the data of the given entry using the ECC for that entry. The error check is omitted if the check-not-required period has not yet elapsed. The check-not-required period is restarted for a write target entry in response to a request causing an update to the data and the error checking code of the write target entry.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Andrew David TUNE, Cyrille Nicolas DRAY
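The age-gated scrubbing visit above can be sketched in software: each entry records a timestamp, and a visit performs the (expensive) error check only once the check-not-required period has elapsed. Parity stands in for the real ECC, and the tick values are illustrative:

```python
# Hedged sketch of age-gated patrol scrubbing. A single parity bit stands
# in for the ECC; "age" holds the logical time of the last write/check.

CHECK_NOT_REQUIRED = 10   # logical ticks; illustrative value

def scrub_visit(entry, now):
    """Perform the ECC check only if the check-not-required period elapsed.

    Returns "skipped", "ok", or "error".
    """
    if now - entry["age"] < CHECK_NOT_REQUIRED:
        return "skipped"                       # still within the quiet period
    entry["age"] = now                         # restart the period
    ok = entry["ecc"] == sum(entry["data"]) % 2  # parity check as stand-in ECC
    return "ok" if ok else "error"

e = {"data": [1, 0, 1, 1], "ecc": 1, "age": 0}
skipped = scrub_visit(e, now=5)    # too soon: check omitted
checked = scrub_visit(e, now=12)   # period elapsed: parity verified
```

Skipping recently written or recently checked entries is what lets the patrol cycle cover every entry within the scrubbing period at reduced cost.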
  • Publication number: 20240303157
    Abstract: Methods, systems, and devices for memory die fault detection using a calibration pin are described. A memory device may perform a calibration procedure on a first resistor of each of a set of memory dies of a memory module using a pin coupled with the memory module. The memory device may couple the pin to a second resistor of a memory die of the set of memory dies based on the memory die identifying a fault condition for the memory die executing one or more of multiple commands from the host device. The memory device may receive, from the host device, a command to read a register of one or more memory dies of the set of memory dies and may output, to the host device, an indication of the memory die that identified the fault condition based on coupling the pin to the second resistor.
    Type: Application
    Filed: February 22, 2024
    Publication date: September 12, 2024
    Inventors: Scott E. Schaefer, Paul A. Laberge