Process Scheduling Patents (Class 718/102)
-
Patent number: 11829799
Abstract: A method, a structure, and a computer system for predicting pipeline training requirements. The exemplary embodiments may include receiving one or more worker node features from one or more worker nodes, extracting one or more pipeline features from one or more pipelines to be trained, and extracting one or more dataset features from one or more datasets used to train the one or more pipelines. The exemplary embodiments may further include predicting an amount of one or more resources required for each of the one or more worker nodes to train the one or more pipelines using the one or more datasets based on one or more models that correlate the one or more worker node features, one or more pipeline features, and one or more dataset features with the one or more resources. Lastly, the exemplary embodiments may include identifying a worker node requiring a least amount of the one or more resources of the one or more worker nodes for training the one or more pipelines.
Type: Grant
Filed: October 13, 2020
Date of Patent: November 28, 2023
Assignee: International Business Machines Corporation
Inventors: Saket Sathe, Gregory Bramble, Long Vu, Theodoros Salonidis
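The selection step described above reduces to an argmin over per-worker predictions. A minimal sketch, where the feature names and the stub linear "model" are illustrative assumptions rather than anything taken from the patent:

```python
# Sketch of the worker-selection step in patent 11829799: given per-worker
# resource predictions from some model, pick the worker node predicted to
# need the least resources. Feature names and the stub predictor are invented.

def predict_resources(worker_features, pipeline_features, dataset_features):
    """Stub predictor: a weighted sum standing in for a trained model."""
    return (2.0 * pipeline_features["n_stages"]
            + 0.5 * dataset_features["rows_millions"]
            - 1.0 * worker_features["free_cpus"])

def pick_worker(workers, pipeline_features, dataset_features):
    """Return the worker whose predicted resource requirement is lowest."""
    return min(
        workers,
        key=lambda w: predict_resources(w["features"], pipeline_features, dataset_features),
    )

workers = [
    {"name": "node-a", "features": {"free_cpus": 2}},
    {"name": "node-b", "features": {"free_cpus": 8}},
]
best = pick_worker(workers, {"n_stages": 3}, {"rows_millions": 10})
```

In a real system the stub predictor would be replaced by the learned models the abstract describes; only the argmin over workers is structural.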
-
Patent number: 11829780
Abstract: A system may include a cluster and a module of the cluster. The module may include a user resource definition and a catalog server. The catalog server may maintain a configuration of the cluster.
Type: Grant
Filed: September 22, 2021
Date of Patent: November 28, 2023
Assignee: International Business Machines Corporation
Inventors: Ning Ding, Yongjie Gong, Yao Zhou, Ke Zhao Li, Dan Dan Wang
-
Patent number: 11822961
Abstract: A method includes: receiving a user event to be processed; storing the user event into an event queue corresponding to an event attribute of the user event, where user events with different event attributes correspond to different event queues; reading the user event from the event queue through multiple processes and processing it; and deleting the processed user event from the event queue.
Type: Grant
Filed: March 26, 2021
Date of Patent: November 21, 2023
Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
Inventors: Fuye Wang, Xiaobing Mao, Zenghui Liu
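The read-process-delete ordering is the interesting part of this scheme: an event leaves its queue only after processing succeeds. A minimal single-process sketch, with invented attribute names:

```python
# Sketch of the queueing scheme in patent 11822961: events are stored in a
# queue keyed by their attribute, handlers drain the queues, and an event is
# deleted only after it has been processed. Attribute names are illustrative.
from collections import defaultdict, deque

queues = defaultdict(deque)

def enqueue(event):
    queues[event["attribute"]].append(event)

def process_one(attribute, handler):
    """Read, process, then delete the event from its queue."""
    queue = queues[attribute]
    if not queue:
        return None
    event = queue[0]          # read without removing
    result = handler(event)   # process
    queue.popleft()           # delete only after processing succeeds
    return result

enqueue({"attribute": "click", "payload": 1})
enqueue({"attribute": "scroll", "payload": 2})
out = process_one("click", lambda e: e["payload"] * 10)
```

If the handler raises, the event stays at the head of its queue, which is what makes the delete-after-processing order safe; the multi-process variant in the abstract would need a shared queue (e.g. a broker) instead of in-memory deques.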
-
Patent number: 11816713
Abstract: A merchant may use an e-commerce platform to sell products to customers on an online store. The merchant may have more than one online store, each with its own separate inventory, orders, domain name (or subdomain), currency, etc. A computer-implemented system and method are provided that allow the merchant to build workflows to automate tasks at the organizational level, i.e. workflows that can incorporate triggers, conditions, and/or actions from and across the different online stores that belong to the merchant.
Type: Grant
Filed: October 24, 2019
Date of Patent: November 14, 2023
Assignee: SHOPIFY INC.
Inventors: Hanan Ayad, Stanislav Korsei
-
Patent number: 11809219
Abstract: A method for executing instructions embedded in two threads stored in a system including two operating units and a virtual managing entity for holding queues for virtual objects (VO) waiting to use a respective operating unit and diverting them between queues. Each VO is associated with two virtual timers, one measuring a time period during which the VO is held in the queue (TIQ) and the other providing a time period during which the VO will remain in an alive state (TTL). The method includes receiving information relating to VOs associated with the two threads; operating on VOs for which their TTLs have expired; upon emerging from its respective queue, determining whether each VO should be diverted to another queue; upon diverting the VO, resetting its TIQ timer; and allocating an access time to each VO based on a number of threads requiring that VO and the TIQ associated therewith.
Type: Grant
Filed: June 18, 2019
Date of Patent: November 7, 2023
Assignee: DRIVENETS LTD.
Inventors: Ori Zakin, Amir Krayden, Or Sadeh, Yuval Lev
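The two timers can be sketched as plain bookkeeping: TTL is a fixed deadline, TIQ restarts whenever the VO is diverted to another queue. The access-time formula below is an illustrative assumption, not the patent's:

```python
# Minimal sketch of the virtual-object timers in patent 11809219: each VO
# carries a time-in-queue (TIQ) timer, reset on diversion to another queue,
# and a time-to-live (TTL) deadline. The access-time policy is invented.
import time

class VirtualObject:
    def __init__(self, ttl_s):
        self.deadline = time.monotonic() + ttl_s   # TTL expiry point
        self.enqueued_at = time.monotonic()        # start of TIQ

    def ttl_expired(self):
        return time.monotonic() >= self.deadline

    def tiq(self):
        return time.monotonic() - self.enqueued_at

    def divert(self):
        """Moving the VO to another queue resets its TIQ timer."""
        self.enqueued_at = time.monotonic()

def access_time(vo, n_threads, base_s=0.01):
    # Assumed policy: longer waits and more demanding threads earn more time.
    return base_s * n_threads * (1.0 + vo.tiq())

vo = VirtualObject(ttl_s=60)
slot = access_time(vo, n_threads=4)
```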
-
Patent number: 11809978
Abstract: An apparatus to facilitate workload scheduling is disclosed. The apparatus includes one or more clients, one or more processing units to process workloads received from the one or more clients, including hardware resources and scheduling logic to schedule direct access of the hardware resources to the one or more clients to process the workloads.
Type: Grant
Filed: April 18, 2022
Date of Patent: November 7, 2023
Assignee: Intel Corporation
Inventors: Liwei Ma, Nadathur Rajagopalan Satish, Jeremy Bottleson, Farshad Akhbari, Eriko Nurvitadhi, Chandrasekaran Sakthivel, Barath Lakshmanan, Jingyi Jin, Justin E. Gottschlich, Michael Strickland
-
Patent number: 11809266
Abstract: Failure impact analysis (or "impact analysis") is a process that involves identifying effects that may or will result from a network event. In one example, this disclosure describes a method that includes generating, by a control system managing a resource group, a resource graph that models resource and event dependencies between a plurality of resources within the resource group; detecting, by the control system, a first event affecting a first resource of the plurality of resources, wherein the first event is a network event; and identifying, by the control system and based on the dependencies modeled by the resource graph, a second resource that is expected to be affected by the first event.
Type: Grant
Filed: February 22, 2022
Date of Patent: November 7, 2023
Assignee: Juniper Networks, Inc.
Inventors: Jayanthi R, Javier Antich, Chandrasekhar A
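Identifying affected resources from a dependency graph is essentially a graph traversal from the failed resource. A minimal sketch with an invented network topology:

```python
# Sketch of the impact analysis in patent 11809266: model resource
# dependencies as a graph, then walk it from the resource hit by a network
# event to find every resource expected to be affected. The topology is
# illustrative.
from collections import deque

# dependents[x] = resources that depend directly on x
dependents = {
    "link-1": ["router-1"],
    "router-1": ["vpn-tunnel-1", "bgp-session-1"],
    "vpn-tunnel-1": [],
    "bgp-session-1": [],
}

def impacted(resource):
    """Breadth-first walk over the dependency graph from the failed resource."""
    seen, frontier = set(), deque([resource])
    while frontier:
        current = frontier.popleft()
        for dep in dependents.get(current, []):
            if dep not in seen:
                seen.add(dep)
                frontier.append(dep)
    return seen

affected = impacted("link-1")
```

BFS (rather than DFS) also gives a natural "blast radius by distance" ordering if the hops are recorded.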
-
Patent number: 11803391
Abstract: Devices and techniques for threads in a programmable atomic unit to self-schedule are described herein. When it is determined that an instruction will not complete within a threshold prior to insertion into a pipeline of the processor, a thread identifier (ID) can be passed with the instruction. Here, the thread ID corresponds to a thread of the instruction. When a response to completion of the instruction is received that includes the thread ID, the thread is rescheduled using the thread ID in the response.
Type: Grant
Filed: October 20, 2020
Date of Patent: October 31, 2023
Assignee: Micron Technology, Inc.
Inventor: Tony Brewer
-
Patent number: 11797284
Abstract: A processor may receive a composable deployer comma-separated values (CSV) file. The processor may parse the composable deployer CSV file. The processor may determine if there is a composable deployer foundation template. The processor may install a resource. The resource to install may be associated with the composable deployer foundation template.
Type: Grant
Filed: July 22, 2021
Date of Patent: October 24, 2023
Assignee: International Business Machines Corporation
Inventors: Guang Ya Liu, Xun Pan, Hai Hui Wang, Peng Li, Xiang Zhen Gan
-
Patent number: 11797339
Abstract: Systems and methods for maintaining data objects include receiving an event in a queue indicating a change to a data source; obtaining data corresponding to the event from the data source; determining that a monitored item condition defined in a workflow is satisfied based on the data corresponding to the event; generating a data object responsive to the monitored item condition being satisfied; identifying, using a mapping between fields and triggers generated based on the workflow, a trigger defined in the workflow that uses a first field of one or more fields; determining that the value of the first field satisfies a trigger condition of the trigger; and performing, responsive to determining that the value satisfies the trigger condition, an action corresponding to the trigger defined in the workflow.
Type: Grant
Filed: March 31, 2023
Date of Patent: October 24, 2023
Assignee: TONKEAN, INC.
Inventors: Sagi Eliyahu, Offir Talmor
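The field-to-trigger mapping amounts to a dispatch table: look up the triggers registered for each field, test their conditions against the object's value, and run the matching actions. A minimal sketch with invented field and trigger names:

```python
# Sketch of the field-to-trigger mapping in patent 11797339: a workflow maps
# fields to triggers; when a data object's field value satisfies a trigger's
# condition, the trigger's action runs. Field names and actions are invented.
triggers = {
    "status": [
        {
            "condition": lambda value: value == "overdue",
            "action": lambda obj: obj.update(escalated=True),
        },
    ],
}

def apply_triggers(data_object):
    """Run every trigger whose condition matches the object's field value."""
    for field, field_triggers in triggers.items():
        value = data_object.get(field)
        for trigger in field_triggers:
            if trigger["condition"](value):
                trigger["action"](data_object)
    return data_object

obj = apply_triggers({"status": "overdue"})
```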
-
Patent number: 11789788
Abstract: In one implementation, systems and methods are provided for processing digital experience information. A computer-implemented system for processing digital experience information may comprise a central data location. The central data location may comprise a connector that may be configured to receive information belonging to a category from an information source; an event backbone that may be configured to route the information received by the connector based on the category; a translator that may be configured to transform the received information into a common data model; and a database that may be configured to store the received information. The event backbone may be further configured to send information to the connector from the event backbone and the database based on one or more criteria.
Type: Grant
Filed: November 18, 2020
Date of Patent: October 17, 2023
Assignee: The PNC Financial Services Group, Inc.
Inventor: Michael Nitsopoulos
-
Patent number: 11789895
Abstract: Embodiments described herein provide an on-chip heterogeneous Artificial Intelligence (AI) processor comprising at least two different architectural types of computation units, wherein each of the computation units is associated with a respective task queue configured to store computation subtasks to be executed by the computation unit. The AI processor also comprises a controller configured to partition a received computation graph associated with a neural network into a plurality of computation subtasks according to a preset scheduling strategy and distribute the computation subtasks to the task queues of the computation units. The AI processor further comprises a storage unit configured to store data required by the computation units to execute their respective computation subtasks and an access interface configured to access an off-chip memory. Different application tasks are processed by managing and scheduling the different architectural types of computation units in an on-chip heterogeneous manner.
Type: Grant
Filed: March 9, 2020
Date of Patent: October 17, 2023
Assignee: SHANGHAI DENGLIN TECHNOLOGIES CO., LTD.
Inventors: Ping Wang, Jianwen Li
-
Patent number: 11789782
Abstract: Systems, devices, and methods discussed herein are directed to intelligently adjusting the set of worker nodes within a computing cluster. By way of example, a computing device (or service) may monitor performance metrics of a set of worker nodes of a computing cluster. When a performance metric is detected that is below a performance threshold, the computing device may perform a first adjustment (e.g., an increase or decrease) to the number of nodes in the cluster. Training data may be obtained based at least in part on the first adjustment and utilized with supervised learning techniques to train a machine-learning model to predict future performance changes in the cluster. Subsequent performance metrics and/or cluster metadata may be provided to the machine-learning model to obtain output indicating a predicted performance change. An additional adjustment to the number of worker nodes may be performed based at least in part on the output.
Type: Grant
Filed: January 26, 2023
Date of Patent: October 17, 2023
Assignee: Oracle International Corporation
Inventors: Sandeep Akinapelli, Devaraj Das, Devarajulu Kavali, Puneet Jaiswal, Velimir Radanovic
-
Patent number: 11789876
Abstract: A device including an interface with peripherals includes a first interface that receives a request from a host, a second interface that periodically receives at least one first sample input from the peripherals in response to the request from the host, a memory that stores an active time table including a processing time of a sample input provided by each of the peripherals in each of a plurality of operating conditions respectively corresponding to different power consumptions, and a processing circuit that identifies at least one of the plurality of operating conditions based on the active time table and a period of the at least one first sample input.
Type: Grant
Filed: March 4, 2021
Date of Patent: October 17, 2023
Inventors: Boojin Kim, Sukmin Kang, Shinkyu Park, Boyoung Kim, Sukwon Ryoo
-
Method and apparatus for improvements to moving picture experts group network based media processing
Patent number: 11782751
Abstract: A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP) may include obtaining, from an NBMP source, a workflow having a workflow descriptor (WD) indicating a workflow descriptor document (WDD); based on the workflow, obtaining a task having a task descriptor (TD) indicating a task descriptor document (TDD); based on the task, obtaining, from a function repository, a function having a function descriptor (FD) indicating a function descriptor document (FDD); and processing the media content, using the workflow, the task, and the function.
Type: Grant
Filed: April 14, 2020
Date of Patent: October 10, 2023
Assignee: TENCENT AMERICA LLC
Inventor: Iraj Sodagar
-
Patent number: 11782757
Abstract: A machine learning network is implemented by executing a computer program of instructions on a machine learning accelerator (MLA) comprising a plurality of interconnected storage elements (SEs) and processing elements (PEs). The instructions are partitioned into blocks, which are retrieved from off-chip memory. The block includes a set of deterministic instructions (MLA instructions) to be executed by on-chip storage elements and/or processing elements according to a static schedule from a compiler. The MLA instructions may require data retrieved from off-chip memory by memory access instructions contained in prior blocks. The compiler also schedules the memory access instructions in a manner that avoids contention for access to the off-chip memory. By avoiding contention, the execution time of off-chip memory accesses becomes predictable enough and short enough that the memory access instructions may be scheduled so that they are known to complete before the retrieved data is required.
Type: Grant
Filed: May 7, 2021
Date of Patent: October 10, 2023
Assignee: SiMa Technologies, Inc.
Inventor: Reed Kotler
-
Patent number: 11782760
Abstract: A method for executing applications in a system comprising general hardware and reconfigurable hardware includes accessing a first execution file comprising metadata storing a first priority indicator associated with a first application, and a second execution file comprising metadata storing a second priority indicator associated with a second application. In an example, use of the reconfigurable hardware is interleaved between the first application and the second application, and the interleaving is scheduled to take into account (i) workload of the reconfigurable hardware and (ii) the first priority indicator and the second priority indicator associated with the first application and the second application, respectively. In an example, when the reconfigurable hardware is used by one of the first and second applications, the general hardware is used by another of the first and second applications.
Type: Grant
Filed: February 25, 2021
Date of Patent: October 10, 2023
Assignee: SambaNova Systems, Inc.
Inventors: Anand Misra, Arnav Goel, Qi Zheng, Raghunath Shenbagam, Ravinder Kumar
-
Patent number: 11768718
Abstract: In one implementation, systems and methods are provided for processing digital experience information. A computer-implemented system for processing digital experience information may comprise a central data location. The central data location may comprise a connector that may be configured to receive information belonging to a category from an information source; an event backbone that may be configured to route the information received by the connector based on the category; a translator that may be configured to transform the received information into a common data model; and a database that may be configured to store the received information. The event backbone may be further configured to send information to the connector from the event backbone and the database based on one or more criteria.
Type: Grant
Filed: November 18, 2020
Date of Patent: September 26, 2023
Assignee: The PNC Financial Services Group, Inc.
Inventor: Michael Nitsopoulos
-
Patent number: 11755360
Abstract: A computer-implemented method for detecting bottlenecks in microservice cloud systems is provided, including identifying a plurality of nodes within one or more clusters associated with a plurality of containers, collecting thread profiles and network connectivity data by periodically dumping stacks of threads and identifying network connectivity status of one or more containers of the plurality of containers, classifying the stacks of threads based on a plurality of thread states, constructing a microservice dependency graph from the network connectivity data, aligning the plurality of nodes to bar graphs to depict an average number of working threads in a corresponding microservice, and generating, on a display, an illustration outlining the plurality of thread states, each of the plurality of thread states having a different representation.
Type: Grant
Filed: July 14, 2021
Date of Patent: September 12, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Tatsushi Inagaki, Yohei Ueda, Tatsuhiro Chiba, Marcelo Carneiro Do Amaral, Sunyanan Choochotkaew, Qi Zhang
-
Patent number: 11748160
Abstract: Load balancing processes are performed in an observability pipeline system comprising a plurality of computing resources. In some aspects, the observability pipeline system defines a leader role and worker roles. A plurality of computing jobs each include computing tasks associated with event data. The leader role dispatches the computing tasks to the worker roles according to a least in-flight task dispatch criteria, which includes iteratively: identifying an available worker role; identifying one or more incomplete computing jobs; selecting, from the one or more incomplete computing jobs, a computing job that has the least number of in-flight computing tasks currently being executed in the observability pipeline system; identifying a next computing task from the selected computing job; and dispatching the next computing task to the available worker role.
The worker roles execute the computing tasks by applying an observability pipeline process to the event data associated with the respective computing task.
Type: Grant
Filed: June 14, 2021
Date of Patent: September 5, 2023
Assignee: Cribl, Inc.
Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito, Clint Sharp
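The least in-flight dispatch criteria is a small loop: among incomplete jobs, pick the one with the fewest tasks currently executing and hand over its next task. A minimal sketch, where the job dictionaries are an illustrative assumption about bookkeeping the leader might keep:

```python
# Sketch of the least in-flight dispatch criteria in patent 11748160: the
# leader selects, among incomplete jobs, the job with the fewest in-flight
# tasks and dispatches its next pending task. Job structure is invented.

def dispatch_next(jobs):
    """Return (job name, task) chosen by least in-flight tasks, or None."""
    incomplete = [job for job in jobs if job["pending"]]
    if not incomplete:
        return None
    job = min(incomplete, key=lambda j: j["in_flight"])
    task = job["pending"].pop(0)   # next task of the selected job
    job["in_flight"] += 1          # now executing in the pipeline
    return job["name"], task

jobs = [
    {"name": "job-a", "pending": ["a1", "a2"], "in_flight": 3},
    {"name": "job-b", "pending": ["b1"], "in_flight": 1},
]
choice = dispatch_next(jobs)
```

The effect is that no single large job can monopolize the workers: a job only accumulates in-flight tasks while every other job has at least as many.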
-
Patent number: 11740793
Abstract: A data storage system having non-volatile media, a buffer memory, a processing device, and a data pre-fetcher. The data pre-fetcher receives commands to be executed in the data storage system, provides the commands as input to a predictive model, and obtains at least one command identified for pre-fetching as output from the predictive model. Prior to the command being executed in the data storage device, the data pre-fetcher retrieves, from the non-volatile memory, at least a portion of data to be used in execution of the command, and stores the portion of data in the buffer memory. The retrieving and storing of the portion of the data can be performed concurrently with the execution of many commands before the execution of the command, to reduce the latency impact of the command on other commands that are executed concurrently with the execution of the command.
Type: Grant
Filed: November 3, 2020
Date of Patent: August 29, 2023
Assignee: Micron Technology, Inc.
Inventors: Alex Frolikov, Zachary Andrew Pete Vogel, Joe Gil Mendes, Chandra Mouli Guda
-
Patent number: 11734091
Abstract: A remote procedure call channel for interprocess communication in a managed code environment ensures thread-affinity on both sides of an interprocess communication. Using the channel, calls from a first process to a second process are guaranteed to run on a same thread in a target process. Furthermore, calls from the second process back to the first process will also always execute on the same thread. An interprocess communication manager that allows thread affinity and reentrancy is able to correctly keep track of the logical thread of execution so calls are not blocked in unmanaged hosts. Furthermore, both unmanaged and managed hosts are able to make use of transparent remote call functionality provided by an interprocess communication manager for the managed code environment.
Type: Grant
Filed: December 1, 2020
Date of Patent: August 22, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jackson M. Davis, John A. Shepard
-
Patent number: 11736413
Abstract: Example methods and systems for a programmable virtual network interface controller (VNIC) to perform packet processing are described. In one example, the programmable VNIC may modify a packet processing pipeline based on the instruction. The modification may include injecting a second packet processing stage among the multiple first packet processing stages of the packet processing pipeline. In response to detecting an ingress packet that requires processing by the programmable VNIC, the ingress packet may be steered towards the modified packet processing pipeline. The ingress packet may then be processed using the modified packet processing pipeline by performing the second packet processing stage (a) to bypass at least one of the multiple first processing stages, or (b) in addition to the multiple first processing stages.
Type: Grant
Filed: January 15, 2021
Date of Patent: August 22, 2023
Assignee: VMWARE, INC.
Inventors: Yong Wang, Boon Seong Ang, Wenyi Jiang, Guolin Yang
-
Patent number: 11734066
Abstract: Generally discussed herein are devices, systems, and methods for scheduling tasks to be completed by resources. A method can include identifying features of the task, the features including a time-dependent feature and a time-independent feature, the time-dependent feature indicating a time the task is more likely to be successfully completed by the resource, converting the features to feature values based on a predefined mapping of features to feature values in a first memory device, determining, by a gradient boost tree model and based on a first current time and the feature values, a likelihood the resource will successfully complete the task, and scheduling the task to be performed by the resource based on the determined likelihood.
Type: Grant
Filed: January 8, 2020
Date of Patent: August 22, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jinchao Li, Yu Wang, Karan Srivastava, Jianfeng Gao, Prabhdeep Singh, Haiyuan Cao, Xinying Song, Hui Su, Jaideep Sarkar
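The flow above is: map features to values via a predefined table, score the values with a model, and schedule when the likelihood clears a threshold. A minimal sketch, in which the feature mapping, the threshold, and the stub scorer (standing in for the gradient boost tree model) are all illustrative assumptions:

```python
# Sketch of the scheduling flow in patent 11734066: features are converted
# to feature values via a predefined mapping, a model estimates the
# likelihood of success, and the task is scheduled if the likelihood is
# high enough. Mapping, threshold, and the stub scorer are invented.
FEATURE_MAP = {
    ("hour_of_day", 10): 0.9,   # time-dependent feature: mid-morning works well
    ("hour_of_day", 3): 0.2,    # ...3 a.m. does not
    ("channel", "email"): 0.7,  # time-independent feature
}

def feature_values(features):
    """Look up each (name, value) pair; unknown pairs get a neutral 0.5."""
    return [FEATURE_MAP.get((name, value), 0.5) for name, value in features.items()]

def likelihood(values):
    """Stub scorer: mean of feature values, in place of a trained GBM."""
    return sum(values) / len(values)

def should_schedule(features, threshold=0.6):
    return likelihood(feature_values(features)) >= threshold

ok = should_schedule({"hour_of_day": 10, "channel": "email"})
```

In practice the stub scorer would be a trained gradient boosted tree (e.g. LightGBM or XGBoost); only the map-score-threshold pipeline is the point here.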
-
Patent number: 11726936
Abstract: A system can include a plurality of processors. Each processor of the plurality of processors can be configured to execute program code. The system can include a direct memory access system configured for multi-processor operation. The direct memory access system can include a plurality of data engines coupled to a plurality of interfaces via a plurality of switches. The plurality of switches can be programmable to couple different ones of the plurality of data engines to different ones of the plurality of processors for performing direct memory access operations based on a plurality of host profiles corresponding to the plurality of processors.
Type: Grant
Filed: December 3, 2021
Date of Patent: August 15, 2023
Inventors: Chandrasekhar S. Thyamagondlu, Darren Jue, Ravi Sunkavalli, Akhil Krishnan, Tao Yu, Kushagra Sharma
-
Patent number: 11720156
Abstract: An electronic device includes a connection unit including a first terminal for receiving power from a power supply apparatus and a second terminal for receiving power supply capability of the power supply apparatus, a communication control unit that performs communication with the power supply apparatus via the second terminal, and a power control unit that performs a process for limiting power supplied from the power supply apparatus to a predetermined power or less in a case where the power supply capability is received from the power supply apparatus.
Type: Grant
Filed: July 22, 2020
Date of Patent: August 8, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventors: Yuki Tsujimoto, Hiroki Kitanosako, Masashi Yoshida
-
Patent number: 11709467
Abstract: A time optimal speed planning method and system based on constraint classification. The method comprises: reading path information and carrying out curve fitting to obtain a path curve; sampling the path curve, and considering static constraint to obtain a static upper bound value of a speed curve; considering dynamic constraint, and combining the static upper bound value of the speed curve to construct a time optimal speed model; carrying out convex transformation on the time optimal speed model to obtain a convex model; and solving the convex model based on a quadratic sequence planning method to obtain a final speed curve. The system comprises: a path curve module, a static constraint module, a dynamic constraint module, a model transformation module and a solving module.
Type: Grant
Filed: November 22, 2022
Date of Patent: July 25, 2023
Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
Inventors: Jian Gao, Guixin Zhang, Lanyu Zhang, Haixiang Deng, Yun Chen, Yunbo He, Xin Chen
-
Patent number: 11709718
Abstract: A barrier synchronization circuit that performs barrier synchronization of a plurality of processes executed in parallel by a plurality of processing circuits. The barrier synchronization circuit includes a first determination circuit configured to determine whether the number of first processing circuits among the plurality of the processing circuits is equal to or greater than a first threshold value, the first processing circuits having completed the process, and an instruction circuit configured to instruct a second processing circuit among the plurality of the processing circuits to forcibly stop the process when it is determined that the number is equal to or greater than the first threshold value by the first determination circuit, the second processing circuit having not completed the process.
Type: Grant
Filed: September 8, 2020
Date of Patent: July 25, 2023
Assignee: FUJITSU LIMITED
Inventors: Kanae Nakagawa, Masaki Arai, Yasumoto Tomita
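Stripped of the circuitry, the decision rule is simple: once the count of completed workers reaches the threshold, the remaining stragglers are told to stop so the barrier can release. A pure-bookkeeping sketch (no hardware, no real threads):

```python
# Sketch of the barrier logic in patent 11709718: when the number of
# processing circuits that completed reaches a threshold, the circuits that
# have not completed are forcibly stopped. Worker IDs are just integers here.
class ThresholdBarrier:
    def __init__(self, total, threshold):
        self.total = total
        self.threshold = threshold
        self.completed = set()

    def complete(self, worker_id):
        """Record that a worker reached the barrier."""
        self.completed.add(worker_id)

    def stragglers_to_stop(self):
        """Workers to force-stop once enough peers have completed."""
        if len(self.completed) < self.threshold:
            return set()   # threshold not reached: keep waiting
        return set(range(self.total)) - self.completed

barrier = ThresholdBarrier(total=4, threshold=3)
for worker in (0, 1, 2):
    barrier.complete(worker)
to_stop = barrier.stragglers_to_stop()
```

The trade-off this encodes: the barrier releases early at the cost of discarding (or deferring) the stragglers' work, which is acceptable for workloads such as speculative or redundant computation.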
-
Patent number: 11709667
Abstract: In a symmetric hardware accelerator system, an initial hardware accelerator is selected for an upgrade of firmware. The initial and other hardware accelerators handle workloads that have been balanced across the hardware accelerators. Workloads are rebalanced by directing workloads having low CPU utilization to the initial hardware accelerator. A CPU fallback is conducted of the workloads of the initial hardware accelerator to the CPU. While the CPU is handling the workloads, firmware of the initial hardware accelerator is upgraded.
Type: Grant
Filed: June 14, 2021
Date of Patent: July 25, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Tao Chen, Yong Zou, Ran Liu
-
Patent number: 11704249
Abstract: Aspects of a storage device including a memory and a controller are provided. The controller may receive a prefetch request to retrieve data for a host having a promoted stream. The controller may access a frozen time table indicating hosts for which data has been prefetched and frozen times associated with the host and other hosts. The controller can determine whether the host has a higher priority over other hosts included in the frozen time table based on corresponding frozen times and data access parameters associated with the host. The controller may determine to prefetch the data for the host in response to the prefetch request when the host has a higher priority than the other hosts. The controller can receive a host read command associated with the promoted stream from the host and provide the prefetched data to the host in response to the host read command.
Type: Grant
Filed: June 22, 2021
Date of Patent: July 18, 2023
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Adarsh Sreedhar, Ramanathan Muthiah
-
Patent number: 11704153
Abstract: A system for storing and extracting elements according to their priority takes into account not only the priorities of the elements but also three additional parameters, namely, a priority resolution and two priority limits, p_min and p_max. By allowing an ordering error if the difference in the priorities of elements is within the priority resolution, an improvement in performance is achieved.
Type: Grant
Filed: June 23, 2021
Date of Patent: July 18, 2023
Assignee: Reservoir Labs, Inc.
Inventor: Jordi Ros-Giralt
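The resolution parameter enables a classic trade: quantize priorities into buckets of width equal to the resolution, so inserts become O(1) at the cost of unordered extraction within a bucket. A minimal sketch of that idea (the bucket layout is an illustrative assumption, not the patent's exact structure):

```python
# Sketch of a resolution-bucketed priority store in the spirit of patent
# 11704153: priorities in [p_min, p_max] are quantized into buckets of width
# equal to the resolution; elements whose priorities differ by less than the
# resolution may come out in either order, in exchange for O(1) push.
class BucketQueue:
    def __init__(self, p_min, p_max, resolution):
        n_buckets = int((p_max - p_min) / resolution) + 1
        self.p_min, self.resolution = p_min, resolution
        self.buckets = [[] for _ in range(n_buckets)]

    def push(self, priority, item):
        """O(1): drop the item into its priority bucket."""
        index = int((priority - self.p_min) / self.resolution)
        self.buckets[index].append(item)

    def pop(self):
        """Return an element from the lowest non-empty priority bucket."""
        for bucket in self.buckets:
            if bucket:
                return bucket.pop(0)
        raise IndexError("empty queue")

q = BucketQueue(p_min=0.0, p_max=10.0, resolution=1.0)
q.push(2.3, "b")
q.push(0.7, "a")
first = q.pop()
```

Compare a binary heap, where push and pop are both O(log n) but ordering is exact: the bucket queue trades the within-resolution ordering guarantee for constant-time inserts.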
-
Patent number: 11704429
Abstract: An information computer system is provided for securely releasing time-sensitive information to recipients via a blockchain. A submitter submits a document to the system and a blockchain transaction is generated and submitted to the blockchain based on the document (e.g., the document is included as part of the blockchain transaction). An editor may edit the document and an approver may approve the document for release to the recipients. Each modification and/or approval of the document is recorded as a separate transaction on the blockchain, where each of the submitter, editor, approver, and recipients interact with the blockchain with corresponding unique digital identifiers, such as private keys.
Type: Grant
Filed: October 28, 2021
Date of Patent: July 18, 2023
Assignee: NASDAQ, INC.
Inventors: Akbar Ansari, Thomas Fay, Dominick Paniscotti
-
Patent number: 11693668
Abstract: A parallel processing apparatus includes a plurality of compute nodes, and a job management device that allocates computational resources of the plurality of compute nodes to jobs, the job management device including circuitry configured to determine a resource search time range based on respective scheduled execution time periods of a plurality of jobs including a job being executed and a job waiting for execution, and search for free computational resources to be allocated to a job waiting for execution that is a processing target among the plurality of jobs, from among computational resources of the plurality of compute nodes within the resource search time range, by backfill scheduling.
Type: Grant
Filed: April 24, 2020
Date of Patent: July 4, 2023
Assignee: FUJITSU LIMITED
Inventor: Akitaka Iwata
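Backfill scheduling, as used above, starts a waiting job early on idle nodes only if it will finish before the reservation held by the highest-priority waiting job. A minimal sketch of that admission test, with an invented cluster model:

```python
# Sketch of the backfill admission test used in patent 11693668: a waiting
# job may start early on free nodes if it fits and will complete before the
# reservation of the top waiting job. Times and node counts are invented.

def can_backfill(job, free_nodes, now, reservation_start):
    """True if the job fits on free nodes and ends before the reservation."""
    return (job["nodes"] <= free_nodes
            and now + job["runtime"] <= reservation_start)

now = 100
reservation_start = 160   # when the big waiting job is scheduled to start
free_nodes = 8
queue = [
    {"name": "small-1", "nodes": 4, "runtime": 50},   # ends at 150: fits
    {"name": "small-2", "nodes": 4, "runtime": 90},   # ends at 190: too long
]
backfilled = [job["name"] for job in queue
              if can_backfill(job, free_nodes, now, reservation_start)]
```

The patent's resource search time range bounds how far ahead this test looks, which keeps the search cheap when many jobs are queued.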
-
Patent number: 11693697
Abstract: A computer-implemented method, a computer program product, and a computer system for optimizing workload placements in a system of multiple platforms as a service. A computer first places respective workloads on respective platforms that yield lowest costs for the respective workloads. The computer determines whether mandatory constraints are satisfied. The computer checks best effort constraints, in response to the mandatory constraints being satisfied. The computer determines a set of workloads for which the best effort constraints are not satisfied and determines a set of candidate platforms that yield the lowest costs and enable the best effort constraints to be satisfied. From the set of workloads, the computer selects a workload that has a lowest upgraded cost and updates the workload by setting an upgraded platform index.
Type: Grant
Filed: December 6, 2020
Date of Patent: July 4, 2023
Assignee: International Business Machines Corporation
Inventor: Lior Aronovich
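The two-phase structure (cheapest placement first, then upgrade the workload that is cheapest to fix) can be sketched directly. Costs, platform names, and the constraint check are illustrative assumptions:

```python
# Sketch of the placement flow in patent 11693697: phase 1 places each
# workload on its lowest-cost platform; phase 2 finds workloads whose
# best-effort constraints fail there and upgrades the one with the lowest
# upgraded cost to a platform that satisfies them. All data is invented.

def place(workloads):
    # Phase 1: cheapest platform per workload.
    placements = {w["name"]: min(w["costs"], key=w["costs"].get) for w in workloads}
    # Phase 2: workloads whose best-effort constraints fail on that platform.
    violating = [w for w in workloads if placements[w["name"]] not in w["satisfies"]]
    if violating:
        def upgraded(w):
            # Cheapest candidate platform that satisfies the constraints.
            return min(w["satisfies"], key=lambda p: w["costs"][p])
        # Upgrade only the workload with the lowest upgraded cost.
        chosen = min(violating, key=lambda w: w["costs"][upgraded(w)])
        placements[chosen["name"]] = upgraded(chosen)
    return placements

workloads = [
    {"name": "w1", "costs": {"basic": 1, "premium": 5}, "satisfies": {"premium"}},
    {"name": "w2", "costs": {"basic": 2, "premium": 3}, "satisfies": {"premium"}},
]
result = place(workloads)
```

Here both workloads violate their best-effort constraint on the cheap platform, but only w2 is upgraded, since its upgraded cost (3) is lower than w1's (5); repeating phase 2 would handle the rest iteratively.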
-
Patent number: 11687364
Abstract: An apparatus is configured to collect information related to a first activity and analyze the collected information to determine decision data. The information is stored in a first list of the source processing core for scheduling execution of the activity by a destination processing core to avoid cache misses. The source processing core is configured to transmit information related to the decision data using an interrupt, to a second list associated with a scheduler of the destination processing core, if the destination processing core is currently executing a second activity having a lower priority than the first activity.
Type: Grant
Filed: July 16, 2020
Date of Patent: June 27, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Raju Udava Siddappa, Chandan Kumar, Kamal Kishore, Tushar Vrind, Venkata Raju Indukuri, Balaji Somu Kandasamy
-
Patent number: 11687055
Abstract: The present disclosure is intended to enable a user to grasp a state of load on an arithmetic processing unit (100, 200) so that the user can stop an excessive function of the arithmetic processing unit (100, 200), or can transfer part of arithmetic processes to another arithmetic processing unit (100, 200) with a small load. Included are the arithmetic processing unit (100, 200) that executes a plurality of processes related to servo control processing; an observation unit (300) that determines at least one of point-of-time information about start of each of the processes executed by the arithmetic processing unit or point-of-time information about end of each of the processes executed by the arithmetic processing unit; and an output unit (400) that calculates information about usage of the arithmetic processing unit based on the point-of-time information determined by the observation unit, and outputs the calculated information.
Type: Grant
Filed: August 13, 2020
Date of Patent: June 27, 2023
Assignee: FANUC CORPORATION
Inventors: Wei Luo, Satoshi Ikai, Tsutomu Nakamura
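Deriving a usage figure from per-process start/end timestamps, as the observation and output units above do, reduces to summing busy time inside an observation window. The windowing model below is an assumption for the example, not taken from the patent.

```python
# Minimal sketch: compute the busy fraction of one processing unit from
# (start, end) timestamps of its processes. Assumes intervals on a single
# unit do not overlap (processes run sequentially on that unit).
def utilization(intervals, window):
    """intervals: list of (start, end) times; window: (t0, t1)."""
    t0, t1 = window
    busy = 0.0
    for s, e in sorted(intervals):
        s, e = max(s, t0), min(e, t1)  # clip each interval to the window
        if e > s:
            busy += e - s
    return busy / (t1 - t0)
```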
-
Patent number: 11681546
Abstract: Methods and apparatuses are provided for data processing. The method includes receiving a first data packet and a second data packet; associating first codes with the first data packet and second codes with the second data packet to generate a combined data packet after receiving the first data packet and the second data packet, wherein the first codes and the second codes specify processing to be performed on the combined data packet; generating the combined data packet comprising the first data packet and the second data packet in response to determining that the first data packet and the second data packet are correlated; and performing the processing on the combined data packet in accordance with the first codes or the second codes.
Type: Grant
Filed: April 28, 2021
Date of Patent: June 20, 2023
Assignee: Dongfang Jingyuan Electron Limited
Inventors: Zhaoli Zhang, Weimin Ma, Naihong Tang
-
Patent number: 11677681
Abstract: Systems and methods for allocating computing resources within a distributed computing system are disclosed. Computing resources such as CPUs, GPUs, network cards, and memory are allocated to jobs submitted to the system by a scheduler. System configuration and interconnectivity information is gathered by a mapper and used to create a graph. Resource allocation is optimized based on one or more quality of service (QoS) levels determined for the job. Job performance characterization, affinity models, computer resource power consumption, and policies may also be used to optimize the allocation of computing resources.
Type: Grant
Filed: July 29, 2021
Date of Patent: June 13, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Max Alt, Paulo Roberto Pereira de Souza filho
-
Patent number: 11664025
Abstract: The present disclosure is generally directed to the generation of voice-activated data flows in an interconnected network. The voice-activated data flows can include input audio signals that include a request and are detected at a client device. The client device can transmit the input audio signal to a data processing system, where the input audio signal can be parsed and passed to the data processing system of a service provider to fulfill the request in the input audio signal. The present solution is configured to conserve network resources by reducing the number of network transmissions needed to fulfill a request.
Type: Grant
Filed: May 28, 2021
Date of Patent: May 30, 2023
Assignee: GOOGLE LLC
Inventors: Gaurav Bhaya, Ulas Kirazci, Bradley Abrams, Adam Coimbra, Ilya Firman, Carey Radebaugh
-
Patent number: 11663044
Abstract: The invention relates to an apparatus for second offloads in a graphics processing unit (GPU). The apparatus includes an engine; and a compute unit (CU). The engine is arranged operably to store an operation table including entries. The CU is arranged operably to fetch computation codes including execution codes, and synchronization requests; execute each execution code; and send requests to the engine in accordance with the synchronization requests for instructing the engine to allow components inside or outside of the GPU to complete operations in accordance with the entries of the operation table.
Type: Grant
Filed: July 2, 2021
Date of Patent: May 30, 2023
Assignee: Shanghai Biren Technology Co., Ltd
Inventors: HaiChuan Wang, Song Zhao, GuoFang Jiao, ChengPing Luo, Zhou Hong
-
Patent number: 11663012
Abstract: Disclosed herein are systems and method for detecting coroutines. A method may include: identifying an application running on a computing device, wherein the application includes a plurality of coroutines; determining an address of a common entry point for coroutines, wherein the common entry point is found in a memory of the application; identifying, using an injected code, at least one stack trace entry for the common entry point; detecting coroutine context data based on the at least one stack trace entry; adding an identifier of a coroutine associated with the coroutine context data to a list of detected coroutines; and storing the list of detected coroutines in target process memory associated with the application.
Type: Grant
Filed: November 29, 2021
Date of Patent: May 30, 2023
Assignee: Cloud Linux Software Inc.
Inventors: Igor Seletskiy, Pavel Boldin
-
Patent number: 11662961
Abstract: Delivery form information on a print product is acquired, position information indicating a position of quality inspection on the print product is generated in accordance with the acquired delivery form information, and quality report data including the position information is generated.
Type: Grant
Filed: February 4, 2022
Date of Patent: May 30, 2023
Assignee: Canon Kabushiki Kaisha
Inventors: Yoshiji Kanamoto, Toshihiko Iida, Kimio Hayashi
-
Patent number: 11656675
Abstract: A method of operating an application processor including a central processing unit (CPU) with at least one core and a memory interface includes measuring, during a first period, a core active cycle of a period in which the at least one core performs an operation to execute instructions and a core idle cycle of a period in which the at least one core is in an idle state, generating information about a memory access stall cycle of a period in which the at least one core accesses the memory interface in the core active cycle, correcting the core active cycle using the information about the memory access stall cycle to calculate a load on the at least one core using the corrected core active cycle, and performing a DVFS operation on the at least one core using the calculated load on the at least one core.
Type: Grant
Filed: May 9, 2022
Date of Patent: May 23, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Seok-Ju Yoon, Nak-Woo Sung, Seung-Chull Suh, Taek-Ki Kim, Jae-Joon Yoo, Eun-Ok Jo
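The correction step above can be illustrated with a toy calculation: subtracting memory-access stall cycles from the measured active cycles keeps memory-bound waiting from being mistaken for compute demand when choosing a frequency. The formula and the frequency-selection rule below are assumptions for the example; the patent does not specify them here.

```python
# Hypothetical sketch of DVFS load correction: cycles spent stalled on the
# memory interface are removed from the active count before computing load.
def corrected_load(active_cycles, idle_cycles, mem_stall_cycles):
    useful_active = active_cycles - mem_stall_cycles
    return useful_active / (active_cycles + idle_cycles)

def pick_frequency(load, levels=(0.5, 1.0, 1.5, 2.0)):
    """Choose the lowest frequency (GHz) whose relative capacity covers the load."""
    for f in levels:
        if load <= f / levels[-1]:
            return f
    return levels[-1]
```

Without the correction, a core that is 80% active but stalled on memory for 30% of the period would look far busier than it is and be clocked higher than necessary.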
-
Patent number: 11658920
Abstract: Embodiments are described for autonomously and dynamically allocating resources in a distributed network based on forecasted a-priori CPU resource utilization, rather than a manual throttle setting. A multivariate approach (CPU idle %, disk I/O, network, and memory), rather than a single-variable one, is used with Probabilistic Weighted Fuzzy Time Series (PWFTS) for forecasting compute resources. The dynamic throttling is combined with adaptive compute change rate detection and correction. A single spike detection and removal mechanism is used to prevent the application of overly frequent throttling changes. Such a method can be implemented for several use cases including, but not limited to: cloud data migration, replication to a storage server, system upgrades, bandwidth throttling in storage networks, and garbage collection.
Type: Grant
Filed: May 5, 2021
Date of Patent: May 23, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Rahul Deo Vishwakarma, Jayanth Kumar Reddy Perneti, Gopal Singh
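The "single spike detection and removal" idea can be sketched with a median filter: an isolated outlier in the utilization series is replaced by its local median before the forecast drives a throttle change, so one transient spike does not trigger a new throttling decision. The 3-point window and threshold are assumptions for illustration; the patent's PWFTS forecasting itself is not reproduced here.

```python
# Hedged sketch: remove single-sample spikes from a utilization series
# before feeding it to a forecaster, to avoid overly frequent throttle changes.
import statistics

def despike(series, threshold):
    """Replace any interior point that deviates from the median of its
    3-point neighborhood by more than `threshold` with that median."""
    out = list(series)
    for i in range(1, len(series) - 1):
        med = statistics.median(series[i - 1:i + 2])
        if abs(series[i] - med) > threshold:
            out[i] = med
    return out
```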
-
Patent number: 11656910
Abstract: The disclosure provides a task segmentation device and method, a task processing device and method, and a multi-core processor. The task segmentation device includes a granularity task segmentation unit configured to segment a task by adopting at least one granularity to form subtasks, and a task segmentation granularity selection unit configured to select the granularity to be adopted.
Type: Grant
Filed: November 25, 2019
Date of Patent: May 23, 2023
Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Inventors: Tianshi Chen, Shengyuan Zhou, Shaoli Liu
-
Patent number: 11654552
Abstract: Provided are systems and methods for training a robot. The method commences with collecting, by the robot, sensor data from a plurality of sensors of the robot. The sensor data may be related to a task being performed by the robot based on an artificial intelligence (AI) model. The method may further include determining, based on the sensor data and the AI model, that a probability of completing the task is below a threshold. The method may continue with sending a request for operator assistance to a remote computing device and receiving, in response to sending the request, teleoperation data from the remote computing device. The method may further include causing the robot to execute the task based on the teleoperation data. The method may continue with generating training data based on the sensor data and results of execution of the task for updating the AI model.
Type: Grant
Filed: July 29, 2020
Date of Patent: May 23, 2023
Assignee: TruPhysics GmbH
Inventor: Albert Groz
-
Patent number: 11657001
Abstract: A management technology for mapping data of a non-volatile memory is shown. A controller establishes a first mapping table and a second mapping table. By looking up the first mapping table, the controller maps a first logical address issued by the host for data reading to a first block substitute. By looking up the second mapping table, the controller maps the first block substitute to a first physical block of the non-volatile memory. The first mapping table further records a first offset for the first logical address. According to the first offset recorded in the first mapping table, the first logical address is mapped to a first data management unit having the first offset in the first physical block represented by the first block substitute.
Type: Grant
Filed: January 26, 2022
Date of Patent: May 23, 2023
Assignee: SILICON MOTION, INC.
Inventor: Sheng-Hsun Lin
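The two-level translation in this abstract reduces to two table lookups: logical address to (block substitute, offset), then block substitute to physical block. The dictionary shapes below are assumptions for illustration; a real flash controller would pack these tables into fixed-width entries in DRAM or flash.

```python
# Illustrative sketch of the two-level mapping: the first table yields a
# block substitute plus an offset, the second resolves the substitute to a
# physical block of the non-volatile memory.
def translate(logical_addr, l2s_table, s2p_table):
    """Return (physical_block, offset) for a host logical address."""
    substitute, offset = l2s_table[logical_addr]  # first mapping table
    physical_block = s2p_table[substitute]        # second mapping table
    return physical_block, offset
```

The indirection through a block substitute lets the controller remap a whole physical block (e.g. after garbage collection) by updating one entry in the second table instead of every logical address that points into it.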
-
Patent number: 11650846
Abstract: The present disclosure relates to a method, device and computer program product for processing a job. In a method, a first group of tasks in a first portion of a job are obtained based on a job description of the job from a client. The first group of tasks are allocated to a first group of processing devices in a distributed processing system, respectively, so that the first group of processing devices generate a first group of task results of the first group of tasks, respectively, the first group of processing devices being located in a first processing system based on a cloud and a second processing system based on blockchain. The first group of task results of the first group of tasks are received from the first group of processing devices, respectively. A job result of the job is generated at least partly based on the first group of task results.
Type: Grant
Filed: February 21, 2020
Date of Patent: May 16, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Pengfei Wu, YuHong Nie, Jinpeng Liu
-
Patent number: 11645226
Abstract: Embodiments are directed to a processor having a functional slice architecture. The processor is divided into tiles (or functional units) organized into a plurality of functional slices. The functional slices are configured to perform specific operations within the processor, which includes memory slices for storing operand data and arithmetic logic slices for performing operations on received operand data (e.g., vector processing, matrix manipulation). The processor includes a plurality of functional slices of a module type, each functional slice having a plurality of tiles. The processor further includes a plurality of data transport lanes for transporting data in a direction indicated in a corresponding instruction. The processor also includes a plurality of instruction queues, each instruction queue associated with a corresponding functional slice of the plurality of functional slices, wherein the instructions in the instruction queues comprise a functional slice specific operation code.
Type: Grant
Filed: March 17, 2022
Date of Patent: May 9, 2023
Assignee: Groq, Inc.
Inventors: Dennis Charles Abts, Jonathan Alexander Ross, John Thompson, Gregory Michael Thorson
-
Patent number: 11645213
Abstract: A data processing system includes a memory system including a memory device storing data and a controller performing a data program operation or a data read operation with the memory device, and a host suitable for requesting the data program operation or the data read operation from the memory system. The controller can perform a serial communication to control a memory which is arranged outside the memory system and engaged with the host.
Type: Grant
Filed: April 4, 2022
Date of Patent: May 9, 2023
Assignee: SK hynix Inc.
Inventor: Jong-Min Lee