Process Scheduling Patents (Class 718/102)
  • Patent number: 11789782
    Abstract: Systems, devices, and methods discussed herein are directed to intelligently adjusting the set of worker nodes within a computing cluster. By way of example, a computing device (or service) may monitor performance metrics of a set of worker nodes of a computing cluster. When a performance metric is detected that is below a performance threshold, the computing device may perform a first adjustment (e.g., an increase or decrease) to the number of nodes in the cluster. Training data may be obtained based at least in part on the first adjustment and utilized with supervised learning techniques to train a machine-learning model to predict future performance changes in the cluster. Subsequent performance metrics and/or cluster metadata may be provided to the machine-learning model to obtain output indicating a predicted performance change. An additional adjustment to the number of worker nodes may be performed based at least in part on the output.
    Type: Grant
    Filed: January 26, 2023
    Date of Patent: October 17, 2023
    Assignee: Oracle International Corporation
    Inventors: Sandeep Akinapelli, Devaraj Das, Devarajulu Kavali, Puneet Jaiswal, Velimir Radanovic
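The two-phase adjustment this abstract describes (a reactive threshold response, then a model-driven one) can be sketched as follows. This is a minimal illustration, not the patented method; every name here (reactive_adjust, predictive_adjust, PERF_THRESHOLD, the stand-in model) is hypothetical.

```python
PERF_THRESHOLD = 0.7  # minimum acceptable performance metric (illustrative)

def reactive_adjust(node_count, perf_metric, step=1):
    """Phase 1: adjust the worker-node count when a metric drops below threshold."""
    if perf_metric < PERF_THRESHOLD:
        return node_count + step  # scale out to recover performance
    return node_count

def predictive_adjust(node_count, model, metrics):
    """Phase 2: let a trained model predict the performance change and
    size the cluster before the change materializes."""
    predicted_change = model(metrics)  # e.g. -0.2 means expected degradation
    if predicted_change < 0:
        return node_count + 1          # pre-emptively add a worker
    if predicted_change > 0.1:
        return max(1, node_count - 1)  # shrink when headroom is predicted
    return node_count

# Usage with a stand-in "model":
nodes = reactive_adjust(4, 0.6)                       # below threshold, so 5
nodes = predictive_adjust(nodes, lambda m: -0.2, {})  # predicted drop, so 6
print(nodes)
```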
  • Patent number: 11789876
    Abstract: A device including an interface with peripherals includes a first interface that receives a request from a host, a second interface that periodically receives at least one first sample input from the peripherals in response to the request from the host, a memory that stores an active time table including a processing time of a sample input provided by each of the peripherals in each of a plurality of operating conditions respectively corresponding to different power consumptions, and a processing circuit that identifies at least one of the plurality of operating conditions based on the active time table and a period of the at least one first sample input.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: October 17, 2023
    Inventors: Boojin Kim, Sukmin Kang, Shinkyu Park, Boyoung Kim, Sukwon Ryoo
  • Patent number: 11789895
    Abstract: Embodiments described herein provide an on-chip heterogeneous Artificial Intelligence (AI) processor comprising at least two different architectural types of computation units, wherein each of the computation units is associated with a respective task queue configured to store computation subtasks to be executed by the computation unit. The AI processor also comprises a controller configured to partition a received computation graph associated with a neural network into a plurality of computation subtasks according to a preset scheduling strategy and distribute the computation subtasks to the task queues of the computation units. The AI processor further comprises a storage unit configured to store data required by the computation units to execute their respective computation subtasks and an access interface configured to access an off-chip memory. Different application tasks are processed by managing and scheduling the different architectural types of computation units in an on-chip heterogeneous manner.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: October 17, 2023
    Assignee: SHANGHAI DENGLIN TECHNOLOGIES CO., LTD.
    Inventors: Ping Wang, Jianwen Li
  • Patent number: 11789788
    Abstract: In one implementation, systems and methods are provided for processing digital experience information. A computer-implemented system for processing digital experience information may comprise a central data location. The central data location may comprise a connector that may be configured to receive information belonging to a category from an information source; an event backbone that may be configured to route the information received by the connector based on the category; a translator that may be configured to transform the received information into a common data model; and a database that may be configured to store the received information. The event backbone may be further configured to send information to the connector from the event backbone and the database based on one or more criteria.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: October 17, 2023
    Assignee: The PNC Financial Services Group, Inc.
    Inventor: Michael Nitsopoulos
  • Patent number: 11782751
    Abstract: A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP) may include obtaining, from an NBMP source, a workflow having a workflow descriptor (WD) indicating a workflow descriptor document (WDD); based on the workflow, obtaining a task having a task descriptor (TD) indicating a task descriptor document (TDD); based on the task, obtaining, from a function repository, a function having a function descriptor (FD) indicating a function descriptor document (FDD); and processing the media content, using the workflow, the task, and the function.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: October 10, 2023
    Assignee: TENCENT AMERICA LLC
    Inventor: Iraj Sodagar
  • Patent number: 11782760
    Abstract: A method for executing applications in a system comprising general hardware and reconfigurable hardware includes accessing a first execution file comprising metadata storing a first priority indicator associated with a first application, and a second execution file comprising metadata storing a second priority indicator associated with a second application. In an example, use of the reconfigurable hardware is interleaved between the first application and the second application, and the interleaving is scheduled to take into account (i) workload of the reconfigurable hardware and (ii) the first priority indicator and the second priority indicator associated with the first application and the second application, respectively. In an example, when the reconfigurable hardware is used by one of the first and second applications, the general hardware is used by another of the first and second applications.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: October 10, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Anand Misra, Arnav Goel, Qi Zheng, Raghunath Shenbagam, Ravinder Kumar
  • Patent number: 11782757
    Abstract: A machine learning network is implemented by executing a computer program of instructions on a machine learning accelerator (MLA) comprising a plurality of interconnected storage elements (SEs) and processing elements (PEs). The instructions are partitioned into blocks, which are retrieved from off-chip memory. Each block includes a set of deterministic instructions (MLA instructions) to be executed by on-chip storage elements and/or processing elements according to a static schedule from a compiler. The MLA instructions may require data retrieved from off-chip memory by memory access instructions contained in prior blocks. The compiler also schedules the memory access instructions in a manner that avoids contention for access to the off-chip memory. By avoiding contention, the execution time of off-chip memory accesses becomes predictable enough and short enough that the memory access instructions may be scheduled so that they are known to complete before the retrieved data is required.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: October 10, 2023
    Assignee: SiMa Technologies, Inc.
    Inventor: Reed Kotler
  • Patent number: 11768718
    Abstract: In one implementation, systems and methods are provided for processing digital experience information. A computer-implemented system for processing digital experience information may comprise a central data location. The central data location may comprise a connector that may be configured to receive information belonging to a category from an information source; an event backbone that may be configured to route the information received by the connector based on the category; a translator that may be configured to transform the received information into a common data model; and a database that may be configured to store the received information. The event backbone may be further configured to send information to the connector from the event backbone and the database based on one or more criteria.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: September 26, 2023
    Assignee: The PNC Financial Services Group, Inc.
    Inventor: Michael Nitsopoulos
  • Patent number: 11755360
    Abstract: A computer-implemented method for detecting bottlenecks in microservice cloud systems is provided including identifying a plurality of nodes within one or more clusters associated with a plurality of containers, collecting thread profiles and network connectivity data by periodically dumping stacks of threads and identifying network connectivity status of one or more containers of the plurality of containers, classifying the stacks of threads based on a plurality of thread states, constructing a microservice dependency graph from the network connectivity data, aligning the plurality of nodes to bar graphs to depict an average number of working threads in a corresponding microservice, and generating, on a display, an illustration outlining the plurality of thread states, each of the plurality of thread states having a different representation.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: September 12, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tatsushi Inagaki, Yohei Ueda, Tatsuhiro Chiba, Marcelo Carneiro Do Amaral, Sunyanan Choochotkaew, Qi Zhang
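The classification step this abstract mentions (sorting periodically dumped stacks into thread states, then counting working threads per microservice) might look like the sketch below. The state names and frame patterns are invented examples, not the patent's taxonomy.

```python
from collections import Counter

def classify(stack):
    """Classify one dumped stack by its top frame (patterns are illustrative)."""
    top = stack[0]
    if "epollWait" in top or "socketRead" in top:
        return "waiting-io"   # blocked on network connectivity
    if "park" in top or "Object.wait" in top:
        return "idle"         # parked in a thread pool
    return "working"          # actively executing service code

def working_threads(dumps):
    """Count working threads in one periodic dump (a list of per-thread stacks),
    the quantity the per-microservice bar graphs would average."""
    states = Counter(classify(s) for s in dumps)
    return states["working"]

dumps = [["epollWait", "run"], ["compute", "run"], ["park", "run"]]
print(working_threads(dumps))  # 1
```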
  • Patent number: 11748160
    Abstract: Load balancing processes are performed in an observability pipeline system comprising a plurality of computing resources. In some aspects, the observability pipeline system defines a leader role and worker roles. A plurality of computing jobs each include computing tasks associated with event data. The leader role dispatches the computing tasks to the worker roles according to a least in-flight task dispatch criteria, which includes iteratively: identifying an available worker role; identifying one or more incomplete computing jobs; selecting, from the one or more incomplete computing jobs, a computing job that has the least number of in-flight computing tasks currently being executed in the observability pipeline system; identifying a next computing task from the selected computing job; and dispatching the next computing task to the available worker role. The worker roles execute the computing tasks by applying an observability pipeline process to the event data associated with the respective computing task.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: September 5, 2023
    Assignee: Cribl, Inc.
    Inventors: Dritan Bitincka, Ledion Bitincka, Nicholas Robert Romito, Clint Sharp
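The least in-flight dispatch criteria described above reduce to a simple selection rule: among jobs with pending tasks, pick the one with the fewest tasks currently executing. A minimal sketch, with invented data structures (a dict of pending task lists and a dict of in-flight counts):

```python
def pick_job(jobs, in_flight):
    """Select the incomplete job with the fewest in-flight tasks
    (ties broken by iteration order)."""
    incomplete = [j for j in jobs if jobs[j]]  # jobs that still have pending tasks
    return min(incomplete, key=lambda j: in_flight.get(j, 0))

def dispatch(jobs, in_flight):
    """Pop the next task from the selected job and count it as in flight,
    as the leader role would before handing it to an available worker."""
    job = pick_job(jobs, in_flight)
    task = jobs[job].pop(0)
    in_flight[job] = in_flight.get(job, 0) + 1
    return job, task

jobs = {"A": ["a1", "a2"], "B": ["b1"]}
in_flight = {"A": 3, "B": 0}
print(dispatch(jobs, in_flight))  # ('B', 'b1'): job B has fewer in-flight tasks
```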
  • Patent number: 11740793
    Abstract: A data storage system having non-volatile media, a buffer memory, a processing device, and a data pre-fetcher. The data pre-fetcher receives commands to be executed in the data storage system, provides the commands as input to a predictive model, obtains at least one command identified for pre-fetching, as output from the predictive model having the commands as input. Prior to the command being executed in the data storage device, the data pre-fetcher retrieves, from the non-volatile memory, at least a portion of data to be used in execution of the command; and stores the portion of data in the buffer memory. The retrieving and storing the portion of the data can be performed concurrently with the execution of many commands before the execution of the command, to reduce the latency impact of the command on other commands that are executed concurrently with the execution of the command.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: August 29, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Alex Frolikov, Zachary Andrew Pete Vogel, Joe Gil Mendes, Chandra Mouli Guda
  • Patent number: 11736413
    Abstract: Example methods and systems for a programmable virtual network interface controller (VNIC) to perform packet processing are described. In one example, the programmable VNIC may modify a packet processing pipeline based on a received instruction. The modification may include injecting a second packet processing stage among the multiple first packet processing stages of the packet processing pipeline. In response to detecting an ingress packet that requires processing by the programmable VNIC, the ingress packet may be steered towards the modified packet processing pipeline. The ingress packet may then be processed using the modified packet processing pipeline by performing the second packet processing stage (a) to bypass at least one of the multiple first processing stages, or (b) in addition to the multiple first processing stages.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: August 22, 2023
    Assignee: VMWARE, INC.
    Inventors: Yong Wang, Boon Seong Ang, Wenyi Jiang, Guolin Yang
  • Patent number: 11734066
    Abstract: Generally discussed herein are devices, systems, and methods for scheduling tasks to be completed by resources. A method can include identifying features of the task, the features including a time-dependent feature and a time-independent feature, the time-dependent feature indicating a time the task is more likely to be successfully completed by the resource, converting the features to feature values based on a predefined mapping of features to feature values in a first memory device, determining, by a gradient boost tree model and based on a first current time and the feature values, a likelihood the resource will successfully complete the task, and scheduling the task to be performed by the resource based on the determined likelihood.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: August 22, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jinchao Li, Yu Wang, Karan Srivastava, Jianfeng Gao, Prabhdeep Singh, Haiyuan Cao, Xinying Song, Hui Su, Jaideep Sarkar
  • Patent number: 11734091
    Abstract: A remote procedure call channel for interprocess communication in a managed code environment ensures thread-affinity on both sides of an interprocess communication. Using the channel, calls from a first process to a second process are guaranteed to run on a same thread in a target process. Furthermore, calls from the second process back to the first process will also always execute on the same thread. An interprocess communication manager that allows thread affinity and reentrancy is able to correctly keep track of the logical thread of execution so calls are not blocked in unmanaged hosts. Furthermore, both unmanaged and managed hosts are able to make use of transparent remote call functionality provided by an interprocess communication manager for the managed code environment.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: August 22, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jackson M. Davis, John A. Shepard
  • Patent number: 11726936
    Abstract: A system can include a plurality of processors. Each processor of the plurality of processors can be configured to execute program code. The system can include a direct memory access system configured for multi-processor operation. The direct memory access system can include a plurality of data engines coupled to a plurality of interfaces via a plurality of switches. The plurality of switches can be programmable to couple different ones of the plurality of data engines to different ones of the plurality of processors for performing direct memory access operations based on a plurality of host profiles corresponding to the plurality of processors.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: August 15, 2023
    Inventors: Chandrasekhar S. Thyamagondlu, Darren Jue, Ravi Sunkavalli, Akhil Krishnan, Tao Yu, Kushagra Sharma
  • Patent number: 11720156
    Abstract: An electronic device includes a connection unit including a first terminal for receiving power from a power supply apparatus and a second terminal for receiving power supply capability of the power supply apparatus, a communication control unit that performs communication with the power supply apparatus via the second terminal, and a power control unit that performs a process for limiting power supplied from the power supply apparatus to a predetermined power or less in a case where the power supply capability is received from the power supply apparatus.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: August 8, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Yuki Tsujimoto, Hiroki Kitanosako, Masashi Yoshida
  • Patent number: 11709667
    Abstract: In a symmetric hardware accelerator system, an initial hardware accelerator is selected for an upgrade of firmware. The initial and other hardware accelerators handle workloads that have been balanced across the hardware accelerators. Workloads are rebalanced by directing workloads having low CPU utilization to the initial hardware accelerator. A CPU fallback is conducted of the workloads of the initial hardware accelerator to the CPU. While the CPU is handling the workloads, firmware of the initial hardware accelerator is upgraded.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: July 25, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Tao Chen, Yong Zou, Ran Liu
  • Patent number: 11709718
    Abstract: A barrier synchronization circuit that performs barrier synchronization of a plurality of processes executed in parallel by a plurality of processing circuits, the barrier synchronization circuit includes a first determination circuit configured to determine whether the number of first processing circuits among the plurality of the processing circuits is equal to or greater than a first threshold value, the first processing circuits having completed the process, and an instruction circuit configured to instruct a second processing circuit among the plurality of the processing circuits to forcibly stop the process when it is determined that the number is equal to or greater than the first threshold value by the first determination circuit, the second processing circuit having not completed the process.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: July 25, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Kanae Nakagawa, Masaki Arai, Yasumoto Tomita
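The decision logic of this threshold-based barrier (once enough processes finish, forcibly stop the stragglers rather than wait for them) can be sketched in a few lines. The function and argument names are illustrative, not from the patent.

```python
def barrier_check(completed, total, threshold):
    """Return the set of straggler process indices to forcibly stop once
    the count of completed processes reaches the threshold; otherwise
    return an empty set, meaning the barrier keeps waiting."""
    if len(completed) >= threshold:
        # "second processing circuits": those that have not completed
        return set(range(total)) - set(completed)
    return set()  # below threshold: no forcible stop yet

# 6 of 8 processes done with threshold 6: stop the two stragglers
print(barrier_check({0, 1, 2, 3, 4, 5}, total=8, threshold=6))  # {6, 7}
```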
  • Patent number: 11709467
    Abstract: A time optimal speed planning method and system based on constraint classification. The method comprises: reading path information and carrying out curve fitting to obtain a path curve; sampling the path curve, and considering static constraint to obtain a static upper bound value of a speed curve; considering dynamic constraint, and combining the static upper bound value of the speed curve to construct a time optimal speed model; carrying out convex transformation on the time optimal speed model to obtain a convex model; and solving the convex model based on a quadratic sequence planning method to obtain a final speed curve. The system comprises: a path curve module, a static constraint module, a dynamic constraint module, a model transformation module and a solving module.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: July 25, 2023
    Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Jian Gao, Guixin Zhang, Lanyu Zhang, Haixiang Deng, Yun Chen, Yunbo He, Xin Chen
  • Patent number: 11704429
    Abstract: An information computer system is provided for securely releasing time-sensitive information to recipients via a blockchain. A submitter submits a document to the system and a blockchain transaction is generated and submitted to the blockchain based on the document (e.g., the document is included as part of the blockchain transaction). An editor may edit the document and an approver may approve the document for release to the recipients. Each modification and/or approval of the document is recorded as a separate transaction on the blockchain where each of the submitter, editor, approver, and recipients interact with the blockchain with corresponding unique digital identifiers—such as private keys.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: July 18, 2023
    Assignee: NASDAQ, INC.
    Inventors: Akbar Ansari, Thomas Fay, Dominick Paniscotti
  • Patent number: 11704153
    Abstract: A system for storing and extracting elements according to their priority takes into account not only the priorities of the elements but also three additional parameters, namely, a priority resolution pδ and two priority limits pmin and pmax. By allowing an ordering error if the difference in the priorities of elements is within the priority resolution, an improvement in performance is achieved.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: July 18, 2023
    Assignee: Reservoir Labs, Inc.
    Inventor: Jordi Ros-Giralt
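One common way to trade a bounded ordering error for speed, consistent with the parameters named in this abstract, is to quantize priorities in [pmin, pmax] into buckets of width pδ. The sketch below is an assumed illustration of that bucketing idea, not the patented structure; all names are invented.

```python
import math
from collections import deque

class ResolutionPQ:
    """Priority queue tolerating ordering error within p_delta over
    [p_min, p_max]: priorities are quantized into buckets, so extraction
    scans buckets instead of maintaining a total order."""

    def __init__(self, p_min, p_max, p_delta):
        self.p_min = p_min
        self.p_delta = p_delta
        n_buckets = math.ceil((p_max - p_min) / p_delta)
        self.buckets = [deque() for _ in range(n_buckets + 1)]

    def push(self, priority, item):
        idx = int((priority - self.p_min) // self.p_delta)
        self.buckets[idx].append(item)  # FIFO within a bucket

    def pop(self):
        for bucket in self.buckets:  # lowest-priority-first extraction
            if bucket:
                return bucket.popleft()
        raise IndexError("pop from empty queue")

pq = ResolutionPQ(p_min=0.0, p_max=10.0, p_delta=1.0)
pq.push(2.3, "x")
pq.push(2.7, "y")  # same bucket as "x": their order may be approximate
pq.push(0.5, "z")
print(pq.pop())  # 'z': a strictly lower bucket always comes out first
```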
  • Patent number: 11704249
    Abstract: Aspects of a storage device including a memory and a controller are provided. The controller may receive a prefetch request to retrieve data for a host having a promoted stream. The controller may access a frozen time table indicating hosts for which data has been prefetched and frozen times associated with the host and other hosts. The controller can determine whether the host has a higher priority over other hosts included in the frozen time table based on corresponding frozen times and data access parameters associated with the host. The controller may determine to prefetch the data for the host in response to the prefetch request when the host has a higher priority than the other hosts. The controller can receive a host read command associated with the promoted stream from the host and provide the prefetched data to the host in response to the host read command.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: July 18, 2023
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Adarsh Sreedhar, Ramanathan Muthiah
  • Patent number: 11693697
    Abstract: A computer-implemented method, a computer program product, and a computer system for optimizing workload placements in a system of multiple platforms as a service. A computer first places respective workloads on respective platforms that yield lowest costs for the respective workloads. The computer determines whether mandatory constraints are satisfied. The computer checks best effort constraints, in response to the mandatory constraints being satisfied. The computer determines a set of workloads for which the best effort constraints are not satisfied and determines a set of candidate platforms that yield the lowest costs and enable the best effort constraints to be satisfied. From the set of workloads, the computer selects a workload that has a lowest upgraded cost and updates the workload by setting an upgraded platform index.
    Type: Grant
    Filed: December 6, 2020
    Date of Patent: July 4, 2023
    Assignee: International Business Machines Corporation
    Inventor: Lior Aronovich
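The placement flow in this abstract (cheapest platform first, then upgrade the best-effort violator with the lowest upgraded cost) can be sketched greedily as below. This is an assumed simplification handling a single upgrade round; all function and parameter names are hypothetical.

```python
def place_workloads(workloads, platforms, cost, best_effort_ok):
    """cost[w][p] is the cost of workload w on platform p;
    best_effort_ok(w, p) says whether w's best-effort constraints hold on p."""
    # Step 1: cheapest platform per workload.
    placement = {w: min(platforms, key=lambda p: cost[w][p]) for w in workloads}
    # Step 2: workloads whose best-effort constraints are unsatisfied.
    violators = {w for w in workloads if not best_effort_ok(w, placement[w])}
    # Step 3: pick the violator with the lowest upgraded cost and move it
    # to its cheapest constraint-satisfying platform.
    def upgrade_cost(w):
        ok = [p for p in platforms if best_effort_ok(w, p)]
        return min(cost[w][p] for p in ok) if ok else float("inf")
    if violators:
        w = min(violators, key=upgrade_cost)
        ok = [p for p in platforms if best_effort_ok(w, p)]
        if ok:
            placement[w] = min(ok, key=lambda p: cost[w][p])
    return placement
```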
  • Patent number: 11693668
    Abstract: A parallel processing apparatus includes a plurality of compute nodes, and a job management device that allocates computational resources of the plurality of compute nodes to jobs, the job management device including circuitry configured to determine a resource search time range based on respective scheduled execution time periods of a plurality of jobs including a job being executed and a job waiting for execution, and search for free computational resources to be allocated to a job waiting for execution that is a processing target among the plurality of jobs, from among computational resources of the plurality of compute nodes within the resource search time range, by backfill scheduling.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: July 4, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Akitaka Iwata
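Backfill scheduling, the technique this abstract builds on, lets a short waiting job jump ahead into a free window as long as it finishes before the next reservation starts. A minimal sketch of that core test, with invented names and a tuple-based free window:

```python
def can_backfill(job_len, free_window):
    """A waiting job fits a free window only if it completes
    before the window's reservation begins."""
    start, reserved_start = free_window
    return start + job_len <= reserved_start

def backfill(queue, free_window):
    """Scan the wait queue (within the resource search time range) and
    return the first job short enough to fill the free window."""
    for job_id, job_len in queue:
        if can_backfill(job_len, free_window):
            return job_id
    return None  # nothing fits; the window stays idle

# A gap from t=6 until the next reservation at t=10:
queue = [("big", 7), ("small", 3)]
print(backfill(queue, free_window=(6, 10)))  # 'small' fills the gap
```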
  • Patent number: 11687055
    Abstract: The present disclosure is intended to enable a user to grasp a state of load on an arithmetic processing unit (100, 200) so that the user can stop an excessive function of the arithmetic processing unit (100, 200), or can transfer part of arithmetic processes to another arithmetic processing unit (100, 200) with a small load. Included are the arithmetic processing unit (100, 200) that executes a plurality of processes related to servo control processing; and an observation unit (300) that determines at least one of point-of-time information about start of each of the processes executed by the arithmetic processing unit or point-of-time information about end of each of the processes executed by the arithmetic processing unit; and an output unit (400) that calculates information about usage of the arithmetic processing unit based on the point-of-time information determined by the observation unit, and outputs the calculated information.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: June 27, 2023
    Assignee: FANUC CORPORATION
    Inventors: Wei Luo, Satoshi Ikai, Tsutomu Nakamura
  • Patent number: 11687364
    Abstract: An apparatus is configured to collect information related to a first activity and analyze the collected information to determine decision data. The information is stored in a first list of the source processing core for scheduling execution of the activity by a destination processing core to avoid cache misses. The source processing core is configured to transmit information related to the decision data using an interrupt, to a second list associated with a scheduler of the destination processing core, if the destination processing core is currently executing a second activity having a lower priority than the first activity.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: June 27, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Raju Udava Siddappa, Chandan Kumar, Kamal Kishore, Tushar Vrind, Venkata Raju Indukuri, Balaji Somu Kandasamy
  • Patent number: 11681546
    Abstract: Methods and apparatuses are provided for data processing. The method includes receiving a first data packet and a second data packet; associating first codes with the first data packet and second codes with the second data packet to generate a combined data packet after receiving the first data packet and the second data packet, wherein the first codes and the second codes specify processing to be performed on the combined data packet; generating the combined data packet comprising the first data packet and the second data packet in response to determining that the first data packet and the second data packet are correlated; and performing the processing to the combined data packet in accordance with the first codes or the second codes.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: June 20, 2023
    Assignee: Dongfang Jingyuan Electron Limited
    Inventors: Zhaoli Zhang, Weimin Ma, Naihong Tang
  • Patent number: 11677681
    Abstract: Systems and methods for allocating computing resources within a distributed computing system are disclosed. Computing resources such as CPUs, GPUs, network cards, and memory are allocated to jobs submitted to the system by a scheduler. System configuration and interconnectivity information is gathered by a mapper and used to create a graph. Resource allocation is optimized based on one or more quality of service (QoS) levels determined for the job. Job performance characterization, affinity models, computer resource power consumption, and policies may also be used to optimize the allocation of computing resources.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: June 13, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Max Alt, Paulo Roberto Pereira de Souza filho
  • Patent number: 11662961
    Abstract: Delivery form information on a print product is acquired, position information indicating a position of quality inspection on the print product is generated in accordance with the acquired delivery form information, and quality report data including the position information is generated.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: May 30, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yoshiji Kanamoto, Toshihiko Iida, Kimio Hayashi
  • Patent number: 11663012
    Abstract: Disclosed herein are systems and method for detecting coroutines. A method may include: identifying an application running on a computing device, wherein the application includes a plurality of coroutines; determining an address of a common entry point for coroutines, wherein the common entry point is found in a memory of the application; identifying, using an injected code, at least one stack trace entry for the common entry point; detecting coroutine context data based on the at least one stack trace entry; adding an identifier of a coroutine associated with the coroutine context data to a list of detected coroutines; and storing the list of detected coroutines in target process memory associated with the application.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: May 30, 2023
    Assignee: Cloud Linux Software Inc.
    Inventors: Igor Seletskiy, Pavel Boldin
  • Patent number: 11663044
    Abstract: The invention relates to an apparatus for second offloads in a graphics processing unit (GPU). The apparatus includes an engine; and a compute unit (CU). The engine is arranged operably to store an operation table including entries. The CU is arranged operably to fetch computation codes including execution codes, and synchronization requests; execute each execution code; and send requests to the engine in accordance with the synchronization requests for instructing the engine to allow components inside or outside of the GPU to complete operations in accordance with the entries of the operation table.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: May 30, 2023
    Assignee: Shanghai Biren Technology Co., Ltd
    Inventors: HaiChuan Wang, Song Zhao, GuoFang Jiao, ChengPing Luo, Zhou Hong
  • Patent number: 11664025
    Abstract: The present disclosure is generally directed to the generation of voice-activated data flows in interconnected network. The voice-activated data flows can include input audio signals that include a request and are detected at a client device. The client device can transmit the input audio signal to a data processing system, where the input audio signal can be parsed and passed to the data processing system of a service provider to fulfill the request in the input audio signal. The present solution is configured to conserve network resources by reducing the number of network transmissions needed to fulfill a request.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: May 30, 2023
    Assignee: GOOGLE LLC
    Inventors: Gaurav Bhaya, Ulas Kirazci, Bradley Abrams, Adam Coimbra, Ilya Firman, Carey Radebaugh
  • Patent number: 11657001
    Abstract: A management technology for mapping data of a non-volatile memory is shown. A controller establishes a first mapping table and a second mapping table. By looking up the first mapping table, the controller maps a first logical address issued by the host for data reading to a first block substitute. By looking up the second mapping table, the controller maps the first block substitute to a first physical block of the non-volatile memory. The first mapping table further records a first offset for the first logical address. According to the first offset recorded in the first mapping table, the first logical address is mapped to a first data management unit having the first offset in the first physical block represented by the first block substitute.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: May 23, 2023
    Assignee: SILICON MOTION, INC.
    Inventor: Sheng-Hsun Lin
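    The two-level lookup in this abstract can be sketched with dictionary-backed tables. The table names (`l2s`, `s2p`) and the sample addresses are invented stand-ins, assuming the first table yields both a block substitute and an offset, as the abstract states:

    ```python
    # Table 1: logical address -> (block substitute, offset within the block)
    # Table 2: block substitute -> physical block of the non-volatile memory

    l2s = {0x1000: ("sub_A", 3), 0x2000: ("sub_B", 0)}   # first mapping table
    s2p = {"sub_A": 17, "sub_B": 42}                      # second mapping table

    def resolve(logical_addr):
        """Map a logical address to (physical block, data-management-unit offset)."""
        substitute, offset = l2s[logical_addr]   # first lookup also yields the offset
        physical_block = s2p[substitute]         # second lookup
        return physical_block, offset
    ```

    Indirecting through the block substitute lets the controller remap a physical block (e.g., after wear-leveling) by updating only the second table.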
  • Patent number: 11654552
    Abstract: Provided are systems and methods for training a robot. The method commences with collecting, by the robot, sensor data from a plurality of sensors of the robot. The sensor data may be related to a task being performed by the robot based on an artificial intelligence (AI) model. The method may further include determining, based on the sensor data and the AI model, that a probability of completing the task is below a threshold. The method may continue with sending a request for operator assistance to a remote computing device and receiving, in response to sending the request, teleoperation data from the remote computing device. The method may further include causing the robot to execute the task based on the teleoperation data. The method may continue with generating training data based on the sensor data and results of execution of the task for updating the AI model.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: May 23, 2023
    Assignee: TruPhysics GmbH
    Inventor: Albert Groz
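    The assistance loop described above reduces to a confidence check. This is a minimal control-flow sketch; the threshold value, function names, and return shapes are assumptions, and the real system would involve an AI model and sensor data rather than a bare probability:

    ```python
    # If the predicted probability of completing the task falls below a
    # threshold, ask a remote operator; otherwise proceed autonomously.

    THRESHOLD = 0.8

    def attempt_task(success_probability, request_teleop):
        """Return (mode, result) where mode is 'teleop' or 'autonomous'."""
        if success_probability < THRESHOLD:
            teleop_data = request_teleop()        # operator assistance
            return "teleop", f"executed via {teleop_data}"
        return "autonomous", "executed via AI model"

    mode, result = attempt_task(0.4, request_teleop=lambda: "operator commands")
    ```

    Either branch would then feed sensor data and the execution result back as training data for updating the model.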
  • Patent number: 11656910
    Abstract: The disclosure provides a task segmentation device and method, a task processing device and method, and a multi-core processor. The task segmentation device includes a granularity task segmentation unit configured to segment a task at at least one granularity to form subtasks, and a task segmentation granularity selection unit configured to select the granularity to be adopted.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: May 23, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Tianshi Chen, Shengyuan Zhou, Shaoli Liu
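    A granularity-selection-plus-segmentation pipeline like the one this abstract describes can be sketched as follows. The selection heuristic here (largest granularity that still yields a minimum degree of parallelism) is an invented assumption, not the patent's actual rule:

    ```python
    # Pick a granularity, then split the task into contiguous subtask ranges.

    def select_granularity(task_size, granularities, min_parallelism=4):
        """Largest granularity that still produces min_parallelism subtasks."""
        for g in sorted(granularities, reverse=True):
            if task_size // g >= min_parallelism:
                return g
        return min(granularities)

    def segment(task_size, granularity):
        """Split [0, task_size) into half-open subtask ranges."""
        return [(start, min(start + granularity, task_size))
                for start in range(0, task_size, granularity)]

    g = select_granularity(1000, [100, 250, 500])
    subtasks = segment(1000, g)
    ```

    Separating selection from segmentation mirrors the two units in the claim: the policy can change without touching the splitting mechanism.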
  • Patent number: 11656675
    Abstract: A method of operating an application processor that includes a central processing unit (CPU) with at least one core and a memory interface. The method includes: measuring, during a first period, a core active cycle (the period in which the at least one core performs operations to execute instructions) and a core idle cycle (the period in which the at least one core is in an idle state); generating information about a memory access stall cycle, the portion of the core active cycle in which the at least one core accesses the memory interface; correcting the core active cycle using the memory access stall cycle information and calculating a load on the at least one core using the corrected core active cycle; and performing a DVFS operation on the at least one core using the calculated load.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: May 23, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Seok-Ju Yoon, Nak-Woo Sung, Seung-Chull Suh, Taek-Ki Kim, Jae-Joon Yoo, Eun-Ok Jo
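    The abstract does not give the correction formula, so the sketch below assumes the simplest one: subtract memory-stall cycles from active cycles before computing load. The frequency levels and the step policy are likewise invented for illustration:

    ```python
    # Correct the active cycle count for time spent stalled on memory, then
    # pick a frequency level from the corrected load.

    def core_load(active_cycles, idle_cycles, mem_stall_cycles):
        """Load over the period, excluding cycles stalled on memory access."""
        corrected_active = active_cycles - mem_stall_cycles
        return corrected_active / (active_cycles + idle_cycles)

    def dvfs_step(load, levels=(0.4, 1.0, 1.6, 2.0)):
        """Pick the lowest frequency level (GHz, hypothetical) covering the load."""
        for level in levels:
            if load <= level / max(levels):
                return level
        return max(levels)

    load = core_load(active_cycles=800, idle_cycles=200, mem_stall_cycles=300)
    # corrected load is 0.5 rather than the raw 0.8
    ```

    The motivation is that memory-stall cycles do not benefit from a higher core clock, so counting them as load would overscale the frequency.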
  • Patent number: 11658920
    Abstract: Embodiments are described for autonomously and dynamically allocating resources in a distributed network based on forecasted a-priori CPU resource utilization, rather than a manual throttle setting. A multivariate approach (CPU idle %, disk I/O, network, and memory), rather than a single-variable one, to Probabilistic Weighted Fuzzy Time Series (PWFTS) is used for forecasting compute resources. The dynamic throttling is combined with adaptive compute change-rate detection and correction. A single-spike detection and removal mechanism prevents the application of too-frequent throttling changes. Such a method can be implemented for several use cases including, but not limited to: cloud data migration, replication to a storage server, system upgrades, bandwidth throttling in storage networks, and garbage collection.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: May 23, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Rahul Deo Vishwakarma, Jayanth Kumar Reddy Perneti, Gopal Singh
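    The single-spike removal mentioned above can be sketched as a neighbor comparison: a lone sample that deviates sharply from both neighbors is replaced by their mean, so one outlier does not trigger a throttling change. The 2x deviation factor and the mean-replacement policy are assumptions, not the patent's stated mechanism:

    ```python
    # Replace any single-sample spike with the mean of its two neighbours.

    def remove_single_spikes(series, factor=2.0):
        cleaned = list(series)
        for i in range(1, len(series) - 1):
            prev, cur, nxt = series[i - 1], series[i], series[i + 1]
            baseline = (prev + nxt) / 2
            if baseline > 0 and cur > factor * baseline:
                cleaned[i] = baseline       # smooth the isolated spike
        return cleaned

    cleaned = remove_single_spikes([10, 11, 50, 12, 10])
    ```

    Sustained rises pass through untouched, because a genuine load increase raises the neighbors as well as the sample under test.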
  • Patent number: 11650846
    Abstract: The present disclosure relates to a method, device and computer program product for processing a job. In a method, a first group of tasks in a first portion of a job are obtained based on a job description of the job from a client. The first group of tasks are allocated to a first group of processing devices in a distributed processing system, respectively, so that the first group of processing devices generate a first group of task results of the first group of tasks, respectively, the first group of processing devices being located in a first processing system based on a cloud and a second processing system based on blockchain. The first group of task results of the first group of tasks are received from the first group of processing devices, respectively. A job result of the job is generated at least partly based on the first group of task results.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: May 16, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Pengfei Wu, YuHong Nie, Jinpeng Liu
  • Patent number: 11645124
    Abstract: The aim is to enable concurrent execution, by a plurality of cores, of a group of functions not in data conflict, and to execute function pairs in data conflict in a temporally separated manner. A process barrier 20 includes N−1 checker functions 22 and one limiter function 23, where N is the number of cores capable of concurrently executing the functions (N is an integer equal to or greater than 2). Each checker function 22 determines whether the head entry of a lock-free function queue LFQ1 is either a checker function 22 or the limiter function 23; if so, it repeats reading the head entry of the lock-free function queue LFQ1, and if not, it ends processing. The limiter function 23 is an empty function that ends without performing any processing.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: May 9, 2023
    Assignee: Hitachi Astemo, Ltd.
    Inventors: Masataka Nishi, Tomohito Ebina, Kazuyoshi Serizawa
  • Patent number: 11645125
    Abstract: A method and device are provided for executing a workflow that includes functions written in heterogeneous programming languages. The method for executing heterogeneous language functions includes obtaining a workflow that includes a call for a first function written in a first programming language and a call for a second function written in a second programming language, wherein input data of the second function includes output data of the first function, and setting, in response to completing execution of the first function, the output data of the first function in a format capable of being processed in the second programming language.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: May 9, 2023
    Assignee: SAMSUNG SDS CO., LTD.
    Inventor: Joon Hyung Lee
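    The cross-language hand-off this abstract describes can be sketched in one language by making the interchange step explicit. JSON as the interchange format and all function names are assumptions; the patent does not specify the format:

    ```python
    # The first function's output is re-encoded into a language-neutral format
    # before being fed to the second function.
    import json

    def step_one(n):                      # stands in for the first-language function
        return {"values": list(range(n))}

    def to_interchange(output):
        """Serialize step_one's output so a second-language runtime could parse it."""
        return json.dumps(output)

    def step_two(payload):                # stands in for the second-language function
        data = json.loads(payload)
        return sum(data["values"])

    result = step_two(to_interchange(step_one(5)))
    ```

    The conversion step is what lets the workflow engine chain functions without either function knowing the other's language.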
  • Patent number: 11645226
    Abstract: Embodiments are directed to a processor having a functional slice architecture. The processor is divided into tiles (or functional units) organized into a plurality of functional slices. The functional slices are configured to perform specific operations within the processor, including memory slices for storing operand data and arithmetic logic slices for performing operations on received operand data (e.g., vector processing, matrix manipulation). The processor includes a plurality of functional slices of a module type, each functional slice having a plurality of tiles. The processor further includes a plurality of data transport lanes for transporting data in a direction indicated in a corresponding instruction. The processor also includes a plurality of instruction queues, each instruction queue associated with a corresponding functional slice of the plurality of functional slices, wherein the instructions in the instruction queues comprise a functional slice specific operation code.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: May 9, 2023
    Assignee: Groq, Inc.
    Inventors: Dennis Charles Abts, Jonathan Alexander Ross, John Thompson, Gregory Michael Thorson
  • Patent number: 11642783
    Abstract: Methods, systems and computer program products for providing automated robotic process automation design generation are provided. Aspects include receiving a set of stored automated sub-processes. Aspects also include receiving a process description document that describes a desired computer automated task (CAT) process. Aspects also include identifying a plurality of candidate sub-processes based on the process description document. Aspects also include identifying at least one matching sub-process based on a comparison of each of the plurality of candidate sub-processes to the set of stored automated sub-processes. The at least one matching sub-process is a stored automated sub-process that exceeds a threshold level of similarity to a candidate sub-process. Aspects also include generating a recommendation to automatically populate a portion of a CAT design file for the desired CAT process based on the at least one matching sub-process.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Xue Han, Ya Bin Dang, Lijun Mei, Zi Ming Huang
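    The matching step above compares candidate sub-processes against stored ones and keeps pairs above a similarity threshold. The sketch below uses `difflib.SequenceMatcher` as a stand-in measure and invented sub-process names; the patent does not specify the similarity function or the threshold:

    ```python
    # Keep (candidate, stored) pairs whose textual similarity exceeds a threshold.
    from difflib import SequenceMatcher

    STORED = ["open invoice portal", "extract invoice fields", "send approval email"]
    THRESHOLD = 0.8

    def find_matches(candidates, stored=STORED, threshold=THRESHOLD):
        matches = []
        for cand in candidates:
            for sub in stored:
                if SequenceMatcher(None, cand, sub).ratio() > threshold:
                    matches.append((cand, sub))
        return matches

    matches = find_matches(["extract invoice field", "book flight"])
    ```

    Matched pairs would then seed the recommendation that auto-populates part of the CAT design file.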
  • Patent number: 11645213
    Abstract: A data processing system includes a memory system including a memory device storing data and a controller performing a data program operation or a data read operation with the memory device, and a host suitable for requesting the data program operation or the data read operation from the memory system. The controller can perform a serial communication to control a memory which is arranged outside the memory system and engaged with the host.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: May 9, 2023
    Assignee: SK hynix Inc.
    Inventor: Jong-Min Lee
  • Patent number: 11640320
    Abstract: Embodiments identify heap-hoarding stack traces to optimize memory efficiency. Some embodiments can determine a length of time when heap usage by processes exceeds a threshold. Some embodiments may then determine heap information of the processes for the length of time, where the heap information comprises heap usage information for each interval in the length of time. Next, some embodiments can determine thread information of the one or more processes for the length of time, wherein determining the thread information comprises determining classes of threads and wherein the thread information comprises, for each of the classes of threads, thread intensity information for each of the intervals. Some embodiments may then correlate the heap information with the thread information to identify code that corresponds to the heap usage exceeding the threshold. Some embodiments may then initiate actions associated with the code.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: May 2, 2023
    Assignee: Oracle International Corporation
    Inventor: Eric S. Chan
  • Patent number: 11640647
    Abstract: The present disclosure relates to methods and devices for graphics processing including an apparatus, e.g., a GPU. The apparatus may determine whether to divide a group of threads into a plurality of sub-groups of threads, each thread of the group of threads being associated with a shader program. The apparatus may also divide, upon determining to divide the group of threads into the plurality of sub-groups of threads, the group of threads into the plurality of sub-groups of threads. Additionally, the apparatus may execute, upon dividing the group of threads into the plurality of sub-groups of threads, a subsection of the shader program for each sub-group of threads of the plurality of sub-groups of threads.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: May 2, 2023
    Assignee: QUALCOMM Incorporated
    Inventor: Andrew Evan Gruber
  • Patent number: 11625592
    Abstract: Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile time. Threads and their thread dependency counts are distributed to each core at run time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies, without requiring a centralized scheduler.
    Type: Grant
    Filed: July 5, 2021
    Date of Patent: April 11, 2023
    Assignee: Femtosense, Inc.
    Inventors: Sam Brian Fok, Alexander Smith Neckar
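    The dependency-count dispatch described above can be sketched in a few lines: each thread carries a compile-time count of unmet dependencies, a core runs any thread whose count has reached zero, and finishing a thread decrements the counts of its dependents. The data shapes and names here are invented:

    ```python
    # Decentralized scheduling via per-thread dependency counts.

    def run_ready_threads(dep_count, dependents):
        """dep_count: thread -> unmet deps; dependents: thread -> threads it unblocks."""
        order = []
        ready = [t for t, c in dep_count.items() if c == 0]
        while ready:
            t = ready.pop()
            order.append(t)                 # "execute" the thread
            for d in dependents.get(t, []):
                dep_count[d] -= 1           # fulfil one dependency
                if dep_count[d] == 0:
                    ready.append(d)         # now eligible on any core
        return order

    order = run_ready_threads({"a": 0, "b": 1, "c": 2},
                              {"a": ["b", "c"], "b": ["c"]})
    ```

    Because eligibility is a local counter check, no core ever needs to consult a global ordering, which is the decoupling the abstract claims.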
  • Patent number: 11620158
    Abstract: A master-slave scheduling system, comprising (a) a master DRL unit comprising: (i) a queue containing a plurality of item-representations; (ii) a master policy module configured to select a single item-representation from the queue and submit it to the slave unit; and (iii) a master DRL agent configured to (a) train the master policy module and (b) receive an updated item-representation from the slave unit and update the queue; and (b) a slave DRL unit comprising: (i) a slave policy module that receives a single item-representation, selects a single task entry, and submits it to a slave environment for performance; and (ii) a slave DRL agent configured to: (a) train the slave policy module; (b) receive an item-representation from the master DRL unit and submit it to the slave policy module; and (c) receive an updated item-representation from the slave's environment and submit it to the master DRL unit.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: April 4, 2023
    Assignee: B.G. NEGEV TECHNOLOGIES & APPLICATIONS LTD. AT BEN-GURION UNIVERSITY
    Inventors: Gilad Katz, Asaf Shabtai, Yoni Birman, Ziv Ido
  • Patent number: 11600124
    Abstract: In one aspect, a server computer is configured to facilitate operation of a movable barrier operator using voice commands. The server computer includes a processor configured to receive a request for a first rolling voice identifier from a communication apparatus in response to receipt of a voice command of a user requesting a state change of a movable barrier. The processor is configured to send the first rolling voice identifier to the communication apparatus causing the communication apparatus to provide a first physical stimulus to the user based at least in part on the first rolling voice identifier and user account information. The processor is configured to receive a communication indicative of a user voice response to the first physical stimulus and determine whether to instruct the movable barrier operator to change a state of the movable barrier based on the user voice response to the first physical stimulus.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: March 7, 2023
    Assignee: The Chamberlain Group LLC
    Inventors: Casparus Cate, James J. Fitzgibbon, Martin B. Heckmann, James J. Johnson, David R. Morris, Cory Jon Sorice
  • Patent number: 11592861
    Abstract: A semiconductor device includes an intellectual property (IP) block, a clock management unit, a critical path monitor (CPM), and a CPM clock manager included in the clock management unit. The clock management unit is configured to receive a clock request signal, indicating whether the IP block requires a clock signal, from the IP block and perform clock gating for the IP block based on the received clock request signal. The CPM is configured to monitor the clock signal provided to the IP block to adjust at least one of a frequency of the clock signal provided to the IP block and a voltage supplied to the IP block. The CPM clock manager is configured to perform the clock gating for the CPM.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: February 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae Gon Lee, Jae Young Lee, Se Hun Kim
  • Patent number: 11593163
    Abstract: The embodiments disclosed herein relate to using machine learning to allocate a number of concurrent processes for minimizing the completion time for executing a task having multiple subtasks. Historical data comprising a variety of subtask types with actual completion times is mined to create a set of statistical models for predicting completion time for a type of subtask. To minimize the total time to complete execution of a new task, a certain number of threads is allocated to execute subtasks of the new task. The certain number of threads is determined based on the predicted completion time for the subtasks using the respective statistical model. Threads are assigned to subtasks based on the predicted completion time for the subtasks, and the subtasks assigned to each thread are scheduled for execution.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: February 28, 2023
    Assignee: Oracle International Corporation
    Inventors: Subramanian Chittoor Venkataraman, Balender Kumar, Sai Krishna Sujith Alamuri, Murali Krishna Redrowthu, Srividya Bhavani Sivaraman
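    The final scheduling step above (assigning subtasks to threads based on predicted completion times) can be sketched with a greedy longest-processing-time-first policy: sort subtasks by predicted time and give each to the currently least-loaded thread. The patent's exact policy may differ, and all names are invented:

    ```python
    # Greedy LPT assignment of subtasks to a fixed number of threads.
    import heapq

    def assign(predicted_times, num_threads):
        """Return per-thread subtask lists, approximately minimizing the makespan."""
        heap = [(0.0, i) for i in range(num_threads)]   # (current load, thread id)
        heapq.heapify(heap)
        plan = [[] for _ in range(num_threads)]
        for subtask, t in sorted(predicted_times.items(), key=lambda kv: -kv[1]):
            load, tid = heapq.heappop(heap)             # least-loaded thread
            plan[tid].append(subtask)
            heapq.heappush(heap, (load + t, tid))
        return plan

    plan = assign({"s1": 4.0, "s2": 3.0, "s3": 2.0, "s4": 2.0}, num_threads=2)
    ```

    In the patented system the per-subtask times would come from the statistical models mined from historical data rather than being given directly.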