Process Scheduling Patents (Class 718/102)
  • Patent number: 12164952
    Abstract: An apparatus to facilitate barrier state save and restore for preemption in a graphics environment is disclosed. The apparatus includes processing resources to execute a plurality of execution threads that are comprised in a thread group (TG) and mid-thread preemption barrier save and restore hardware circuitry to: initiate an exception handling routine in response to a mid-thread preemption event, the exception handling routine to cause a barrier signaling event to be issued; receive indication of a valid designated thread status for a thread of the TG in response to the barrier signaling event; and in response to receiving the indication of the valid designated thread status for the thread of the TG, cause, by the thread of the TG having the valid designated thread status, a barrier save routine and a barrier restore routine to be initiated for named barriers of the TG.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: December 10, 2024
    Assignee: INTEL CORPORATION
    Inventors: Vasanth Ranganathan, James Valerio, Joydeep Ray, Abhishek R. Appu, Alan Curtis, Prathamesh Raghunath Shinde, Brandon Fliflet, Ben J. Ashbaugh, John Wiegert
  • Patent number: 12158812
    Abstract: An example system can include: at least one processor; and non-transitory computer-readable storage storing instructions that, when executed by the at least one processor, cause the system to: generate an ingestion manager programmed to ingest data associated with a job; and generate a logging manager programmed to capture metadata associated with the job; wherein the ingestion manager is programmed to automatically retry the job based upon the metadata captured by the logging manager.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: December 3, 2024
    Assignee: Wells Fargo Bank, N.A.
    Inventors: Jashua Thejas Arul Dhas, Ganesh Kumar, Marimuthu Muthan, Aditya Kulkarni, Sai Raghavendra Neralla, Anshul Chauhan
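The ingestion-manager/logging-manager interplay described above can be sketched in a few lines. This is a hypothetical reading, not the patented implementation: the class names, the `attempts`/`last_error` metadata fields, and the retry cap are all our own stand-ins for the "metadata captured by the logging manager" that drives the automatic retry.

```python
class LoggingManager:
    """Captures per-job metadata (illustrative fields)."""

    def __init__(self):
        self.metadata = {}  # job_id -> {"attempts": int, "last_error": str | None}

    def record(self, job_id, error=None):
        entry = self.metadata.setdefault(job_id, {"attempts": 0, "last_error": None})
        entry["attempts"] += 1  # counts every run, successful or not
        entry["last_error"] = error


class IngestionManager:
    """Ingests a job and retries it based on the logged metadata."""

    MAX_RETRIES = 3  # illustrative threshold

    def __init__(self, logger):
        self.logger = logger

    def ingest(self, job_id, job_fn):
        while True:
            try:
                result = job_fn()
                self.logger.record(job_id)  # successful run, no error
                return result
            except Exception as exc:
                self.logger.record(job_id, error=str(exc))
                if self.logger.metadata[job_id]["attempts"] >= self.MAX_RETRIES:
                    raise  # give up once the metadata shows too many attempts
```

The key point the abstract makes is the direction of the dependency: the retry decision lives in the ingestion manager but is driven entirely by metadata the logging manager captured.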
  • Patent number: 12153530
    Abstract: A data processing system includes a memory system including a memory device storing data and a controller performing a data program operation or a data read operation with the memory device, and a host suitable for requesting the data program operation or the data read operation from the memory system. The controller can perform a serial communication to control a memory which is arranged outside the memory system and engaged with the host.
    Type: Grant
    Filed: April 11, 2023
    Date of Patent: November 26, 2024
    Assignee: SK hynix Inc.
    Inventor: Jong-Min Lee
  • Patent number: 12153959
    Abstract: A method for detecting a traffic ramp-up rule violation includes receiving data element retrieval requests from an information retrieval system and determining a requests per second (RPS) for a key range. The method also includes determining a moving average of RPS for the key range. The method also includes determining a number of delta violations, each delta violation comprising a respective beginning instance in time when the RPS exceeded a delta RPS limit. For each delta violation, the method includes determining a maximum conforming load for the key range and determining whether the RPS exceeded the maximum conforming load for the key range based on the beginning instance in time of the respective delta violation. When the RPS has exceeded the maximum conforming load, the method includes determining that the delta violation corresponds to a full-history violation indicative of a degradation of performance of the information retrieval system.
    Type: Grant
    Filed: October 25, 2022
    Date of Patent: November 26, 2024
    Assignee: Google LLC
    Inventors: Arash Parsa, Joshua Melcon, David Gay, Ryan Huebsch
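The moving-average/delta-violation detection can be illustrated with a toy detector. This is a simplified reading of the abstract, not Google's algorithm: the window size, the interpretation of the delta limit as an offset above the moving average, and the function name are all assumptions.

```python
from collections import deque


def detect_delta_violations(rps_series, window, delta_limit):
    """Flag the beginning instances in time where RPS for a key range
    exceeds its moving average by more than delta_limit.

    Simplified sketch: one sample per second, a fixed-size moving
    average window, and a constant delta limit."""
    history = deque(maxlen=window)
    violations = []
    for t, rps in enumerate(rps_series):
        if history and rps > (sum(history) / len(history)) + delta_limit:
            violations.append(t)  # beginning instance in time of the violation
        history.append(rps)
    return violations
```

A steady 100 RPS stream that suddenly spikes to 500 RPS trips the detector at the spike, which is the "beginning instance in time" the abstract then checks against the maximum conforming load.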
  • Patent number: 12153915
    Abstract: A method performed by a processing system including at least one processor includes applying a contextual filter to mask a portion of at least one of: an input of a software application, an output of the software application, or an underlying dataset of the software application, where the contextual filter simulates a limitation of a user of the software application, executing the software application with the contextual filter applied to the at least one of: the input of the software application, the output of the software application, or the underlying dataset of the software application, collecting ambient data during the executing, and recommending, based on a result of the executing, a modification to the software application to improve at least one of: an accessibility of the software application or an inclusion of the software application.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: November 26, 2024
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Yaron Kanza, Balachander Krishnamurthy, Divesh Srivastava
  • Patent number: 12135984
    Abstract: The exemplary embodiments may provide an application management method and apparatus, and a device, to unfreeze some processes in an application. The method includes: obtaining an unfreezing event, where the unfreezing event includes process information, and the unfreezing event is used to trigger an unfreezing operation to be performed on some processes in a frozen application; and performing an unfreezing operation on those processes based on the process information.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: November 5, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Huifeng Hu, Xiaojun Duan
  • Patent number: 12135731
    Abstract: In some implementations, a monitoring device may obtain information related to one or more extract, transform, and load (ETL) jobs scheduled in an ETL system. The monitoring device may generate ETL job metrics that include status information, timing information, and data volume information associated with one or more constituent tasks associated with the one or more ETL jobs, wherein the ETL job metrics include metrics related to extracting data records from a data source, transforming the data records into a target format, and/or loading the data records in the target format into a data sink. The monitoring device may enable capabilities to create or interact with one or more dashboards to visualize the ETL job metrics via a workspace accessible to one or more client devices. The monitoring device may invoke a messaging service to publish one or more notifications associated with the ETL job metrics via the workspace.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: November 5, 2024
    Assignee: Capital One Services, LLC
    Inventors: Alex Makumbi, Andrew Stevens
  • Patent number: 12117882
    Abstract: A system having: a processor, wherein the processor is configured for executing a process of reducing power consumption that includes executing a first task over a first plurality of timeslots and a second task over a second plurality of timeslots, and wherein the processor is configured to: execute a real-time operating system (RTOS) process; determine that the first task is complete during a first timeslot of the first plurality of timeslots; and enter a low power mode for a remainder of the first timeslot upon determining that there is enough time to enter a low power mode during the first timeslot and a next timeslot is allocated to the first task, otherwise perform a dead-wait for the remainder of the first timeslot.
    Type: Grant
    Filed: March 15, 2023
    Date of Patent: October 15, 2024
    Assignee: HAMILTON SUNDSTRAND CORPORATION
    Inventor: Balaji Krishnakumar
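The end-of-task decision the abstract describes — enter low-power mode for the remainder of the timeslot, or dead-wait — reduces to a small predicate. A minimal sketch, assuming a simple model where entering low-power mode is worthwhile only if the remaining slot time covers the mode's entry/exit overhead; the parameter names are ours:

```python
def end_of_task_action(time_left_in_slot, low_power_overhead, next_slot_same_task):
    """Decide what to do when a task finishes early in its timeslot.

    Enter low-power mode only when (a) there is enough time left in the
    slot to amortize the low-power entry/exit overhead and (b) the next
    timeslot is allocated to the same task; otherwise dead-wait."""
    if time_left_in_slot >= low_power_overhead and next_slot_same_task:
        return "low_power"
    return "dead_wait"
```

Condition (b) mirrors the abstract's requirement that the next timeslot be allocated to the first task, which avoids waking up into a context switch.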
  • Patent number: 12118057
    Abstract: A computing device, including a hardware accelerator configured to receive a first matrix and receive a second matrix. The hardware accelerator may, for a plurality of partial matrix regions, in a first iteration, read a first submatrix of the first matrix and a second submatrix of the second matrix into a front-end processing area. The hardware accelerator may multiply the first submatrix by the second submatrix to compute a first intermediate partial matrix. In each of one or more subsequent iterations, the hardware accelerator may read an additional submatrix into the front-end processing area. The hardware accelerator may compute an additional intermediate partial matrix as a product of the additional submatrix and a submatrix reused from an immediately prior iteration. The hardware accelerator may compute each partial matrix as a sum of two or more of the intermediate partial matrices and may output the plurality of partial matrices.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: October 15, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Derek Edward Davout Gladding, Nitin Naresh Garegrat, Timothy Hume Heil, Balamurugan Kulanthivelu Veluchamy
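The accelerator's scheme — output tiles accumulated from submatrix products, with an operand submatrix reused across iterations — is the classic blocked matrix multiply. A software sketch of the same dataflow (the tile size and loop order are illustrative; the patent's hardware reuse pattern is only loosely mirrored by holding one operand tile across the inner loops):

```python
def tiled_matmul(A, B, tile):
    """Blocked matrix multiply over lists of lists.

    Each (i0, j0) output tile is the sum of intermediate partial
    products of A and B submatrices, accumulated over k0 tiles."""
    n, m, k_dim = len(A), len(B[0]), len(B)
    C = [[0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k_dim, tile):  # accumulate intermediate partials
                for i in range(i0, min(i0 + tile, n)):
                    for k in range(k0, min(k0 + tile, k_dim)):
                        a = A[i][k]  # operand element reused across the j loop
                        for j in range(j0, min(j0 + tile, m)):
                            C[i][j] += a * B[k][j]
    return C
```

In hardware the point of the reuse is bandwidth: only one new submatrix is read into the front-end area per iteration instead of two.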
  • Patent number: 12111674
    Abstract: An operating method of a system-on-chip (SoC) which includes a processor including a first core and a dynamic voltage and frequency scaling (DVFS) module and a clock management unit (CMU) for supplying an operating clock to the first core, the operating method including: obtaining a required performance of the first core; finding available frequencies meeting the required performance; obtaining information for calculating energy consumption for each of the available frequencies; calculating the energy consumption for each of the available frequencies, based on the information; determining a frequency, which causes minimum energy consumption, from among the available frequencies as an optimal frequency; and adjusting an operating frequency to be supplied to the first core to the optimal frequency.
    Type: Grant
    Filed: April 14, 2022
    Date of Patent: October 8, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Choonghoon Park, Jong-Lae Park, Bumgyu Park, Youngtae Lee, Donghee Han
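The DVFS selection loop in the abstract — filter available frequencies by required performance, compute energy per candidate, pick the minimum — can be sketched directly. The energy model E = P(f) · cycles / f and the representation of "required performance" as a minimum frequency are our simplifications, not the patent's:

```python
def pick_optimal_frequency(required_perf_hz, freq_power_table, workload_cycles):
    """Return the frequency with minimum energy among those meeting
    the required performance.

    freq_power_table maps frequency (Hz) -> active power (W).
    Energy for a fixed-cycle workload: E = P * cycles / f (simplified)."""
    candidates = {f: p for f, p in freq_power_table.items() if f >= required_perf_hz}
    if not candidates:
        raise ValueError("no available frequency meets the required performance")
    return min(candidates, key=lambda f: candidates[f] * workload_cycles / f)
```

Note the non-obvious outcome this captures: the lowest frequency is not always the most energy-efficient, because a faster frequency finishes the fixed workload sooner and can win on total energy.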
  • Patent number: 12112156
    Abstract: A software update system according to one embodiment of the present disclosure is configured to update software used in a vehicle based on update data of the software, the update data being transmitted to the vehicle from an external device that is communicably connected to the vehicle. The software update system includes: a software update unit configured to update the software based on the update data; a vehicle data acquisition unit configured to acquire respective pieces of second vehicle data about states of the vehicle before and after the software update by the update unit; and an effect evaluation unit configured to evaluate an effect of the software update based on the respective pieces of second vehicle data before and after the software update.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: October 8, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Masafumi Yamamoto, Atsushi Tabata, Koichi Okuda, Yuki Makino
  • Patent number: 12105607
    Abstract: Techniques are described for a data recovery validation test. In examples, a processor receives a command to be included in the validation test that is configured to validate performance of an activity by a server prior to a failure to perform the activity by the server. The processor stores the validation test including the command on a memory device, and prior to the failure of the activity by the server, executes the validation test including the command responsive to an input. The processor receives results of the validation test corresponding to the command and indicating whether the server performed the activity in accordance with a standard for the activity during the validation test. The processor provides the results of the validation test in a user interface.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: October 1, 2024
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Victoria Michelle Passmore, Cesar Bryan Acosta, Christopher Chickoree, Mason Davenport, Ashish Desai, Sudha Kalyanasundaram, Christopher R. Lay, Emre Ozgener, Steven Stiles, Andrew Warner
  • Patent number: 12106152
    Abstract: A cloud service system and an operation method thereof are provided. The cloud service system includes a first computing resource pool, a second computing resource pool, and a task dispatch server. Each computing platform in the first computing resource pool does not have a co-processor. Each computing platform in the second computing resource pool has at least one co-processor. The task dispatch server is configured to receive a plurality of tasks. The task dispatch server checks a task attribute of a task to be dispatched currently among the tasks. The task dispatch server chooses to dispatch the task to be dispatched currently to the first computing resource pool or to the second computing resource pool for execution according to the task attribute.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: October 1, 2024
    Assignee: Shanghai Biren Technology Co., Ltd
    Inventor: Xin Wang
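The dispatch rule is attribute-driven routing between the two pools. A minimal sketch; the pool names and the `needs_coprocessor` attribute are our stand-ins for whatever task attribute the dispatch server actually inspects:

```python
CPU_POOL = "first_pool"      # platforms without a co-processor
COPROC_POOL = "second_pool"  # platforms with at least one co-processor


def dispatch(task):
    """Route a task by its attribute: co-processor-friendly tasks go
    to the second pool, everything else to the plain-CPU first pool."""
    return COPROC_POOL if task.get("needs_coprocessor") else CPU_POOL
```

In practice the checked attribute might encode kernel type, framework, or data size; the abstract leaves it open, so the predicate here is deliberately trivial.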
  • Patent number: 12099453
    Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: September 24, 2024
    Assignee: NVIDIA Corporation
    Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
  • Patent number: 12099869
    Abstract: A scheduler, a method of operating the scheduler, and an electronic device including the scheduler are disclosed. The method of operating the scheduler configured to determine a model to be executed in an accelerator includes receiving one or more requests for execution of a plurality of models to be independently executed in the accelerator, and performing layer-wise scheduling on the models based on an idle time occurring when a candidate layer which is a target for the scheduling in each of the models is executed in the accelerator.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: September 24, 2024
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
    Inventors: Seung Wook Lee, Younghwan Oh, Jaewook Lee, Sam Son, Yunho Jin, Taejun Ham
  • Patent number: 12099863
    Abstract: Aspects include providing isolation between a plurality of containers in a pod that are each executing on a different virtual machine (VM) on a host computer. Providing the isolation includes converting a data packet into a serial format for communicating with the host computer. The converted data packet is sent to a router executing on the host computer. The router determines a destination container in the plurality of containers based at least in part on content of the converted data packet and routes the converted data packet to the destination container.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: September 24, 2024
    Assignee: International Business Machines Corporation
    Inventors: Qi Feng Huo, Wen Yi Gao, Si Bo Niu, Sen Wang
  • Patent number: 12099841
    Abstract: An embodiment of an apparatus comprises decode circuitry to decode a single instruction, the single instruction to include a field for an identifier of a first source operand, a field for an identifier of a destination operand, and a field for an opcode, the opcode to indicate execution circuitry is to program a user timer, and execution circuitry to execute the decoded instruction according to the opcode to retrieve timer program information from a location indicated by the first source operand, and program a user timer indicated by the destination operand based on the retrieved timer program information. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: September 24, 2024
    Assignee: Intel Corporation
    Inventors: Rajesh Sankaran, Gilbert Neiger, Vedvyas Shanbhogue, David Koufaty
  • Patent number: 12093721
    Abstract: Provided are a method for processing data, an electronic device and a storage medium, which relate to the field of deep learning and data processing. The method may include: multiple target operators of a target model are acquired; the multiple target operators are divided into at least one operator group, according to an operation sequence of each of the multiple target operators in the target model, wherein at least one target operator in each of the at least one operator group is operated by the same processor and is operated within the same target operation period; and the at least one operator group is output.
    Type: Grant
    Filed: September 12, 2022
    Date of Patent: September 17, 2024
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Tianfei Wang, Buhe Han, Zhen Chen, Lei Wang
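The grouping step the abstract describes — walk the operators in operation order and merge consecutive ones that share a processor and operation period — is a run-length grouping. A sketch under assumed field names (`seq`, `processor`, `period`, `name` are ours):

```python
def group_operators(ops):
    """Partition a model's operators, in operation-sequence order, into
    groups whose members run on the same processor within the same
    target operation period."""
    groups = []
    for op in sorted(ops, key=lambda o: o["seq"]):
        key = (op["processor"], op["period"])
        if groups and groups[-1][0] == key:
            groups[-1][1].append(op["name"])  # extend the current group
        else:
            groups.append([key, [op["name"]]])  # start a new group
    return [names for _, names in groups]
```

The output — the operator groups themselves — is exactly what the method emits in its final step.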
  • Patent number: 12073247
    Abstract: A method for scheduling tasks includes receiving input that was acquired using one or more data collection devices, and scheduling one or more input tasks on one or more computing resources of a network, predicting one or more first tasks based in part on the input, assigning one or more placeholder tasks for the one or more predicted first tasks to the one or more computing resources based in part on a topology of the network, receiving one or more updates including an attribute of the one or more first tasks to be executed as input tasks are executed, modifying the one or more placeholder tasks based on the attribute of the one or more first tasks to be executed, and scheduling the one or more first tasks on the one or more computing resources by matching the one or more first tasks to the one or more placeholder tasks.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: August 27, 2024
    Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
    Inventor: Marvin Decker
  • Patent number: 12066795
    Abstract: An input device includes a movable input surface protruding from an electronic device. The input device enables force inputs along three axes relative to the electronic device: first lateral movements, second lateral movements, and axial movements. The input device includes force or displacement sensors which can detect a direction and magnitude of input forces.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: August 20, 2024
    Assignee: Apple Inc.
    Inventors: Colin M. Ely, Erik G. de Jong, Steven P. Cardinali
  • Patent number: 12068935
    Abstract: There is provided an apparatus comprising: at least one processor; and at least one memory comprising computer code that, when executed by the at least one processor, causes the apparatus to: identify a potential problem in a network comprising at least one network automation function; signal an indication of said potential problem to at least one network automation function of said network and a request for a proposal to address said problem; receive at least one proposal in response to said signalling; determine policy changes for addressing said potential problem in dependence on said at least one proposal; and implement said policy changes.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: August 20, 2024
    Assignee: NOKIA SOLUTIONS AND NETWORKS OY
    Inventors: Stephen Mwanje, Darshan Ramesh
  • Patent number: 12061550
    Abstract: An apparatus is described. The apparatus includes a mass storage device processor that is to behave as an additional general purpose processing core of a computing system that a mass storage device having the mass storage device processor is to be coupled to, wherein, the mass storage device processor is to execute out of a component of main memory within the mass storage device.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: August 13, 2024
    Assignee: Intel Corporation
    Inventors: Frank T. Hady, Sanjeev N. Trika
  • Patent number: 12061932
    Abstract: An apparatus in an illustrative embodiment comprises at least one processing device that includes a processor coupled to a memory. The at least one processing device is configured to establish with a coordination service for one or more distributed applications a participant identifier for a given participant in a multi-leader election algorithm implemented in a distributed computing system comprising multiple compute nodes, the compute nodes corresponding to participants having respective participant identifiers, and to interact with the coordination service in performing an iteration of the multi-leader election algorithm to determine a current assignment of respective ones of the participants as leaders for respective processing tasks of the distributed computing system. In some embodiments, the at least one processing device comprises at least a portion of a particular one of the compute nodes of the distributed computing system, and the coordination service comprises one or more external servers.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: August 13, 2024
    Assignee: Dell Products L.P.
    Inventors: Pan Xiao, Xuhui Yang
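One deterministic way to realize the multi-leader assignment the abstract describes — each processing task gets a leader drawn from the registered participants — is rendezvous (highest-random-weight) hashing. This is our illustrative choice, not the patented election algorithm, and it omits the coordination service entirely:

```python
import hashlib


def assign_leaders(participant_ids, tasks):
    """Deterministically map each processing task to a leader among the
    participants: for each task, every participant gets a hash score
    and the highest scorer wins (rendezvous hashing)."""
    assignment = {}
    for task in tasks:
        assignment[task] = max(
            participant_ids,
            key=lambda p: hashlib.sha256(f"{task}:{p}".encode()).hexdigest(),
        )
    return assignment
```

Because every node computes the same assignment from the same participant list, agreement reduces to agreeing on the membership — which is where a coordination service like the one in the abstract comes in.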
  • Patent number: 12045659
    Abstract: An algorithm for efficiently maintaining a globally uniform-in-time execution schedule for a dynamically changing set of periodic workload instances is provided. At a high level, the algorithm operates by gradually adjusting execution start times in the schedule until they converge to a globally uniform state. In certain embodiments, the algorithm exhibits the property of “quick convergence,” which means that regardless of the number of periodic workload instances added or removed, the execution start times for all workload instances in the schedule will typically converge to a globally uniform state within a single cycle length from the time of the addition/removal event(s) (subject to a tunable “aggressiveness” parameter).
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: July 23, 2024
    Assignee: VMware LLC
    Inventors: Danail Metodiev Grigorov, Nikolay Kolev Georgiev
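The gradual-adjustment idea — nudge each execution start time until the schedule converges to uniform spacing over the cycle — can be imitated with a relaxation toward the midpoint of each start time's circular neighbors. This update rule and the parameter names are our toy analogue, not VMware's algorithm; only the "aggressiveness" knob is borrowed from the abstract:

```python
def relax_schedule(start_times, cycle_length, aggressiveness=0.5, rounds=50):
    """Iteratively move each start time toward the midpoint of its two
    neighbors on the cycle, so the gaps diffuse toward uniformity."""
    times = sorted(start_times)
    n = len(times)
    for _ in range(rounds):
        new = []
        for i, t in enumerate(times):
            prev_t = times[i - 1] - (cycle_length if i == 0 else 0)
            next_t = times[(i + 1) % n] + (cycle_length if i == n - 1 else 0)
            midpoint = (prev_t + next_t) / 2
            new.append((t + aggressiveness * (midpoint - t)) % cycle_length)
        times = sorted(new)
    return times
```

The update is a diffusion on the ring of gaps, so a clustered schedule (e.g. four workloads bunched at the start of a 100 s cycle) spreads out to near-equal gaps; the aggressiveness parameter trades convergence speed against how much each start time moves per round.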
  • Patent number: 12039366
    Abstract: This application discloses a task processing method, a system, a device, and a storage medium. The method includes: receiving a task published by a task publisher device and an electronic resource allocated for execution of the task; transmitting the task and the electronic resource to a blockchain network, to enable the blockchain network to construct a smart contract corresponding to the task and the electronic resource; transmitting the task to a task invitee device, to enable the task invitee device to execute the task; receiving an execution result corresponding to the task transmitted by a task invitee device after the task invitee device executes the task; and transmitting the execution result to the blockchain network, to enable the blockchain network to perform verification on the execution result according to the smart contract, and transfer the electronic resource to the task invitee device according to a verification result.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: July 16, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jingyu Yang, Maogang Ma, Guize Liu, Jinsong Ma
  • Patent number: 12032883
    Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes accessing a plurality of target tasks for a computing system, the computing system comprising a plurality of resources, wherein the plurality of resources comprises a first server and a second server, accessing a plurality of configurations of the computing system, wherein each of the plurality of configurations identifies one or more resources of the plurality of resources to perform the respective target task of the plurality of target tasks, and performing, for each of the plurality of configurations, a simulation to determine a plurality of performance metrics, wherein each of the plurality of performance metrics predicts performance of at least one of the plurality of resources executing the plurality of target tasks on the computing system.
    Type: Grant
    Filed: June 13, 2023
    Date of Patent: July 9, 2024
    Assignee: Parallels International GmbH
    Inventors: Vasileios Koutsomanis, Igor Marnat, Nikolay Dobrovolskiy
  • Patent number: 12026518
    Abstract: An apparatus for parallel processing includes a memory and one or more processors, at least one of which operates a single instruction, multiple data (SIMD) model, and each of which are coupled to the memory. The processors are configured to process data samples associated with one or multiple chains or graphs of data processors, which chains or graphs describe processing steps to be executed repeatedly on data samples that are a subset of temporally ordered samples. The processors are additionally configured to dynamically schedule one or multiple sets of the samples associated with the one or multiple chains or graphs of data processors to reduce latency of processing of the data samples associated with a single chain or graph of data processors or different chains and graphs of data processors.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: July 2, 2024
    Assignee: BRAINGINES SA
    Inventors: Markus Steinberger, Alexander Talashov, Aleksandrs Procopcuks, Vasilii Sumatokhin
  • Patent number: 12026383
    Abstract: An aspect of the invention relates to a method of managing jobs in an information system (SI) on which a plurality of jobs run, the information system (SI) comprising a plurality of computer nodes (NDi) and at least a first storage tier (NS1) associated with a first performance tier and a second storage tier (NS2) associated with a second performance tier lower than the first performance tier, each job being associated with a priority level determined from a set of parameters comprising the node or nodes (NDi) on which the job is to be executed, the method comprising a step of scheduling the jobs as a function of the priority level associated with each job; the set of parameters used for determining the priority level also comprising a first parameter relating to the storage tier to be used for the data necessary for the execution of the job in question and a second parameter relating to the position of the data necessary for the execution of the job (TAi) in question.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: July 2, 2024
    Assignee: BULL SAS
    Inventor: Jean-Olivier Gerphagnon
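The priority function the abstract describes combines node placement, the storage tier holding the job's data, and the data's position into one score that drives scheduling order. A sketch under assumed field names and a simple weighted sum (the weights, tier scores, and locality encoding are all illustrative):

```python
def job_priority(job, node_weight, tier_weight, locality_weight):
    """Score a job from the three parameter families the method uses:
    node placement, storage tier of its data, and data locality."""
    tier_score = {1: 1.0, 2: 0.5}[job["storage_tier"]]  # tier 1 = faster tier
    locality = 1.0 if job["data_on_target_node"] else 0.0
    return (node_weight * job["node_score"]
            + tier_weight * tier_score
            + locality_weight * locality)


def schedule(jobs, **weights):
    """Order jobs by descending priority, as the scheduling step does."""
    return sorted(jobs, key=lambda j: job_priority(j, **weights), reverse=True)
```

A job whose data already sits on the fast tier and on its target node outranks an otherwise identical job whose data must be fetched from the slow tier.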
  • Patent number: 12028269
    Abstract: There are provided a method and an apparatus for cloud management, which select optimal resources based on graphic processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, a GPU bottleneck phenomenon occurring in an application of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance degrading factors.
    Type: Grant
    Filed: November 9, 2022
    Date of Patent: July 2, 2024
    Assignee: Korea Electronics Technology Institute
    Inventors: Jae Hoon An, Young Hwan Kim
  • Patent number: 12028210
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: marking of a request to define a marked request that includes associated metadata, wherein the metadata specifies action for performing by a resource interface associated to a production environment resource of a production environment, wherein the resource interface is configured for emulating functionality of the production environment resource; and sending the marked request to the resource interface for performance of the action specified by the metadata.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: July 2, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Samir Nasser, Kyle Brown
  • Patent number: 12019760
    Abstract: An information handling system includes a first memory having a trusted memory region, wherein the trusted memory region is an area of execution that is protected from processes running in the information handling system outside the trusted memory region. A secure cryptographic module may receive a request to create the trusted memory region from a dependent application, and create a mapping of the trusted memory region along with an enhanced page cache address range mapped to a non-uniform memory access (NUMA) node. The module may also detect a NUMA migration event of the dependent application, identify the trusted memory region corresponding to the NUMA migration event, and migrate the trusted memory region from the NUMA node to another NUMA node.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: June 25, 2024
    Assignee: Dell Products L.P.
    Inventors: Vinod Parackal Saby, Krishnaprasad Koladi, Gobind Vijayakumar
  • Patent number: 12020188
    Abstract: A task management platform generates an interactive display of tasks based on multi-team activity data of different geographic locations across a plurality of distributed guided user interfaces (GUIs). Additionally, the task management platform uses a distributed machine-learning based system to determine a suggested task item for a remote team based on multi-team activity data of different geographic locations.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: June 25, 2024
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Michael Shawn Jacob, Manali Desai, Leah Garcia, Oscar Allan Arulfo
  • Patent number: 12008399
    Abstract: A method, system and computer program product for optimizing scheduling of batch jobs are disclosed. The method may include obtaining, by one or more processors, a set of batch jobs, connection relationships among batch jobs in the set of batch jobs, and a respective execution time of each batch job in the set of batch jobs. The method may also include generating, by the one or more processors, a directed weighted graph for the set of batch jobs, wherein in the directed weighted graph, a node represents a batch job, a directed edge between two nodes represents a directed connection between two corresponding batch jobs, a weight of a node represents the execution time of the batch job corresponding to the node. The method may also include obtaining, by one or more processors, information of consumption of same resource(s) among the batch jobs in the set of batch jobs.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: June 11, 2024
    Assignee: International Business Machines Corporation
    Inventors: Xi Bo Zhu, Shi Yu Wang, Xiao Xiao Pei, Qin Li, Lu Zhao
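The directed weighted graph the abstract builds — batch jobs as nodes weighted by execution time, directed edges for connections between jobs — maps naturally onto a dictionary representation, and the execution-time-weighted longest path is a natural quantity for the scheduler to minimize. The representation and the critical-path helper are our sketch, not the patented optimization:

```python
def build_weighted_graph(jobs, edges):
    """jobs: job name -> execution time (node weight).
    edges: (src, dst) pairs, each a directed connection between jobs."""
    graph = {name: {"weight": t, "succ": []} for name, t in jobs.items()}
    for src, dst in edges:
        graph[src]["succ"].append(dst)
    return graph


def critical_path_time(graph, job, memo=None):
    """Longest execution-time-weighted path starting at a job
    (assumes the job graph is acyclic)."""
    memo = {} if memo is None else memo
    if job not in memo:
        memo[job] = graph[job]["weight"] + max(
            (critical_path_time(graph, s, memo) for s in graph[job]["succ"]),
            default=0,
        )
    return memo[job]
```

The abstract's final step — folding in shared-resource consumption — would add constraints on which jobs may overlap, on top of the precedence edges shown here.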
  • Patent number: 12008400
    Abstract: The disclosure relates to a method and a control server for scheduling a computing task including a plurality of tasks to be performed by computation servers.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: June 11, 2024
    Assignees: Samsung Electronics Co., Ltd., Korea University Research And Business Foundation
    Inventors: Kisuk Kweon, Haneul Ko, Sangheon Pack, Jaewook Lee, Joonwoo Kim, Yujin Tae
  • Patent number: 12001513
    Abstract: A method for implementing a self-optimized video analytics pipeline is presented. The method includes decoding video files into a sequence of frames, extracting features of objects from one or more frames of the sequence of frames of the video files, employing an adaptive resource allocation component based on reinforcement learning (RL) to dynamically balance resource usage of different microservices included in the video analytics pipeline, employing an adaptive microservice parameter tuning component to balance accuracy and performance of a microservice of the different microservices, applying a graph-based filter to minimize redundant computations across the one or more frames of the sequence of frames, and applying a deep-learning-based filter to remove unnecessary computations resulting from mismatches between the different microservices in the video analytics pipeline.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: June 4, 2024
    Assignee: NEC Corporation
    Inventors: Giuseppe Coviello, Yi Yang, Srimat Chakradhar
  • Patent number: 11989590
    Abstract: To provide a more efficient resource allocation method and system using a genetic algorithm (GA). The present technology includes a method for allocating resources to a production process including a plurality of processes, the method including allocating priorities to the plurality of processes, selecting processes executable at a first time among the plurality of processes and capable of allocating necessary resources, allocating the necessary resources to the selected processes in descending order of priorities, selecting processes executable at a second time that is later than the first time among the plurality of processes and capable of allocating necessary resources, and allocating the necessary resources to the selected processes in descending order of priorities. The present technology also expresses GA genes not with direct allocation information but with information (a priority) used to determine the order of allocation.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: May 21, 2024
    Assignee: SYNAPSE INNOVATION INC.
    Inventors: Kazuya Izumikawa, Shigeo Fujimoto
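The priority-encoded decoding step described above can be sketched as follows. This is a hedged illustration under assumptions of my own (a single pooled resource, unit time steps, every need fitting within the pool); the gene here is just the priority map, and all names are hypothetical.

```python
# Sketch: decode a priority "gene" into a schedule by serving, at each time
# step, the ready processes that can obtain their resources, in descending
# priority order. Assumes every resource need <= total_resources.

def allocate(processes, priorities, total_resources):
    """processes: {name: (resource_need, duration)}; priorities: {name: int}.
    Returns {name: start_time}."""
    remaining = dict(processes)
    running = []                    # (end_time, name, resource_need)
    start, t, free = {}, 0, total_resources
    while remaining or running:
        # Release resources of processes finishing at or before time t.
        for end, name, need in list(running):
            if end <= t:
                free += need
                running.remove((end, name, need))
        # Serve ready processes in descending priority.
        for name in sorted(remaining, key=lambda n: -priorities[n]):
            need, dur = remaining[name]
            if need <= free:
                free -= need
                start[name] = t
                running.append((t + dur, name, need))
                del remaining[name]
        t += 1
    return start
```

With a pool of 3, p1 (need 2, duration 3, highest priority) and p3 (need 1) start at time 0, while p2 (need 2) must wait until p1 releases its resources at time 3.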
  • Patent number: 11973845
    Abstract: Managing organization disconnections from a shared resource of a communication platform is described. In a sharing approval repository of a communication platform, a shared resource can be associated with a host organization identifier and a non-host organization identifier. In an example, in response to receiving, from a user computing device associated with the host organization identifier or the non-host organization identifier, a resource disconnection request comprising a disconnecting organization identifier and a resource identifier associated with the shared resource, the sharing approval repository can be updated to add a disconnection indication for the resource identifier in association with the disconnecting organization identifier.
    Type: Grant
    Filed: November 6, 2021
    Date of Patent: April 30, 2024
    Assignee: Salesforce, Inc.
    Inventors: Christopher Sullivan, Myles Grant, Michael Demmer, Shanan Delp, Sri Vasamsetti
  • Patent number: 11972267
    Abstract: Tasks are selected for hibernation by recording user preferences for tasks having no penalty for hibernation and sleep; and assigning thresholds for battery power at which tasks are selected for at least one of hibernation and sleep. The assigning of the thresholds for battery power includes considering current usage of hardware resources by a user and battery health per battery segment. A penalty score is determined for tasks based upon the user preferences for tasks having no penalty, and task performance including at least one of frequency of utilization, memory utilization, task dependency characteristics and task memory hierarchy. The penalty score is a value including both the user preference and the task performance. Tasks can then be put into at least one of hibernation mode and sleep mode, as dictated by their penalty score, at the thresholds for battery power.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: April 30, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Madhu Pavan Kothapally, Rajesh Kumar Pirati, Bharath Sakthivel, Sarika Sinha
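The penalty-score selection can be sketched as below. The weighting formula here is entirely an assumption for illustration (the patent does not publish one); the only structure taken from the abstract is that user-preferred "no-penalty" tasks score zero and that candidates are chosen when battery falls below a threshold.

```python
# Sketch with an assumed scoring formula: combine a user "no-penalty"
# preference with task performance signals, then rank hibernation
# candidates once battery drops below the threshold.

def penalty_score(task, no_penalty_prefs):
    if task["name"] in no_penalty_prefs:
        return 0.0              # user marked this task as safe to hibernate
    # Assumed weighting of usage frequency and memory footprint.
    return 0.5 * task["frequency"] + 0.5 * task["memory_mb"] / 1024

def hibernation_candidates(tasks, no_penalty_prefs, battery_pct, threshold_pct):
    if battery_pct >= threshold_pct:
        return []               # above threshold: hibernate nothing
    ranked = sorted(tasks, key=lambda t: penalty_score(t, no_penalty_prefs))
    return [t["name"] for t in ranked]   # lowest-penalty tasks first
```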
  • Patent number: 11971824
    Abstract: Disclosed is a method for enhancing memory utilization and throughput of a computing platform in training a deep neural network (DNN). The critical features of the method include: calculating a memory size for every operation in a computational graph, storing the operations in the computational graph in multiple groups with the operations in each group being executable in parallel and having a total memory size less than a memory threshold of a computational device, sequentially selecting a group and updating a prefetched group buffer, and simultaneously executing the group and prefetching data for a group in the prefetched group buffer to the corresponding computational device when the prefetched group buffer is updated. Because of group execution and data prefetch, memory utilization is optimized and throughput is significantly increased, eliminating issues of out-of-memory and thrashing.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: April 30, 2024
    Assignee: AETHERAI IP HOLDING LLC
    Inventors: Chi-Chung Chen, Wei-Hsiang Yu, Chao-Yuan Yeh
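The grouping step can be sketched as follows, under my own simplifying assumptions (operations are already bucketed by dependency level, so operations within a level are parallel-executable; the prefetch machinery is omitted). Names and structure are illustrative, not the patented implementation.

```python
# Sketch: pack parallel-executable operations into groups whose total
# memory size stays at or below the device's memory threshold.

def group_operations(levels, mem_threshold):
    """levels: list of lists of (op_name, mem_size), one inner list per
    dependency level. Returns groups, each within the memory threshold."""
    groups = []
    for level in levels:
        current, used = [], 0
        for op, size in level:
            # Start a new group when this op would exceed the threshold.
            if used + size > mem_threshold and current:
                groups.append(current)
                current, used = [], 0
            current.append(op)
            used += size
        if current:
            groups.append(current)      # flush the level's last group
    return groups
```

For example, with a threshold of 8, three size-4 operations at one level split into a group of two and a group of one.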
  • Patent number: 11967321
    Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
  • Patent number: 11966756
    Abstract: The present disclosure generally relates to dataflow applications. In aspects, a system is disclosed for scheduling execution of feature services within a distributed data flow service (DDFS) framework. Further, the DDFS framework includes a main system-on-chip (SoC), at least one sensing service, and a plurality of feature services. Each of the plurality of feature services include a common pattern with an algorithm for processing the input data, a feature for encapsulating the algorithm into a generic wrapper rendering the algorithm compatible with other algorithms, a feature interface for encapsulating a feature output into a generic interface allowing generic communication with other feature services, and a configuration file including a scheduling policy to execute the feature services. For each of the plurality of feature services, processor(s) schedule the execution of a given feature service using the scheduling policy and execute a given feature service on the standard and/or accelerator cores.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: April 23, 2024
    Assignee: Aptiv Technologies AG
    Inventors: Vinod Aluvila, Miguel Angel Aguilar
  • Patent number: 11968098
    Abstract: A method of initiating a KPI backfill operation for a cellular network based on detecting a network data anomaly, where the method includes receiving, by an anomaly detection and backfill engine (ADBE) executed by a computing device, a data quality metric that is based on a KPI of the cellular network; detecting, by the ADBE, the network data anomaly based on the data quality metric being more than a threshold amount different than a predicted value for the data quality metric, where the network data anomaly indicates that at least a portion of a data stream from which the KPI is calculated was unavailable for a previous iteration of the KPI; and providing, by the ADBE and based on detecting the network data anomaly, a backfill command to a backfill processing pipeline to perform the backfill operation by reaggregating the KPI when the portion of the data stream becomes available.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: April 23, 2024
    Assignee: T-Mobile Innovations LLC
    Inventors: Vikas Ranjan, Raymond Wu
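The trigger logic described in the abstract can be sketched minimally. This is a hedged illustration: the function name, command shape, and threshold semantics are assumptions; only the "deviation beyond a threshold triggers a backfill command" rule comes from the abstract.

```python
# Sketch: flag a network data anomaly when the observed quality metric
# deviates from its predicted value by more than a threshold, and emit a
# backfill command for the affected KPI.

def check_and_backfill(observed, predicted, threshold, kpi_name):
    """Returns a backfill command dict when an anomaly is detected, else None."""
    if abs(observed - predicted) > threshold:
        return {"action": "backfill", "kpi": kpi_name, "reason": "data_anomaly"}
    return None
```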
  • Patent number: 11966789
    Abstract: Systems and methods for optimal load distribution and data processing of a plurality of files in anti-malware solutions are provided herein. In some embodiments, the system includes: a plurality of node processors; and a control processor programmed to: receive a plurality of files used for malware analysis and training of anti-malware ML models; separate the plurality of files into a plurality of subsets of files based on the byte size of each of the files, such that processing of each subset of files produces similar workloads amongst all available node processors; distribute the plurality of subsets of files amongst all available node processors such that each node processor processes its respective subset of files in parallel and within a similar timeframe as the other node processors; and receive, from each node processor, a report of performance and/or anti-malware processing results for its subset of files.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: April 23, 2024
    Assignee: UAB 360 IT
    Inventor: Mantas Briliauskas
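A simple way to realize "similar workloads by byte size" is a greedy longest-processing-time partition, sketched below. This is an illustrative heuristic under assumptions of my own, not necessarily the patent's exact separation method.

```python
# Sketch: split files into per-node subsets with similar total byte size,
# largest files first, each assigned to the currently lightest node.

def balance_files(file_sizes, num_nodes):
    """file_sizes: {filename: bytes}. Returns a list of per-node subsets."""
    subsets = [{"files": [], "bytes": 0} for _ in range(num_nodes)]
    for name, size in sorted(file_sizes.items(), key=lambda kv: -kv[1]):
        target = min(subsets, key=lambda s: s["bytes"])
        target["files"].append(name)
        target["bytes"] += size
    return subsets
```

With files of 10, 7, 5, and 4 bytes on two nodes, the totals come out 14 and 12, so both nodes finish in a similar timeframe.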
  • Patent number: 11966619
    Abstract: An apparatus for executing a software program, comprising at least one hardware processor configured for: identifying in a plurality of computer instructions at least one remote memory access instruction and a following instruction following the at least one remote memory access instruction; executing after the at least one remote memory access instruction a sequence of other instructions, where the sequence of other instructions comprises a return instruction to execute the following instruction; and executing the following instruction; wherein executing the sequence of other instructions comprises executing an updated plurality of computer instructions produced by at least one of: inserting into the plurality of computer instructions the sequence of other instructions or at least one flow-control instruction to execute the sequence of other instructions; and replacing the at least one remote memory access instruction with at least one non-blocking memory access instruction.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: April 23, 2024
    Assignee: Next Silicon Ltd
    Inventors: Elad Raz, Yaron Dinkin
  • Patent number: 11968248
    Abstract: Methods are provided. A method includes announcing to a network meta information describing each of a plurality of distributed data sources. The method further includes propagating the meta information amongst routing elements in the network. The method also includes inserting into the network a description of distributed datasets that match a set of requirements of the analytics task. The method additionally includes delivering, by the routing elements, a copy of the analytics task to locations of respective ones of the plurality of distributed data sources that include the distributed datasets that match the set of requirements of the analytics task.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: April 23, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bong Jun Ko, Theodoros Salonidis, Rahul Urgaonkar, Dinesh C. Verma
  • Patent number: 11960940
    Abstract: A FaaS system comprises a plurality of execution nodes. A software package is received in the system, the software package comprising a function that is to be executed in the FaaS system. Data location information related to data that the function is going to access during execution is obtained. Based on the data location information, a determination is then made of an execution node in which the function is to be executed. The function is loaded into the determined execution node and executed in the determined execution node.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: April 16, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Zoltán Turányi, Dániel Géhberger
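The node-selection step can be sketched as a data-locality vote. This is a hedged illustration under the assumption that "determining the execution node from data location information" means picking the node that hosts the most of the function's data; the patent may use a different rule, and all names here are hypothetical.

```python
# Sketch: given which node hosts each data item the function will access,
# choose the node holding the largest share of that data.

def pick_execution_node(data_locations, accessed_keys):
    """data_locations: {data_key: node_name}; accessed_keys: keys the
    function reads. Returns the best node, or None if nothing is known."""
    counts = {}
    for key in accessed_keys:
        node = data_locations.get(key)
        if node is not None:
            counts[node] = counts.get(node, 0) + 1
    return max(counts, key=counts.get) if counts else None
```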
  • Patent number: 11962659
    Abstract: Metrics that characterize one or more computing devices are received. A time value associated with a performance of the one or more computing devices based on the received metrics is determined. A first scheduling parameter based on the time value is determined, wherein the first scheduling parameter is associated with a first discovery process that is associated with at least a portion of the one or more computing devices. The first discovery process is then executed according to the first scheduling parameter.
    Type: Grant
    Filed: July 17, 2023
    Date of Patent: April 16, 2024
    Assignee: ServiceNow, Inc.
    Inventors: Steven W. Francis, Sai Saketh Nandagiri
  • Patent number: 11954352
    Abstract: A request to perform a first operation in a system that stores deduplicated data can be received. The system can include a data block stored at multiple logical addresses, each referencing the data block. A reference count can be associated with the data block and can denote a number of logical addresses referencing the data block. Processing can be performed to service the request and perform the first operation, wherein the processing can include: acquiring a non-exclusive lock for a page that includes the reference count of the data block; storing, in a metadata log while holding the non-exclusive lock on the page, an entry to decrement the reference count of the data block; and releasing the non-exclusive lock on the page.
    Type: Grant
    Filed: June 29, 2022
    Date of Patent: April 9, 2024
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Uri Shabi
  • Patent number: 11953972
    Abstract: Selective privileged container augmentation is provided. A target group of edge devices is selected from a plurality of edge devices to run a plurality of child tasks comprising a pending task by mapping edge device tag attributes of the plurality of edge devices to child task tag attributes of the plurality of child tasks. A privileged container corresponding to the pending task is installed in each edge device of the target group to monitor execution of a child task by a given edge device of the target group. A privileged container installation tag that corresponds to the privileged container is added to an edge device tag attribute of each edge device of the target group having the privileged container installed. A child task of the plurality of child tasks comprising the pending task is sent to a selected edge device in the target group to run the child task.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: April 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yue Wang, Xin Peng Liu, Wei Wu, Liang Wang, Biao Chai
  • Patent number: 11947454
    Abstract: Apparatuses, systems, and methods for controlling cache allocations in a configurable combined private and shared cache in a processor-based system. The processor-based system is configured to receive a cache allocation request to allocate a line in a shared cache structure, which may further include a client identification (ID). The cache allocation request and the client ID can be compared to a sub-non-uniform memory access (sub-NUMA) bit mask and a client allocation bit mask to generate a cache allocation vector. The sub-NUMA bit mask may have been programmed to indicate that processing cores associated with a sub-NUMA region are available, whereas processing cores associated with other sub-NUMA regions are not available, and the client allocation bit mask may have been programmed to indicate that processing cores are available.
    Type: Grant
    Filed: June 7, 2022
    Date of Patent: April 2, 2024
    Assignee: Ampere Computing LLC
    Inventors: Richard James Shannon, Stephan Jean Jourdan, Matthew Robert Erler, Jared Eric Bendt
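The mask-combination step lends itself to a short sketch: the allocation vector is the intersection (bitwise AND) of the sub-NUMA availability mask and the requesting client's allocation mask. Mask widths, encodings, and names below are assumptions for illustration, not the hardware's actual register layout.

```python
# Sketch: derive a cache allocation vector by ANDing the sub-NUMA bit mask
# with the per-client allocation bit mask; bit i set means core i eligible.

def cache_allocation_vector(sub_numa_mask, client_masks, client_id):
    """client_masks: {client_id: int mask}. Unknown clients get no cores."""
    client_mask = client_masks.get(client_id, 0)
    return sub_numa_mask & client_mask

def eligible_cores(vector, num_cores):
    """Expand the vector into a list of eligible core indices."""
    return [i for i in range(num_cores) if vector >> i & 1]
```

For example, a sub-NUMA mask of 0b00001111 ANDed with a client mask of 0b00111100 leaves cores 2 and 3 eligible.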