Process Scheduling Patents (Class 718/102)
-
Patent number: 12254291
Abstract: Disclosed is a method performed by a computing device for implementing a Graphical User Interface (GUI) providing a development environment, the method including: setting, by a computing device, a plurality of code blocks; designating two or more execution target blocks among the plurality of code blocks; constructing one or more pipelines defining a relationship between the two or more execution target blocks and connecting the two or more execution target blocks; and executing at least some of the two or more execution target blocks based on the connection relationship of the one or more pipelines.
Type: Grant
Filed: February 27, 2023
Date of Patent: March 18, 2025
Assignee: MakinaRocks Co., LTD
Inventors: Dae Sung Kim, Hooncheol Shin, Hwiyeon Cho, Sangwoo Shim, Byoungwan Kim
-
Patent number: 12253930
Abstract: An embodiment includes initiating a first cycle of a process using a first number of threads that operate in parallel to collectively execute the process and collect performance data. The embodiment aggregates the performance data and computes a first idle duration based at least in part on the aggregated performance data. The embodiment projects a thread-count recommendation based at least in part on a mathematical model that includes the first number of threads as an input number of threads, the first idle and cycle durations as input idle and cycle durations, respectively, and a second number of threads as an output variable representative of an output number of threads, where the output number of threads is determined as a function of the input idle duration. The embodiment initiates a second cycle of the process using the second number of threads output as a projection by the mathematical model.
Type: Grant
Filed: October 19, 2021
Date of Patent: March 18, 2025
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Salman Zia Rana, Aleksandar Micic
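A minimal sketch of the kind of idle-time-driven thread-count projection this abstract describes, assuming a simple proportional model (the function name and formula are illustrative, not taken from the patent):

```python
import math

def project_thread_count(n_threads: int, idle_s: float, cycle_s: float,
                         min_threads: int = 1) -> int:
    """Project a thread count for the next cycle from observed idle time.

    Assumes idle time within a cycle indicates over-provisioning:
    scale the thread count down by the observed busy fraction.
    """
    busy_fraction = max(0.0, 1.0 - idle_s / cycle_s)
    return max(min_threads, math.ceil(n_threads * busy_fraction))

# 8 threads idle for a quarter of a 2-second cycle -> 6 threads next cycle
print(project_thread_count(8, 0.5, 2.0))
```

A real model would also weigh the cycle duration and smooth across cycles; this captures only the core idea that the output thread count is a function of the input idle duration.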
-
Patent number: 12248810
Abstract: A method performed at an orchestration interface at which update information, including changes to tasks of a workflow, is received from a task manager system (TMS), where the workflow includes a set of tasks, inputs to the tasks, and outputs from the tasks. The inputs and outputs determine runtime dependencies between the tasks. Based on the update information received, the orchestration interface populates a topology of nodes and edges as a directed acyclic graph (DAG) that maps nodes to tasks and edges to runtime dependencies between tasks, based on node inputs and outputs. The orchestration interface instructs the execution of the tasks and handles dependencies by interacting with a task execution system (TES); by traversing the DAG, the orchestration interface identifies tasks that depend on completed tasks as per the runtime dependencies and instructs the TES to execute the dependent tasks identified.
Type: Grant
Filed: June 15, 2022
Date of Patent: March 11, 2025
Assignee: International Business Machines Corporation
Inventors: Anton Zorin, Manish Kesarwani, Niels Dominic Pardon, Ritesh Kumar Gupta, Sameep Mehta
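The DAG traversal this abstract describes can be sketched as a plain ready-set loop: execute every task whose dependencies are complete, then re-scan. All names here are illustrative, and `execute` stands in for the call out to the task execution system (TES):

```python
def run_workflow(tasks, deps, execute):
    """Traverse a DAG of tasks, executing each task once all of the
    tasks it depends on have completed.

    tasks:   iterable of task names
    deps:    dict mapping a task to the set of tasks it depends on
    execute: callable standing in for the task execution system (TES)
    """
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    done = set()
    while remaining:
        # tasks whose runtime dependencies are all satisfied
        ready = [t for t, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("cycle detected; graph is not a DAG")
        for t in ready:
            execute(t)
            done.add(t)
            del remaining[t]
    return done

order = []
run_workflow(["a", "b", "c"], {"c": {"a", "b"}}, order.append)
print(order)  # "c" runs only after both "a" and "b" complete
```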
-
Patent number: 12248808
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to relocate a compute thread, the apparatus comprising control circuitry to maintain a location of a plurality of domain access counters associated with a plurality of compute-memory domains for a first compute thread, and an execution monitor to set a first domain access counter of the plurality of domain access counters, the first domain access counter associated with a first compute-memory domain of the compute-memory domains, and relocate the first compute thread to a second compute-memory domain of the compute-memory domains in response to a comparison between a second domain access counter associated with the second compute-memory domain and the first domain access counter.
Type: Grant
Filed: June 26, 2021
Date of Patent: March 11, 2025
Assignee: INTEL CORPORATION
Inventors: Rolf Riesen, Robert Wisniewski, Rajesh Poornachandran
-
Patent number: 12250119
Abstract: The current document is directed to an infrastructure-as-code (“IaC”) cloud-infrastructure-management service or system that automatically generates parameterized cloud templates that represent already deployed cloud-based infrastructure, including virtual networks, virtual machines, load balancers, and connection topologies. The IaC cloud-infrastructure manager provides an infrastructure-discovery service that accesses a cloud-computing facility to obtain information about already deployed cloud infrastructure and that generates a textual description of the deployed infrastructure, which the IaC cloud-infrastructure manager then transforms into a set of parameterized cloud-infrastructure-specification-and-configuration files, a resource_ids file, and a parameters file that together comprise a parameterized cloud template.
Type: Grant
Filed: October 17, 2023
Date of Patent: March 11, 2025
Assignee: VMware LLC
Inventors: Priyank Agarwal, Praveen Kumar, Valentina Leonidovna Reutova, Thomas Hatch, Charles McMarrow, Murali Sampangiramaiah
-
Patent number: 12236249
Abstract: Methods and systems for managing data processing systems are provided. Data processing systems may host various computer-implemented services. Data processing systems may also be changed to operate in different states where one or more currently hosted computer-implemented services may no longer be hosted in the new state. Removal of these no longer hosted computer-implemented services from the data processing systems may cause complications for the data processing systems in the new state. Undefine policies may be preconfigured for one or more hosted computer-implemented services to prevent occurrence of such complications.
Type: Grant
Filed: June 7, 2023
Date of Patent: February 25, 2025
Assignee: Dell Products L.P.
Inventors: Bradley K. Goodman, Kirk Alan Hutchinson, Joseph Caisse
-
Patent number: 12236267
Abstract: Techniques described herein relate to a method for managing a distributed multi-tiered computing (DMC) environment. The method includes decomposing, by a local controller associated with a DMC domain, a service dependency graph associated with a scheduling job; assigning normalized compute units and normalized network units to tasks included in the service dependency graph; generating a Q-table using the service dependency graph and reinforcement Q-learning; calculating a critical path and a max learned path using the Q-table and the service dependency graph; calculating the earliest start time and the latest start time for each task using the service dependency graph and the max learned path to obtain a plurality of earliest start time and latest start time pairs for each task; and generating scheduling assignments using the plurality of earliest start time and latest start time pairs for each task.
Type: Grant
Filed: April 15, 2022
Date of Patent: February 25, 2025
Assignee: DELL PRODUCTS L.P.
Inventors: William Jeffery White, Said Tabet
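The earliest/latest start-time pairs mentioned above can be computed with a classic critical-path pass; this sketch uses plain longest-path arithmetic over a dependency graph with fixed task durations rather than the patent's Q-learning step, and all names are illustrative:

```python
def start_time_windows(durations, deps):
    """Compute (earliest start, latest start) pairs for each task.

    durations: dict task -> duration
    deps:      dict task -> set of predecessor tasks
    Latest starts are anchored so the critical path has zero slack.
    """
    # topological order via depth-first search
    order, seen = [], set()
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for p in deps.get(t, ()):
            visit(p)
        order.append(t)
    for t in durations:
        visit(t)

    # forward pass: earliest start = longest path through predecessors
    est = {}
    for t in order:
        est[t] = max((est[p] + durations[p] for p in deps.get(t, ())), default=0)
    makespan = max(est[t] + durations[t] for t in durations)

    # backward pass: latest start without delaying any successor
    succs = {t: [s for s in durations if t in deps.get(s, ())] for t in durations}
    lst = {}
    for t in reversed(order):
        latest_finish = min((lst[s] for s in succs[t]), default=makespan)
        lst[t] = latest_finish - durations[t]
    return {t: (est[t], lst[t]) for t in durations}

windows = start_time_windows({"a": 3, "b": 2, "c": 4}, {"c": {"a", "b"}})
print(windows["a"])  # (0, 0): "a" is on the critical path, zero slack
```

Tasks whose pair has a gap (like "b" with (0, 1) here) have slack that a scheduler can exploit.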
-
Patent number: 12236133
Abstract: A storage device includes a nonvolatile memory device and a storage controller. The storage controller accesses the nonvolatile memory device based on a request of an external host device. The storage controller sends a signal to the external host device, based on a throughput of accessing the nonvolatile memory device being within a specific range.
Type: Grant
Filed: January 14, 2022
Date of Patent: February 25, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jaehwan Lim, Jae Eun Kim, Ji-Hoon Kim, Walter Jun, Jungwoo Lee, Seung-Woo Lim
-
Patent number: 12229584
Abstract: Data queries that are agnostic to any particular data source may include a data source alias. The data source alias may be replaced with a data source identifier to obtain a data query configured for a target data source. Data processing jobs may be agnostic to any particular data processing platform. A data processing job may include a data processing task that is agnostic to any particular data processing platform. A code library may provide platform-specific code configured to implement a data processing task on a data processing platform. A data query configured for a particular data source and a data processing task configured for a particular data processing platform may be used to create a data processing job. Configurations that restrict execution of a data processing job to execution via an interactive development environment may be removed to allow its execution directly at the data processing platform itself.
Type: Grant
Filed: January 4, 2024
Date of Patent: February 18, 2025
Assignee: Capital One Services, LLC
Inventors: Timothy Haggerty, Yuting Zhou, Venu Kumar Nannapaneni, Pravin Nair, Hussein Ali Khalif Samao
-
Patent number: 12222840
Abstract: A method of generating metrics data associated with a microservices-based application comprises ingesting a plurality of spans and mapping an ingested span of the plurality of spans to a span identity, wherein the span identity comprises a tuple of information identifying a type of span associated with the span identity, wherein the tuple of information comprises user-configured dimensions. The method further comprises grouping the ingested span by the span identity, wherein the ingested span is grouped with other spans from the plurality of spans comprising a same span identity. The method also comprises computing metrics associated with the span identity and using the metrics to generate a stream of metric data associated with the span identity.
Type: Grant
Filed: October 26, 2022
Date of Patent: February 11, 2025
Assignee: SPLUNK Inc.
Inventors: Steven Karis, Maxime Petazzoni, Matthew William Pound, Joseph Ari Ross, Charles Smith, Scott Stewart
-
Patent number: 12210906
Abstract: Apparatuses, systems, methods, and program products are disclosed for techniques for distributed computing and storage. An apparatus includes a processor and a memory that includes code that is executable to receive a request to perform a storage task, transmit at least a portion of the storage task to a plurality of user node devices, receive results of the at least a portion of the storage task from at least one of the plurality of user node devices, and transmit the received results.
Type: Grant
Filed: February 8, 2024
Date of Patent: January 28, 2025
Assignee: ASEARIS DATA SYSTEMS, INC.
Inventors: Erich Pletsch, Matt Morris
-
Patent number: 12204830
Abstract: The present invention provides a high-throughput material simulation calculation optimization method based on time prediction, relating to the field of materials science. The method comprises: first constructing a prediction model of task configurations and corresponding time predictions, and using the prediction model to generate the execution time of all of the tasks in a high-throughput material simulation calculation under different conditions; then generating an optimal scheduling plan for each model in the high-throughput material simulation calculation by means of directed graphs; and, according to the optimal scheduling plan for each model, sequentially executing all of the tasks until all of the tasks are completed. Further, a high-throughput computing simulation optimization apparatus based on time prediction and a storage medium are provided.
Type: Grant
Filed: January 7, 2021
Date of Patent: January 21, 2025
Assignee: TSINGHUA UNIVERSITY
Inventors: Zhihui Du, Chongyu Wang
-
Patent number: 12200070
Abstract: A method, performed by an electronic device, of transmitting a mobile edge application, includes obtaining information related to an execution environment of at least one pre-installed mobile edge application, receiving an installation request for a new mobile edge application, determining a mobile edge computing host for installing the new mobile edge application, based on the information related to the execution environment and the requirement information related to an execution environment of the new mobile edge application, and transmitting the new mobile edge application to the determined mobile edge computing host.
Type: Grant
Filed: September 26, 2022
Date of Patent: January 14, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Heejung Kim, Changbae Yoon, Chihyun Cho
-
Patent number: 12197948
Abstract: Minimizing an energy use of virtual machines at one or more information handling systems, including receiving a plurality of computing tasks, each task associated with an energy efficiency indicator; positioning each of the tasks within a task queue indicating an order of execution of the tasks based on the energy efficiency indicator for each task; identifying a plurality of virtual machines, each virtual machine associated with a thermal efficiency indicator based on a historical energy usage of the virtual machine; sorting the virtual machines to identify a distribution of the virtual machines based on the thermal efficiency indicator of the respective virtual machines; allocating the virtual machines to execute the tasks based on i) the distribution of the virtual machines and ii) the task queue; and executing the tasks by the virtual machines based on the allocation.
Type: Grant
Filed: March 4, 2021
Date of Patent: January 14, 2025
Assignee: Dell Products L.P.
Inventor: Deeder M. Aurongzeb
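A toy version of the two-sided sort-and-allocate step described above: order the task queue by energy indicator, rank VMs by thermal efficiency, and hand tasks out round-robin so the most energy-hungry work lands on the most thermally efficient machines. The indicators and the round-robin policy here are illustrative assumptions, not the patent's actual allocation rule:

```python
def allocate_tasks(tasks, vms):
    """Greedy energy-aware pairing of tasks to virtual machines.

    tasks: dict task -> energy-efficiency indicator (higher = more energy-hungry)
    vms:   dict vm   -> thermal-efficiency indicator (higher = runs cooler)
    Returns a dict vm -> list of assigned tasks.
    """
    queue = sorted(tasks, key=tasks.get, reverse=True)       # task queue
    ranked_vms = sorted(vms, key=vms.get, reverse=True)      # VM distribution
    plan = {vm: [] for vm in ranked_vms}
    for i, task in enumerate(queue):
        plan[ranked_vms[i % len(ranked_vms)]].append(task)
    return plan

plan = allocate_tasks({"t1": 9, "t2": 1, "t3": 5}, {"vmA": 0.4, "vmB": 0.9})
print(plan)  # the heaviest task "t1" goes to the most efficient VM "vmB"
```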
-
Patent number: 12190404
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a data entity that causes a processing unit to process a computational graph. In one aspect, a method includes the actions of receiving data identifying a computational graph, the computational graph including a plurality of nodes representing operations; obtaining compilation artifacts for processing the computational graph on a processing unit; and generating a data entity from the compilation artifacts, wherein the data entity, when invoked, causes the processing unit to process the computational graph by executing the operations represented by the plurality of nodes.
Type: Grant
Filed: June 6, 2019
Date of Patent: January 7, 2025
Assignee: Google LLC
Inventors: Jingyue Wu, Christopher Daniel Leary
-
Patent number: 12190210
Abstract: A method of using a computing device to manage a lifecycle of machine learning models includes receiving, by a computing device, multiple pre-defined machine learning lifecycle tasks. The computing device executes a management-layer software layer for the multiple pre-defined machine learning lifecycle tasks. The computing device further generates and updates a machine learning pipeline using the management-layer software layer.
Type: Grant
Filed: December 17, 2021
Date of Patent: January 7, 2025
Assignee: International Business Machines Corporation
Inventors: Benjamin Herta, Darrell Christopher Reimer, Evelyn Duesterwald, Gaodan Fang, Punleuk Oum, Debashish Saha, Archit Verma
-
Patent number: 12182011
Abstract: A system, method and computer program product configured to control a plurality of parallel programs operating in an n-dimensional hierarchical iteration space over an n-dimensional data space, comprising: a processor and a memory configured to accommodate the plurality of parallel programs and the data space; a memory access control decoder configured to decode memory location references to regions of the n-dimensional data space from indices in the plurality of parallel programs; and an execution orchestrator responsive to the memory access control decoder and configured to sequence regions of the n-dimensional hierarchical iteration space of the plurality of parallel programs to honour a data requirement of at least a first of the plurality of parallel programs having a data dependency on at least a second of the plurality of parallel programs.
Type: Grant
Filed: January 31, 2023
Date of Patent: December 31, 2024
Assignee: Arm Limited
Inventor: Kévin Petit
-
Patent number: 12164952
Abstract: An apparatus to facilitate barrier state save and restore for preemption in a graphics environment is disclosed. The apparatus includes processing resources to execute a plurality of execution threads included in a thread group (TG) and mid-thread preemption barrier save and restore hardware circuitry to: initiate an exception handling routine in response to a mid-thread preemption event, the exception handling routine to cause a barrier signaling event to be issued; receive indication of a valid designated thread status for a thread of a thread group (TG) in response to the barrier signaling event; and in response to receiving the indication of the valid designated thread status for the thread of the TG, cause, by the thread of the TG having the valid designated thread status, a barrier save routine and a barrier restore routine to be initiated for named barriers of the TG.
Type: Grant
Filed: June 25, 2021
Date of Patent: December 10, 2024
Assignee: INTEL CORPORATION
Inventors: Vasanth Ranganathan, James Valerio, Joydeep Ray, Abhishek R. Appu, Alan Curtis, Prathamesh Raghunath Shinde, Brandon Fliflet, Ben J. Ashbaugh, John Wiegert
-
Patent number: 12158812
Abstract: An example system can include: at least one processor; and non-transitory computer-readable storage storing instructions that, when executed by the at least one processor, cause the system to: generate an ingestion manager programmed to ingest data associated with a job; and generate a logging manager programmed to capture metadata associated with the job; wherein the ingestion manager is programmed to automatically retry the job based upon the metadata captured by the logging manager.
Type: Grant
Filed: May 13, 2022
Date of Patent: December 3, 2024
Assignee: Wells Fargo Bank, N.A.
Inventors: Jashua Thejas Arul Dhas, Ganesh Kumar, Marimuthu Muthan, Aditya Kulkarni, Sai Raghavendra Neralla, Anshul Chauhan
-
Patent number: 12153959
Abstract: A method for detecting a traffic ramp-up rule violation includes receiving data element retrieval requests from an information retrieval system and determining a requests per second (RPS) for a key range. The method also includes determining a moving average of RPS for the key range. The method also includes determining a number of delta violations, each delta violation comprising a respective beginning instance in time when the RPS exceeded a delta RPS limit. For each delta violation, the method includes determining a maximum conforming load for the key range and determining whether the RPS exceeded the maximum conforming load for the key range based on the beginning instance in time of the respective delta violation. When the RPS has exceeded the maximum conforming load, the method includes determining that the delta violation corresponds to a full-history violation indicative of a degradation of performance of the information retrieval system.
Type: Grant
Filed: October 25, 2022
Date of Patent: November 26, 2024
Assignee: Google LLC
Inventors: Arash Parsa, Joshua Melcon, David Gay, Ryan Huebsch
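The delta-violation check above boils down to comparing each new RPS sample against a moving average plus an allowed delta. A minimal sketch, assuming a fixed-size window and a flat delta limit (both are illustrative parameters, not values from the patent):

```python
from collections import deque

class RampUpDetector:
    """Flag the beginning instance of a delta violation: a sample where
    RPS jumps past its own moving average by more than a delta limit."""

    def __init__(self, window: int, delta_limit: float):
        self.samples = deque(maxlen=window)  # old samples evicted automatically
        self.delta_limit = delta_limit

    def observe(self, rps: float) -> bool:
        """Record one RPS sample; return True if it begins a delta violation."""
        avg = sum(self.samples) / len(self.samples) if self.samples else rps
        self.samples.append(rps)
        return rps > avg + self.delta_limit

det = RampUpDetector(window=3, delta_limit=50.0)
print([det.observe(r) for r in [100, 110, 105, 400]])
# only the sudden jump to 400 RPS trips the detector
```

A full-history check, as the abstract describes, would additionally compare the violating sample against a maximum conforming load derived from the ramp-up rule; that part is omitted here.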
-
Patent number: 12153915
Abstract: A method performed by a processing system including at least one processor includes applying a contextual filter to mask a portion of at least one of: an input of a software application, an output of the software application, or an underlying dataset of the software application, where the contextual filter simulates a limitation of a user of the software application, executing the software application with the contextual filter applied to the at least one of: the input of the software application, the output of the software application, or the underlying dataset of the software application, collecting ambient data during the executing, and recommending, based on a result of the executing, a modification to the software application to improve at least one of: an accessibility of the software application or an inclusion of the software application.
Type: Grant
Filed: September 19, 2022
Date of Patent: November 26, 2024
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Yaron Kanza, Balachander Krishnamurthy, Divesh Srivastava
-
Patent number: 12153530
Abstract: A data processing system includes a memory system including a memory device storing data and a controller performing a data program operation or a data read operation with the memory device, and a host suitable for requesting the data program operation or the data read operation from the memory system. The controller can perform a serial communication to control a memory which is arranged outside the memory system and engaged with the host.
Type: Grant
Filed: April 11, 2023
Date of Patent: November 26, 2024
Assignee: SK hynix Inc.
Inventor: Jong-Min Lee
-
Patent number: 12135731
Abstract: In some implementations, a monitoring device may obtain information related to one or more extract, transform, and load (ETL) jobs scheduled in an ETL system. The monitoring device may generate ETL job metrics that include status information, timing information, and data volume information associated with one or more constituent tasks associated with the one or more ETL jobs, wherein the ETL job metrics include metrics related to extracting data records from a data source, transforming the data records into a target format, and/or loading the data records in the target format into a data sink. The monitoring device may enable capabilities to create or interact with one or more dashboards to visualize the ETL job metrics via a workspace accessible to one or more client devices. The monitoring device may invoke a messaging service to publish one or more notifications associated with the ETL job metrics via the workspace.
Type: Grant
Filed: January 13, 2021
Date of Patent: November 5, 2024
Assignee: Capital One Services, LLC
Inventors: Alex Makumbi, Andrew Stevens
-
Patent number: 12135984
Abstract: The exemplary embodiments may provide an application management method and apparatus, and a device, to unfreeze some processes in an application. The method includes: obtaining an unfreezing event, where the unfreezing event includes process information, and the unfreezing event is used to trigger an unfreezing operation to be performed on some processes in a frozen application; and performing an unfreezing operation on those processes based on the process information.
Type: Grant
Filed: August 2, 2021
Date of Patent: November 5, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Huifeng Hu, Xiaojun Duan
-
Patent number: 12117882
Abstract: A system having: a processor, wherein the processor is configured for executing a process of reducing power consumption that includes executing a first task over a first plurality of timeslots and a second task over a second plurality of timeslots, and wherein the processor is configured to: execute a real-time operating system (RTOS) process; determine that the first task is complete during a first timeslot of the first plurality of timeslots; and enter a low power mode for a remainder of the first timeslot upon determining that there is enough time to enter a low power mode during the first timeslot and a next timeslot is allocated to the first task, otherwise perform a dead-wait for the remainder of the first timeslot.
Type: Grant
Filed: March 15, 2023
Date of Patent: October 15, 2024
Assignee: HAMILTON SUNDSTRAND CORPORATION
Inventor: Balaji Krishnakumar
-
Patent number: 12118057
Abstract: A computing device, including a hardware accelerator configured to receive a first matrix and receive a second matrix. The hardware accelerator may, for a plurality of partial matrix regions, in a first iteration, read a first submatrix of the first matrix and a second submatrix of the second matrix into a front-end processing area. The hardware accelerator may multiply the first submatrix by the second submatrix to compute a first intermediate partial matrix. In each of one or more subsequent iterations, the hardware accelerator may read an additional submatrix into the front-end processing area. The hardware accelerator may compute an additional intermediate partial matrix as a product of the additional submatrix and a submatrix reused from an immediately prior iteration. The hardware accelerator may compute each partial matrix as a sum of two or more of the intermediate partial matrices and may output the plurality of partial matrices.
Type: Grant
Filed: January 14, 2021
Date of Patent: October 15, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Derek Edward Davout Gladding, Nitin Naresh Garegrat, Timothy Hume Heil, Balamurugan Kulanthivelu Veluchamy
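The partial-matrix arithmetic above is the standard blocked (tiled) matrix multiply: each output tile is the sum of intermediate partial products of submatrix pairs, which is what lets the accelerator reuse a submatrix between consecutive iterations. A software sketch over nested lists (tile size and shapes are illustrative):

```python
def tiled_matmul(a, b, tile):
    """Blocked matrix multiply: accumulate each output tile as a sum of
    submatrix products a[i0:, k0:] @ b[k0:, j0:], one k-block at a time."""
    n, k_dim, m = len(a), len(b), len(b[0])
    out = [[0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k_dim, tile):
                # one intermediate partial matrix, added into the output tile
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for k in range(k0, min(k0 + tile, k_dim)):
                            out[i][j] += a[i][k] * b[k][j]
    return out

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(tiled_matmul(a, b, 1))  # [[19, 22], [43, 50]], same as a plain matmul
```

The result is identical for any tile size; the point of tiling in hardware is that each submatrix is small enough to sit in the front-end processing area while it is reused.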
-
Patent number: 12111674
Abstract: An operating method of a system-on-chip (SoC) which includes a processor including a first core and a dynamic voltage and frequency scaling (DVFS) module and a clock management unit (CMU) for supplying an operating clock to the first core, the operating method including: obtaining a required performance of the first core; finding available frequencies meeting the required performance; obtaining information for calculating energy consumption for each of the available frequencies; calculating the energy consumption for each of the available frequencies, based on the information; determining a frequency, which causes minimum energy consumption, from among the available frequencies as an optimal frequency; and adjusting an operating frequency to be supplied to the first core to the optimal frequency.
Type: Grant
Filed: April 14, 2022
Date of Patent: October 8, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Choonghoon Park, Jong-Lae Park, Bumgyu Park, Youngtae Lee, Donghee Han
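The selection step described above reduces to: filter the frequency table to entries meeting the required performance, then take the minimum-energy entry. A sketch with an invented table format (performance and energy numbers are illustrative; real DVFS tables come from the hardware):

```python
def pick_optimal_frequency(required_perf, freq_table):
    """Return the frequency with minimum energy among those meeting
    the required performance.

    freq_table: dict freq_khz -> (performance, energy_per_job)
    """
    candidates = {f: energy for f, (perf, energy) in freq_table.items()
                  if perf >= required_perf}
    if not candidates:
        raise ValueError("no available frequency meets the required performance")
    return min(candidates, key=candidates.get)

table = {1_000: (800, 1.0), 1_500: (1_200, 1.3), 2_000: (1_600, 2.1)}
print(pick_optimal_frequency(1_000, table))  # 1500: cheapest fast-enough choice
```

Note that the highest adequate frequency is not chosen: racing to idle at 2,000 costs more energy per job in this (made-up) table than running at 1,500.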
-
Patent number: 12112156
Abstract: A software update system according to one embodiment of the present disclosure is configured to update software used in a vehicle based on update data of the software, the update data being transmitted to the vehicle from an external device that is communicably connected to the vehicle. The software update system includes: a software update unit configured to update the software based on the update data; a vehicle data acquisition unit configured to acquire respective pieces of second vehicle data about states of the vehicle before and after the software update by the update unit; and an effect evaluation unit configured to evaluate an effect of the software update based on the respective pieces of second vehicle data before and after the software update.
Type: Grant
Filed: May 10, 2022
Date of Patent: October 8, 2024
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Masafumi Yamamoto, Atsushi Tabata, Koichi Okuda, Yuki Makino
-
Patent number: 12106152
Abstract: A cloud service system and an operation method thereof are provided. The cloud service system includes a first computing resource pool, a second computing resource pool, and a task dispatch server. Each computing platform in the first computing resource pool does not have a co-processor. Each computing platform in the second computing resource pool has at least one co-processor. The task dispatch server is configured to receive a plurality of tasks. The task dispatch server checks a task attribute of a task to be dispatched currently among the tasks. The task dispatch server chooses to dispatch the task to be dispatched currently to the first computing resource pool or to the second computing resource pool for execution according to the task attribute.
Type: Grant
Filed: September 8, 2021
Date of Patent: October 1, 2024
Assignee: Shanghai Biren Technology Co., Ltd
Inventor: Xin Wang
-
Patent number: 12105607
Abstract: Techniques are described for a data recovery validation test. In examples, a processor receives a command to be included in the validation test that is configured to validate performance of an activity by a server prior to a failure to perform the activity by the server. The processor stores the validation test including the command on a memory device, and prior to the failure of the activity by the server, executes the validation test including the command responsive to an input. The processor receives results of the validation test corresponding to the command and indicating whether the server performed the activity in accordance with a standard for the activity during the validation test. The processor provides the results of the validation test in a user interface.
Type: Grant
Filed: November 30, 2022
Date of Patent: October 1, 2024
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Victoria Michelle Passmore, Cesar Bryan Acosta, Christopher Chickoree, Mason Davenport, Ashish Desai, Sudha Kalyanasundaram, Christopher R. Lay, Emre Ozgener, Steven Stiles, Andrew Warner
-
Patent number: 12099863
Abstract: Aspects include providing isolation between a plurality of containers in a pod that are each executing on a different virtual machine (VM) on a host computer. Providing the isolation includes converting a data packet into a serial format for communicating with the host computer. The converted data packet is sent to a router executing on the host computer. The router determines a destination container in the plurality of containers based at least in part on content of the converted data packet and routes the converted data packet to the destination container.
Type: Grant
Filed: June 21, 2021
Date of Patent: September 24, 2024
Assignee: International Business Machines Corporation
Inventors: Qi Feng Huo, Wen Yi Gao, Si Bo Niu, Sen Wang
-
Patent number: 12099453
Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
Type: Grant
Filed: March 30, 2022
Date of Patent: September 24, 2024
Assignee: NVIDIA Corporation
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
-
Patent number: 12099869
Abstract: A scheduler, a method of operating the scheduler, and an electronic device including the scheduler are disclosed. The method of operating the scheduler configured to determine a model to be executed in an accelerator includes receiving one or more requests for execution of a plurality of models to be independently executed in the accelerator, and performing layer-wise scheduling on the models based on an idle time occurring when a candidate layer which is a target for the scheduling in each of the models is executed in the accelerator.
Type: Grant
Filed: March 9, 2021
Date of Patent: September 24, 2024
Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
Inventors: Seung Wook Lee, Younghwan Oh, Jaewook Lee, Sam Son, Yunho Jin, Taejun Ham
-
Patent number: 12099841
Abstract: An embodiment of an apparatus comprises decode circuitry to decode a single instruction, the single instruction to include a field for an identifier of a first source operand, a field for an identifier of a destination operand, and a field for an opcode, the opcode to indicate execution circuitry is to program a user timer, and execution circuitry to execute the decoded instruction according to the opcode to retrieve timer program information from a location indicated by the first source operand, and program a user timer indicated by the destination operand based on the retrieved timer program information. Other embodiments are disclosed and claimed.
Type: Grant
Filed: March 25, 2021
Date of Patent: September 24, 2024
Assignee: Intel Corporation
Inventors: Rajesh Sankaran, Gilbert Neiger, Vedvyas Shanbhogue, David Koufaty
-
Patent number: 12093721
Abstract: Provided are a method for processing data, an electronic device and a storage medium, which relate to the field of deep learning and data processing. The method may include: multiple target operators of a target model are acquired; the multiple target operators are divided into at least one operator group, according to an operation sequence of each of the multiple target operators in the target model, wherein at least one target operator in each of the at least one operator group is operated by the same processor and is operated within the same target operation period; and the at least one operator group is output.
Type: Grant
Filed: September 12, 2022
Date of Patent: September 17, 2024
Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
Inventors: Tianfei Wang, Buhe Han, Zhen Chen, Lei Wang
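The grouping step can be sketched as a single pass over the operators in operation order: consecutive operators that share a processor and an operation period fall into the same group, and a new group starts whenever either changes. The (name, processor, period) tuple format below is an assumption made for the sketch.

```python
def group_operators(ops):
    """Group target operators by (processor, operation period) - sketch only.

    ops: list of (name, processor, period) in the model's operation order.
    Consecutive operators sharing a processor and period form one group.
    """
    groups = []
    for name, proc, period in ops:
        if groups and groups[-1]["processor"] == proc and groups[-1]["period"] == period:
            groups[-1]["ops"].append(name)
        else:
            groups.append({"processor": proc, "period": period, "ops": [name]})
    return groups
```

Emitting groups rather than individual operators lets a runtime launch each group on its processor as one unit for the shared operation period.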
-
Patent number: 12073247
Abstract: A method for scheduling tasks includes receiving input that was acquired using one or more data collection devices, and scheduling one or more input tasks on one or more computing resources of a network, predicting one or more first tasks based in part on the input, assigning one or more placeholder tasks for the one or more predicted first tasks to the one or more computing resources based in part on a topology of the network, receiving one or more updates including an attribute of the one or more first tasks to be executed as input tasks are executed, modifying the one or more placeholder tasks based on the attribute of the one or more first tasks to be executed, and scheduling the one or more first tasks on the one or more computing resources by matching the one or more first tasks to the one or more placeholder tasks.
Type: Grant
Filed: December 5, 2022
Date of Patent: August 27, 2024
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
Inventor: Marvin Decker
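The placeholder mechanism can be sketched as a two-phase scheduler: a resource is reserved for a predicted task kind before the task exists, and the arriving real task is matched to its reservation. The class and its least-loaded placement rule are invented for the sketch; the patent's topology-aware placement is not modeled.

```python
class PlaceholderScheduler:
    """Sketch of placeholder-based scheduling (illustrative names only)."""

    def __init__(self, resources):
        self.resources = resources   # resource name -> current load
        self.placeholders = {}       # predicted task kind -> reserved resource

    def reserve(self, predicted_kind):
        # Reserve the least-loaded resource for a predicted future task.
        res = min(self.resources, key=self.resources.get)
        self.resources[res] += 1
        self.placeholders[predicted_kind] = res
        return res

    def schedule(self, task_kind):
        # Match an arriving task to its placeholder; fall back to least loaded.
        if task_kind in self.placeholders:
            return self.placeholders.pop(task_kind)
        return min(self.resources, key=self.resources.get)
```

Reserving ahead of time means the predicted task does not wait for placement when its attributes finally arrive; the update step in the abstract would modify the stored placeholder before the match.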
-
Patent number: 12066795
Abstract: An input device includes a movable input surface protruding from an electronic device. The input device enables force inputs along three axes relative to the electronic device: first lateral movements, second lateral movements, and axial movements. The input device includes force or displacement sensors which can detect a direction and magnitude of input forces.
Type: Grant
Filed: February 26, 2021
Date of Patent: August 20, 2024
Assignee: Apple Inc.
Inventors: Colin M. Ely, Erik G. de Jong, Steven P. Cardinali
-
Patent number: 12068935
Abstract: There is provided an apparatus comprising: at least one processor; and at least one memory comprising computer code that, when executed by the at least one processor, causes the apparatus to: identify a potential problem in a network comprising at least one network automation function; signal an indication of said potential problem to at least one network automation function of said network and a request for a proposal to address said problem; receive at least one proposal in response to said signalling; determine policy changes for addressing said potential problem in dependence on said at least one proposal; and implement said policy changes.
Type: Grant
Filed: February 19, 2021
Date of Patent: August 20, 2024
Assignee: NOKIA SOLUTIONS AND NETWORKS OY
Inventors: Stephen Mwanje, Darshan Ramesh
-
Patent number: 12061932
Abstract: An apparatus in an illustrative embodiment comprises at least one processing device that includes a processor coupled to a memory. The at least one processing device is configured to establish with a coordination service for one or more distributed applications a participant identifier for a given participant in a multi-leader election algorithm implemented in a distributed computing system comprising multiple compute nodes, the compute nodes corresponding to participants having respective participant identifiers, and to interact with the coordination service in performing an iteration of the multi-leader election algorithm to determine a current assignment of respective ones of the participants as leaders for respective processing tasks of the distributed computing system. In some embodiments, the at least one processing device comprises at least a portion of a particular one of the compute nodes of the distributed computing system, and the coordination service comprises one or more external servers.
Type: Grant
Filed: December 27, 2021
Date of Patent: August 13, 2024
Assignee: Dell Products L.P.
Inventors: Pan Xiao, Xuhui Yang
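A common way to assign multiple leaders to multiple tasks deterministically is rendezvous (highest-random-weight) hashing: every participant, given the same membership view from the coordination service, computes the same task-to-leader mapping without extra messaging. This is a generic technique offered as a sketch, not the patent's specific algorithm.

```python
import hashlib

def elect_leaders(participants, tasks):
    """Deterministic multi-leader assignment sketch (rendezvous hashing).

    Each task is led by the participant whose hashed (participant, task)
    pair scores highest; identical inputs always yield identical leaders.
    """
    def score(pid, task):
        h = hashlib.sha256(f"{pid}:{task}".encode()).hexdigest()
        return int(h, 16)

    return {task: max(participants, key=lambda p: score(p, task))
            for task in tasks}
```

Because the mapping depends only on the membership list, removing one participant reassigns only that participant's tasks, which keeps leadership churn low across election iterations.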
-
Patent number: 12061550
Abstract: An apparatus is described. The apparatus includes a mass storage device processor that is to behave as an additional general purpose processing core of a computing system that a mass storage device having the mass storage device processor is to be coupled to, wherein, the mass storage device processor is to execute out of a component of main memory within the mass storage device.
Type: Grant
Filed: March 24, 2020
Date of Patent: August 13, 2024
Assignee: Intel Corporation
Inventors: Frank T. Hady, Sanjeev N. Trika
-
Patent number: 12045659
Abstract: An algorithm for efficiently maintaining a globally uniform-in-time execution schedule for a dynamically changing set of periodic workload instances is provided. At a high level, the algorithm operates by gradually adjusting execution start times in the schedule until they converge to a globally uniform state. In certain embodiments, the algorithm exhibits the property of "quick convergence," which means that regardless of the number of periodic workload instances added or removed, the execution start times for all workload instances in the schedule will typically converge to a globally uniform state within a single cycle length from the time of the addition/removal event(s) (subject to a tunable "aggressiveness" parameter).
Type: Grant
Filed: July 12, 2021
Date of Patent: July 23, 2024
Assignee: VMware LLC
Inventors: Danail Metodiev Grigorov, Nikolay Kolev Georgiev
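The gradual adjustment can be sketched as a relaxation pass: each start offset moves a tunable fraction (the "aggressiveness") of the way toward the midpoint of its circular neighbors, which drives the schedule toward even spacing. The specific update rule below is an assumption for illustration, not the patented algorithm.

```python
def relax_offsets(offsets, cycle, aggressiveness=0.5):
    """One relaxation pass toward a uniform-in-time schedule (sketch).

    offsets: sorted start offsets within one cycle. Each offset moves a
    fraction (aggressiveness) of the way toward the midpoint of its
    circular neighbours; repeated passes converge to even spacing.
    """
    n = len(offsets)
    new = []
    for i, x in enumerate(offsets):
        prev = offsets[(i - 1) % n] - (cycle if i == 0 else 0)
        nxt = offsets[(i + 1) % n] + (cycle if i == n - 1 else 0)
        mid = (prev + nxt) / 2
        new.append((x + aggressiveness * (mid - x)) % cycle)
    return sorted(new)
```

A higher aggressiveness moves offsets faster but can overshoot; the tunable parameter in the abstract trades convergence speed against schedule stability.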
-
Patent number: 12039366
Abstract: This application discloses a task processing method, a system, a device, and a storage medium. The method includes: receiving a task published by a task publisher device and an electronic resource allocated for execution of the task; transmitting the task and the electronic resource to a blockchain network, to enable the blockchain network to construct a smart contract corresponding to the task and the electronic resource; transmitting the task to a task invitee device, to enable the task invitee device to execute the task; receiving an execution result corresponding to the task transmitted by a task invitee device after the task invitee device executes the task; and transmitting the execution result to the blockchain network, to enable the blockchain network to perform verification on the execution result according to the smart contract, and transfer the electronic resource to the task invitee device according to a verification result.
Type: Grant
Filed: January 20, 2021
Date of Patent: July 16, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Jingyu Yang, Maogang Ma, Guize Liu, Jinsong Ma
-
Patent number: 12032883
Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes accessing a plurality of target tasks for a computing system, the computing system comprising a plurality of resources, wherein the plurality of resources comprises a first server and a second server, accessing a plurality of configurations of the computing system, wherein each of the plurality of configurations identifies one or more resources of the plurality of resources to perform the respective target task of the plurality of target tasks, and performing, for each of the plurality of configurations, a simulation to determine a plurality of performance metrics, wherein each of the plurality of performance metrics predicts performance of at least one of the plurality of resources executing the plurality of target tasks on the computing system.
Type: Grant
Filed: June 13, 2023
Date of Patent: July 9, 2024
Assignee: Parallels International GmbH
Inventors: Vasileios Koutsomanis, Igor Marnat, Nikolay Dobrovolskiy
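The simulate-then-compare loop can be sketched as follows: each candidate configuration is run through a simulation several times, and the configuration with the best average predicted metric wins. The function names, the toy latency model, and the lower-is-better convention are all assumptions made for the sketch.

```python
import random

def pick_best_config(configs, simulate, trials=50, seed=0):
    """Evaluate candidate resource configurations by simulation (sketch).

    configs: list of configuration dicts; simulate(config, rng) returns a
    predicted performance metric (lower is better, e.g. mean latency).
    The config with the best average simulated metric is returned.
    """
    rng = random.Random(seed)

    def avg_metric(cfg):
        return sum(simulate(cfg, rng) for _ in range(trials)) / trials

    return min(configs, key=avg_metric)

# Toy metric: predicted latency falls with server count, plus noise.
def toy_latency(cfg, rng):
    return 100.0 / cfg["servers"] + rng.random()

best = pick_best_config([{"servers": 1}, {"servers": 2}], toy_latency)
```

Averaging over repeated noisy trials is what makes the per-configuration metric a prediction rather than a single lucky (or unlucky) sample.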
-
Patent number: 12026383
Abstract: An aspect of the invention relates to a method of managing jobs in an information system (SI) on which a plurality of jobs run, the information system (SI) comprising a plurality of computer nodes (NDi) and at least a first storage tier (NS1) associated with a first performance tier and a second storage tier (NS2) associated with a second performance tier lower than the first performance tier, each job being associated with a priority level determined from a set of parameters comprising the node or nodes (NDi) on which the job is to be executed, the method comprising a step of scheduling the jobs as a function of the priority level associated with each job; the set of parameters used for determining the priority level also comprising a first parameter relating to the storage tier to be used for the data necessary for the execution of the job in question and a second parameter relating to the position of the data necessary for the execution of the job (TAi) in question.
Type: Grant
Filed: June 30, 2022
Date of Patent: July 2, 2024
Assignee: BULL SAS
Inventor: Jean-Olivier Gerphagnon
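A priority level combining the abstract's parameters (target node, storage tier, data position) might look like the sketch below. All weights and the scoring formula are invented for illustration; the patent does not disclose specific values.

```python
def job_priority(node_weight, data_tier, data_local, tier_weights=None):
    """Illustrative priority score from node, storage-tier and data-position
    parameters. All weights here are invented, not taken from the patent.

    node_weight: base priority from the target compute node(s).
    data_tier: storage tier holding the job's data (1 = fastest).
    data_local: whether the data is already positioned where the job runs.
    """
    tier_weights = tier_weights or {1: 2.0, 2: 1.0}
    score = node_weight * tier_weights.get(data_tier, 0.5)
    if data_local:
        score *= 1.5   # no staging from a slower tier needed: boost the job
    return score

jobs = {
    "jobA": job_priority(10, data_tier=1, data_local=True),
    "jobB": job_priority(10, data_tier=2, data_local=False),
}
order = sorted(jobs, key=jobs.get, reverse=True)
```

The scheduler then simply runs jobs in descending score order, so a job whose data already sits on the fast tier jumps ahead of one that would first need a staging transfer.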
-
Patent number: 12026518
Abstract: An apparatus for parallel processing includes a memory and one or more processors, at least one of which operates a single instruction, multiple data (SIMD) model, and each of which are coupled to the memory. The processors are configured to process data samples associated with one or multiple chains or graphs of data processors, which chains or graphs describe processing steps to be executed repeatedly on data samples that are a subset of temporally ordered samples. The processors are additionally configured to dynamically schedule one or multiple sets of the samples associated with the one or multiple chains or graphs of data processors to reduce latency of processing of the data samples associated with a single chain or graph of data processors or different chains and graphs of data processors.
Type: Grant
Filed: September 13, 2022
Date of Patent: July 2, 2024
Assignee: BRAINGINES SA
Inventors: Markus Steinberger, Alexander Talashov, Aleksandrs Procopcuks, Vasilii Sumatokhin
-
Patent number: 12028269
Abstract: There are provided a method and an apparatus for cloud management, which selects optimal resources based on graphic processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, a GPU bottleneck phenomenon occurring in an application of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance degrading factors.
Type: Grant
Filed: November 9, 2022
Date of Patent: July 2, 2024
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
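Partitioned (non-1:1) allocation with a threshold can be sketched as a placement rule: a container's GPU memory request goes to the GPU with the most free memory, but only if the resulting utilisation stays under a bottleneck threshold. The data model and the 0.8 default threshold are assumptions for the sketch.

```python
def allocate_gpu_partitions(gpus, request_mem, threshold=0.8):
    """Sketch of partitioned GPU allocation under a utilisation threshold.

    gpus: dict gpu_id -> {"total": mem, "used": mem}. The request is placed
    on the GPU with the most free memory, provided the resulting utilisation
    stays at or below the bottleneck threshold; otherwise None is returned.
    """
    def free(g):
        return gpus[g]["total"] - gpus[g]["used"]

    best = max(gpus, key=free)
    if (gpus[best]["used"] + request_mem) / gpus[best]["total"] <= threshold:
        gpus[best]["used"] += request_mem
        return best
    return None
```

Letting several containers share one GPU this way raises utilisation compared with exclusive 1:1 allocation, while the threshold keeps any single GPU from becoming the bottleneck the abstract describes.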
-
Patent number: 12028210
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: marking of a request to define a marked request that includes associated metadata, wherein the metadata specifies action for performing by a resource interface associated to a production environment resource of a production environment, wherein the resource interface is configured for emulating functionality of the production environment resource; and sending the marked request to the resource interface for performance of the action specified by the metadata.
Type: Grant
Filed: November 20, 2019
Date of Patent: July 2, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Samir Nasser, Kyle Brown
-
Patent number: 12020188
Abstract: A task management platform generates an interactive display of tasks based on multi-team activity data of different geographic locations across a plurality of distributed guided user interfaces (GUIs). Additionally, the task management platform uses a distributed machine-learning based system to determine a suggested task item for a remote team based on multi-team activity data of different geographic locations.
Type: Grant
Filed: December 5, 2022
Date of Patent: June 25, 2024
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Michael Shawn Jacob, Manali Desai, Leah Garcia, Oscar Allan Arulfo
-
Patent number: 12019760
Abstract: An information handling system includes a first memory having a trusted memory region, wherein the trusted memory region is an area of execution that is protected from processes running in the information handling system outside the trusted memory region. A secure cryptographic module may receive a request to create the trusted memory region from a dependent application, and create a mapping of the trusted memory region along with an enhanced page cache address range mapped to a non-uniform memory access (NUMA) node. The module may also detect a NUMA migration event of the dependent application, identify the trusted memory region corresponding to the NUMA migration event, and migrate the trusted memory region from the NUMA node to another NUMA node.
Type: Grant
Filed: February 25, 2021
Date of Patent: June 25, 2024
Assignee: Dell Products L.P.
Inventors: Vinod Parackal Saby, Krishnaprasad Koladi, Gobind Vijayakumar
-
Patent number: 12008399
Abstract: A method, system and computer program product for optimizing scheduling of batch jobs are disclosed. The method may include obtaining, by one or more processors, a set of batch jobs, connection relationships among batch jobs in the set of batch jobs, and a respective execution time of each batch job in the set of batch jobs. The method may also include generating, by the one or more processors, a directed weighted graph for the set of batch jobs, wherein in the directed weighted graph, a node represents a batch job, a directed edge between two nodes represents a directed connection between two corresponding batch jobs, a weight of a node represents the execution time of the batch job corresponding to the node. The method may also include obtaining, by one or more processors, information of consumption of same resource(s) among the batch jobs in the set of batch jobs.
Type: Grant
Filed: December 15, 2020
Date of Patent: June 11, 2024
Assignee: International Business Machines Corporation
Inventors: Xi Bo Zhu, Shi Yu Wang, Xiao Xiao Pei, Qin Li, Lu Zhao
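Once batch jobs form a directed weighted graph (node weight = execution time, edge = dependency), a natural scheduling quantity is the longest weighted path: no schedule can finish faster than that chain. The sketch below computes it with a memoised traversal; it assumes a DAG, as the abstract's connection relationships imply, and is illustrative rather than the patented optimization.

```python
from collections import defaultdict

def critical_path_time(durations, edges):
    """Longest-path time through a directed weighted batch-job graph.

    durations: job -> execution time (node weight); edges: (a, b) pairs
    meaning job b runs after job a. A DAG is assumed; the memoised DFS
    gives the minimum makespan if every job starts as early as possible.
    """
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)

    memo = {}
    def finish(job):
        # Longest chain of execution time starting at this job.
        if job not in memo:
            memo[job] = durations[job] + max(
                (finish(n) for n in succ[job]), default=0.0)
        return memo[job]

    return max(finish(j) for j in durations)
```

Jobs off the critical path have slack, which is where the resource-consumption information mentioned at the end of the abstract would come in: slack jobs can be shifted to avoid contending for the same resource.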