Process Scheduling Patents (Class 718/102)
-
Patent number: 12135731
Abstract: In some implementations, a monitoring device may obtain information related to one or more extract, transform, and load (ETL) jobs scheduled in an ETL system. The monitoring device may generate ETL job metrics that include status information, timing information, and data volume information associated with one or more constituent tasks associated with the one or more ETL jobs, wherein the ETL job metrics include metrics related to extracting data records from a data source, transforming the data records into a target format, and/or loading the data records in the target format into a data sink. The monitoring device may enable capabilities to create or interact with one or more dashboards to visualize the ETL job metrics via a workspace accessible to one or more client devices. The monitoring device may invoke a messaging service to publish one or more notifications associated with the ETL job metrics via the workspace.
Type: Grant
Filed: January 13, 2021
Date of Patent: November 5, 2024
Assignee: Capital One Services, LLC
Inventors: Alex Makumbi, Andrew Stevens
-
Patent number: 12135984
Abstract: The exemplary embodiments may provide an application management method and apparatus, and a device, to unfreeze some processes in an application. The method includes: obtaining an unfreezing event, where the unfreezing event includes process information, and the unfreezing event is used to trigger an unfreezing operation to be performed on some processes in a frozen application; and performing an unfreezing operation on those processes based on the process information.
Type: Grant
Filed: August 2, 2021
Date of Patent: November 5, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Huifeng Hu, Xiaojun Duan
-
Patent number: 12118057
Abstract: A computing device, including a hardware accelerator configured to receive a first matrix and receive a second matrix. The hardware accelerator may, for a plurality of partial matrix regions, in a first iteration, read a first submatrix of the first matrix and a second submatrix of the second matrix into a front-end processing area. The hardware accelerator may multiply the first submatrix by the second submatrix to compute a first intermediate partial matrix. In each of one or more subsequent iterations, the hardware accelerator may read an additional submatrix into the front-end processing area. The hardware accelerator may compute an additional intermediate partial matrix as a product of the additional submatrix and a submatrix reused from an immediately prior iteration. The hardware accelerator may compute each partial matrix as a sum of two or more of the intermediate partial matrices and may output the plurality of partial matrices.
Type: Grant
Filed: January 14, 2021
Date of Patent: October 15, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Derek Edward Davout Gladding, Nitin Naresh Garegrat, Timothy Hume Heil, Balamurugan Kulanthivelu Veluchamy
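A rough host-side model of the reuse schedule this abstract describes might look like the following. The patent targets a hardware accelerator front end; the block size, the serpentine traversal order, and the load counter below are assumptions of this sketch, not claims of the patent:

```python
# Host-side model of the submatrix-reuse schedule (hypothetical: block size,
# serpentine traversal, and load counting are assumptions of this sketch).
def block_matmul_with_reuse(A, B, n, bs):
    """Multiply two n x n matrices in bs x bs blocks, ordering block products
    so that, within each k-plane, every iteration after the first reads only
    ONE new submatrix and reuses the other from the immediately prior one."""
    nb = n // bs
    sub = lambda M, r, c: [row[c * bs:(c + 1) * bs] for row in M[r * bs:(r + 1) * bs]]
    C = [[0.0] * n for _ in range(n)]
    loads, prev = 0, (None, None)           # submatrix indices currently held
    for k in range(nb):
        for i in range(nb):
            js = range(nb) if i % 2 == 0 else range(nb - 1, -1, -1)  # serpentine
            for j in js:
                if prev[0] != (i, k):
                    loads += 1              # read a new A submatrix
                if prev[1] != (k, j):
                    loads += 1              # read a new B submatrix
                prev = ((i, k), (k, j))
                a, b = sub(A, i, k), sub(B, k, j)
                for r in range(bs):         # accumulate the intermediate
                    for c in range(bs):     # partial matrix into C
                        C[i * bs + r][j * bs + c] += sum(
                            a[r][t] * b[t][c] for t in range(bs))
    return C, loads
```

Ordering the block products so that consecutive products share an operand means that, within each k-plane, every product after the first loads exactly one new submatrix, which is the access pattern the abstract claims; only at plane boundaries are both operands reloaded.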
-
Patent number: 12117882
Abstract: A system having: a processor, wherein the processor is configured for executing a process of reducing power consumption that includes executing a first task over a first plurality of timeslots and a second task over a second plurality of timeslots, and wherein the processor is configured to: execute a real-time operating system (RTOS) process; determine that the first task is complete during a first timeslot of the first plurality of timeslots; and enter a low power mode for a remainder of the first timeslot upon determining that there is enough time to enter a low power mode during the first timeslot and a next timeslot is allocated to the first task, otherwise perform a dead-wait for the remainder of the first timeslot.
Type: Grant
Filed: March 15, 2023
Date of Patent: October 15, 2024
Assignee: HAMILTON SUNDSTRAND CORPORATION
Inventor: Balaji Krishnakumar
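The slot-end decision the abstract describes reduces to a small pure function; the parameter names (such as `low_power_entry_exit_cost`) and the comparison rule here are illustrative assumptions:

```python
# Hypothetical model of the RTOS timeslot decision described in the abstract.
def end_of_task_action(time_left_in_slot, low_power_entry_exit_cost,
                       next_slot_owner, task):
    """Return 'low_power' only when the remaining slot time can absorb the
    cost of entering/leaving low power AND the next timeslot still belongs
    to the completed task; otherwise dead-wait out the slot."""
    enough_time = time_left_in_slot > low_power_entry_exit_cost
    if enough_time and next_slot_owner == task:
        return "low_power"
    return "dead_wait"
```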
-
Patent number: 12111674
Abstract: An operating method of a system-on-chip (SoC) which includes a processor including a first core and a dynamic voltage and frequency scaling (DVFS) module and a clock management unit (CMU) for supplying an operating clock to the first core, the operating method including: obtaining a required performance of the first core; finding available frequencies meeting the required performance; obtaining information for calculating energy consumption for each of the available frequencies; calculating the energy consumption for each of the available frequencies, based on the information; determining a frequency, which causes minimum energy consumption, from among the available frequencies as an optimal frequency; and adjusting an operating frequency to be supplied to the first core to the optimal frequency.
Type: Grant
Filed: April 14, 2022
Date of Patent: October 8, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Choonghoon Park, Jong-Lae Park, Bumgyu Park, Youngtae Lee, Donghee Han
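The frequency-selection step can be sketched against a hypothetical per-frequency table of (performance, energy) pairs; a real SoC would obtain these figures from its DVFS module rather than a literal dict:

```python
# Sketch of "find available frequencies, then pick minimum energy".
# The table shape {freq_hz: (performance, energy)} is an assumption.
def pick_optimal_frequency(required_perf, freq_table):
    """Among frequencies meeting the required performance, return the one
    with minimum modeled energy consumption."""
    candidates = {f: energy for f, (perf, energy) in freq_table.items()
                  if perf >= required_perf}
    if not candidates:
        raise ValueError("no available frequency meets the required performance")
    return min(candidates, key=candidates.get)
```

Note that the energy-optimal choice is not always the lowest candidate frequency: a faster frequency can finish work sooner and race to idle, which is why the method computes energy per frequency instead of simply picking the minimum that meets performance.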
-
Patent number: 12112156
Abstract: A software update system according to one embodiment of the present disclosure is configured to update software used in a vehicle based on update data of the software, the update data being transmitted to the vehicle from an external device that is communicably connected to the vehicle. The software update system includes: a software update unit configured to update the software based on the update data; a vehicle data acquisition unit configured to acquire respective pieces of second vehicle data about states of the vehicle before and after the software update by the update unit; and an effect evaluation unit configured to evaluate an effect of the software update based on the respective pieces of second vehicle data before and after the software update.
Type: Grant
Filed: May 10, 2022
Date of Patent: October 8, 2024
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Masafumi Yamamoto, Atsushi Tabata, Koichi Okuda, Yuki Makino
-
Patent number: 12106152
Abstract: A cloud service system and an operation method thereof are provided. The cloud service system includes a first computing resource pool, a second computing resource pool, and a task dispatch server. Each computing platform in the first computing resource pool does not have a co-processor. Each computing platform in the second computing resource pool has at least one co-processor. The task dispatch server is configured to receive a plurality of tasks. The task dispatch server checks a task attribute of a task to be dispatched currently among the tasks. The task dispatch server chooses to dispatch the task to be dispatched currently to the first computing resource pool or to the second computing resource pool for execution according to the task attribute.
Type: Grant
Filed: September 8, 2021
Date of Patent: October 1, 2024
Assignee: Shanghai Biren Technology Co., Ltd
Inventor: Xin Wang
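The attribute-based dispatch could be sketched as follows; the `needs_coprocessor` attribute is a hypothetical stand-in for whatever task attribute the dispatch server actually inspects:

```python
# Toy dispatcher: route each task to one of the two resource pools by
# attribute. Field names are assumptions of this sketch.
def dispatch_tasks(tasks):
    """Split incoming tasks between the co-processor pool and the
    CPU-only pool according to a per-task attribute."""
    pools = {"general_pool": [], "coprocessor_pool": []}
    for task in tasks:
        pool = "coprocessor_pool" if task.get("needs_coprocessor") else "general_pool"
        pools[pool].append(task["name"])
    return pools
```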
-
Patent number: 12105607
Abstract: Techniques are described for a data recovery validation test. In examples, a processor receives a command to be included in the validation test that is configured to validate performance of an activity by a server prior to a failure to perform the activity by the server. The processor stores the validation test including the command on a memory device, and prior to the failure of the activity by the server, executes the validation test including the command responsive to an input. The processor receives results of the validation test corresponding to the command and indicating whether the server performed the activity in accordance with a standard for the activity during the validation test. The processor provides the results of the validation test in a user interface.
Type: Grant
Filed: November 30, 2022
Date of Patent: October 1, 2024
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Victoria Michelle Passmore, Cesar Bryan Acosta, Christopher Chickoree, Mason Davenport, Ashish Desai, Sudha Kalyanasundaram, Christopher R. Lay, Emre Ozgener, Steven Stiles, Andrew Warner
-
Patent number: 12099453
Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
Type: Grant
Filed: March 30, 2022
Date of Patent: September 24, 2024
Assignee: NVIDIA Corporation
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
-
Patent number: 12099863
Abstract: Aspects include providing isolation between a plurality of containers in a pod that are each executing on a different virtual machine (VM) on a host computer. Providing the isolation includes converting a data packet into a serial format for communicating with the host computer. The converted data packet is sent to a router executing on the host computer. The router determines a destination container in the plurality of containers based at least in part on content of the converted data packet and routes the converted data packet to the destination container.
Type: Grant
Filed: June 21, 2021
Date of Patent: September 24, 2024
Assignee: International Business Machines Corporation
Inventors: Qi Feng Huo, Wen Yi Gao, Si Bo Niu, Sen Wang
-
Patent number: 12099869
Abstract: A scheduler, a method of operating the scheduler, and an electronic device including the scheduler are disclosed. The method of operating the scheduler configured to determine a model to be executed in an accelerator includes receiving one or more requests for execution of a plurality of models to be independently executed in the accelerator, and performing layer-wise scheduling on the models based on an idle time occurring when a candidate layer which is a target for the scheduling in each of the models is executed in the accelerator.
Type: Grant
Filed: March 9, 2021
Date of Patent: September 24, 2024
Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
Inventors: Seung Wook Lee, Younghwan Oh, Jaewook Lee, Sam Son, Yunho Jin, Taejun Ham
-
Patent number: 12099841
Abstract: An embodiment of an apparatus comprises decode circuitry to decode a single instruction, the single instruction to include a field for an identifier of a first source operand, a field for an identifier of a destination operand, and a field for an opcode, the opcode to indicate execution circuitry is to program a user timer, and execution circuitry to execute the decoded instruction according to the opcode to retrieve timer program information from a location indicated by the first source operand, and program a user timer indicated by the destination operand based on the retrieved timer program information. Other embodiments are disclosed and claimed.
Type: Grant
Filed: March 25, 2021
Date of Patent: September 24, 2024
Assignee: Intel Corporation
Inventors: Rajesh Sankaran, Gilbert Neiger, Vedvyas Shanbhogue, David Koufaty
-
Patent number: 12093721
Abstract: Provided are a method for processing data, an electronic device and a storage medium, which relate to the field of deep learning and data processing. The method may include: multiple target operators of a target model are acquired; the multiple target operators are divided into at least one operator group, according to an operation sequence of each of the multiple target operators in the target model, wherein at least one target operator in each of the at least one operator group is operated by the same processor and is operated within the same target operation period; and the at least one operator group is output.
Type: Grant
Filed: September 12, 2022
Date of Patent: September 17, 2024
Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
Inventors: Tianfei Wang, Buhe Han, Zhen Chen, Lei Wang
-
Patent number: 12073247
Abstract: A method for scheduling tasks includes receiving input that was acquired using one or more data collection devices, and scheduling one or more input tasks on one or more computing resources of a network, predicting one or more first tasks based in part on the input, assigning one or more placeholder tasks for the one or more predicted first tasks to the one or more computing resources based in part on a topology of the network, receiving one or more updates including an attribute of the one or more first tasks to be executed as input tasks are executed, modifying the one or more placeholder tasks based on the attribute of the one or more first tasks to be executed, and scheduling the one or more first tasks on the one or more computing resources by matching the one or more first tasks to the one or more placeholder tasks.
Type: Grant
Filed: December 5, 2022
Date of Patent: August 27, 2024
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
Inventor: Marvin Decker
-
Patent number: 12066795
Abstract: An input device includes a movable input surface protruding from an electronic device. The input device enables force inputs along three axes relative to the electronic device: first lateral movements, second lateral movements, and axial movements. The input device includes force or displacement sensors which can detect a direction and magnitude of input forces.
Type: Grant
Filed: February 26, 2021
Date of Patent: August 20, 2024
Assignee: Apple Inc.
Inventors: Colin M. Ely, Erik G. de Jong, Steven P. Cardinali
-
Patent number: 12068935
Abstract: There is provided an apparatus comprising: at least one processor; and at least one memory comprising computer code that, when executed by the at least one processor, causes the apparatus to: identify a potential problem in a network comprising at least one network automation function; signal an indication of said potential problem to at least one network automation function of said network and a request for a proposal to address said problem; receive at least one proposal in response to said signalling; determine policy changes for addressing said potential problem in dependence on said at least one proposal; and implement said policy changes.
Type: Grant
Filed: February 19, 2021
Date of Patent: August 20, 2024
Assignee: NOKIA SOLUTIONS AND NETWORKS OY
Inventors: Stephen Mwanje, Darshan Ramesh
-
Patent number: 12061932
Abstract: An apparatus in an illustrative embodiment comprises at least one processing device that includes a processor coupled to a memory. The at least one processing device is configured to establish with a coordination service for one or more distributed applications a participant identifier for a given participant in a multi-leader election algorithm implemented in a distributed computing system comprising multiple compute nodes, the compute nodes corresponding to participants having respective participant identifiers, and to interact with the coordination service in performing an iteration of the multi-leader election algorithm to determine a current assignment of respective ones of the participants as leaders for respective processing tasks of the distributed computing system. In some embodiments, the at least one processing device comprises at least a portion of a particular one of the compute nodes of the distributed computing system, and the coordination service comprises one or more external servers.
Type: Grant
Filed: December 27, 2021
Date of Patent: August 13, 2024
Assignee: Dell Products L.P.
Inventors: Pan Xiao, Xuhui Yang
-
Patent number: 12061550
Abstract: An apparatus is described. The apparatus includes a mass storage device processor that is to behave as an additional general purpose processing core of a computing system that a mass storage device having the mass storage device processor is to be coupled to, wherein, the mass storage device processor is to execute out of a component of main memory within the mass storage device.
Type: Grant
Filed: March 24, 2020
Date of Patent: August 13, 2024
Assignee: Intel Corporation
Inventors: Frank T. Hady, Sanjeev N. Trika
-
Patent number: 12045659
Abstract: An algorithm for efficiently maintaining a globally uniform-in-time execution schedule for a dynamically changing set of periodic workload instances is provided. At a high level, the algorithm operates by gradually adjusting execution start times in the schedule until they converge to a globally uniform state. In certain embodiments, the algorithm exhibits the property of "quick convergence," which means that regardless of the number of periodic workload instances added or removed, the execution start times for all workload instances in the schedule will typically converge to a globally uniform state within a single cycle length from the time of the addition/removal event(s) (subject to a tunable "aggressiveness" parameter).
Type: Grant
Filed: July 12, 2021
Date of Patent: July 23, 2024
Assignee: VMware LLC
Inventors: Danail Metodiev Grigorov, Nikolay Kolev Georgiev
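One toy way to realize "gradually adjusting execution start times until they converge to a globally uniform state" is a damped relaxation of each start time toward the midpoint of its circular neighbors; this specific update rule is the sketch's own assumption, not the patent's algorithm:

```python
# Toy relaxation toward a globally uniform-in-time schedule. The damped
# neighbor-midpoint update below is this sketch's own rule, not the patent's.
def relax_schedule(starts, cycle, aggressiveness=0.5, rounds=100):
    """Nudge each start time toward the midpoint of its circular neighbors;
    at the fixed point the starts are uniformly spread over the cycle.
    `aggressiveness` in (0, 1] mirrors the tunable parameter in the abstract."""
    starts = sorted(s % cycle for s in starts)
    n = len(starts)
    for _ in range(rounds):
        new = []
        for i, s in enumerate(starts):
            prev_ = starts[(i - 1) % n] - (cycle if i == 0 else 0)
            next_ = starts[(i + 1) % n] + (cycle if i == n - 1 else 0)
            target = (prev_ + next_) / 2.0   # midpoint of circular neighbors
            new.append((s + aggressiveness * (target - s)) % cycle)
        starts = sorted(new)
    return starts
```

Starting from a clumped schedule such as starts at 0, 1, and 2 on a 12-unit cycle, repeated rounds spread the three start times to an even 4-unit spacing.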
-
Patent number: 12039366
Abstract: This application discloses a task processing method, a system, a device, and a storage medium. The method includes: receiving a task published by a task publisher device and an electronic resource allocated for execution of the task; transmitting the task and the electronic resource to a blockchain network, to enable the blockchain network to construct a smart contract corresponding to the task and the electronic resource; transmitting the task to a task invitee device, to enable the task invitee device to execute the task; receiving an execution result corresponding to the task transmitted by a task invitee device after the task invitee device executes the task; and transmitting the execution result to the blockchain network, to enable the blockchain network to perform verification on the execution result according to the smart contract, and transfer the electronic resource to the task invitee device according to a verification result.
Type: Grant
Filed: January 20, 2021
Date of Patent: July 16, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Jingyu Yang, Maogang Ma, Guize Liu, Jinsong Ma
-
Patent number: 12032883
Abstract: The subject matter of this specification can be implemented in, among other things, a method that includes accessing a plurality of target tasks for a computing system, the computing system comprising a plurality of resources, wherein the plurality of resources comprises a first server and a second server, accessing a plurality of configurations of the computing system, wherein each of the plurality of configurations identifies one or more resources of the plurality of resources to perform the respective target task of the plurality of target tasks, and performing, for each of the plurality of configurations, a simulation to determine a plurality of performance metrics, wherein each of the plurality of performance metrics predicts performance of at least one of the plurality of resources executing the plurality of target tasks on the computing system.
Type: Grant
Filed: June 13, 2023
Date of Patent: July 9, 2024
Assignee: Parallels International GmbH
Inventors: Vasileios Koutsomanis, Igor Marnat, Nikolay Dobrovolskiy
-
Patent number: 12026518
Abstract: An apparatus for parallel processing includes a memory and one or more processors, at least one of which operates a single instruction, multiple data (SIMD) model, and each of which are coupled to the memory. The processors are configured to process data samples associated with one or multiple chains or graphs of data processors, which chains or graphs describe processing steps to be executed repeatedly on data samples that are a subset of temporally ordered samples. The processors are additionally configured to dynamically schedule one or multiple sets of the samples associated with the one or multiple chains or graphs of data processors to reduce latency of processing of the data samples associated with a single chain or graph of data processors or different chains and graphs of data processors.
Type: Grant
Filed: September 13, 2022
Date of Patent: July 2, 2024
Assignee: BRAINGINES SA
Inventors: Markus Steinberger, Alexander Talashov, Aleksandrs Procopcuks, Vasilii Sumatokhin
-
Patent number: 12026383
Abstract: An aspect of the invention relates to a method of managing jobs in an information system (SI) on which a plurality of jobs run, the information system (SI) comprising a plurality of computer nodes (NDi) and at least a first storage tier (NS1) associated with a first performance tier and a second storage tier (NS2) associated with a second performance tier lower than the first performance tier, each job being associated with a priority level determined from a set of parameters comprising the node or nodes (NDi) on which the job is to be executed, the method comprising a step of scheduling the jobs as a function of the priority level associated with each job; the set of parameters used for determining the priority level also comprising a first parameter relating to the storage tier to be used for the data necessary for the execution of the job in question and a second parameter relating to the position of the data necessary for the execution of the job (TAi) in question.
Type: Grant
Filed: June 30, 2022
Date of Patent: July 2, 2024
Assignee: BULL SAS
Inventor: Jean-Olivier Gerphagnon
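A hypothetical priority function combining the abstract's parameters (node, storage tier to be used, and current position of the job's data) might look like this; the field names and weights are invented for illustration and are not the patent's formula:

```python
# Illustrative priority model: node weight + storage-tier speed + data
# locality. All field names and weights are assumptions of this sketch.
def job_priority(job, weights=None):
    """Higher score = scheduled earlier. Tier 1 is the faster storage tier."""
    w = weights or {"node": 1.0, "tier": 2.0, "locality": 3.0}
    tier_score = {1: 1.0, 2: 0.5}[job["storage_tier"]]
    locality = 1.0 if job["data_on_tier"] else 0.0   # data already in place?
    return (w["node"] * job["node_weight"]
            + w["tier"] * tier_score
            + w["locality"] * locality)

def schedule(jobs):
    """Order jobs by descending priority, as the abstract's scheduling step."""
    return sorted(jobs, key=job_priority, reverse=True)
```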
-
Patent number: 12028269
Abstract: There are provided a method and an apparatus for cloud management, which selects optimal resources based on graphic processing unit (GPU) resource analysis in a large-scale container platform environment. According to an embodiment, a GPU bottleneck phenomenon occurring in an application of a large-scale container environment may be reduced by processing partitioned allocation of GPU resources, rather than existing 1:1 allocation, through real-time GPU data analysis (application of a threshold) and synthetic analysis of GPU performance degrading factors.
Type: Grant
Filed: November 9, 2022
Date of Patent: July 2, 2024
Assignee: Korea Electronics Technology Institute
Inventors: Jae Hoon An, Young Hwan Kim
-
Patent number: 12028210
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: marking of a request to define a marked request that includes associated metadata, wherein the metadata specifies action for performing by a resource interface associated to a production environment resource of a production environment, wherein the resource interface is configured for emulating functionality of the production environment resource; and sending the marked request to the resource interface for performance of the action specified by the metadata.
Type: Grant
Filed: November 20, 2019
Date of Patent: July 2, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Samir Nasser, Kyle Brown
-
Patent number: 12019760
Abstract: An information handling system includes a first memory having a trusted memory region, wherein the trusted memory region is an area of execution that is protected from processes running in the information handling system outside the trusted memory region. A secure cryptographic module may receive a request to create the trusted memory region from a dependent application, and create a mapping of the trusted memory region along with an enhanced page cache address range mapped to a non-uniform memory access (NUMA) node. The module may also detect a NUMA migration event of the dependent application, identify the trusted memory region corresponding to the NUMA migration event, and migrate the trusted memory region from the NUMA node to another NUMA node.
Type: Grant
Filed: February 25, 2021
Date of Patent: June 25, 2024
Assignee: Dell Products L.P.
Inventors: Vinod Parackal Saby, Krishnaprasad Koladi, Gobind Vijayakumar
-
Patent number: 12020188
Abstract: A task management platform generates an interactive display of tasks based on multi-team activity data of different geographic locations across a plurality of distributed guided user interfaces (GUIs). Additionally, the task management platform uses a distributed machine-learning based system to determine a suggested task item for a remote team based on multi-team activity data of different geographic locations.
Type: Grant
Filed: December 5, 2022
Date of Patent: June 25, 2024
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Michael Shawn Jacob, Manali Desai, Leah Garcia, Oscar Allan Arulfo
-
Patent number: 12008399
Abstract: A method, system and computer program product for optimizing scheduling of batch jobs are disclosed. The method may include obtaining, by one or more processors, a set of batch jobs, connection relationships among batch jobs in the set of batch jobs, and a respective execution time of each batch job in the set of batch jobs. The method may also include generating, by the one or more processors, a directed weighted graph for the set of batch jobs, wherein in the directed weighted graph, a node represents a batch job, a directed edge between two nodes represents a directed connection between two corresponding batch jobs, and a weight of a node represents the execution time of the batch job corresponding to the node. The method may also include obtaining, by one or more processors, information on consumption of the same resource(s) among the batch jobs in the set of batch jobs.
Type: Grant
Filed: December 15, 2020
Date of Patent: June 11, 2024
Assignee: International Business Machines Corporation
Inventors: Xi Bo Zhu, Shi Yu Wang, Xiao Xiao Pei, Qin Li, Lu Zhao
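Given such a directed weighted graph, one natural quantity for schedule optimization is the node-weighted critical path; the sketch below builds the graph and computes it (the patent's own optimization objective may differ, and the input shape here is an assumption):

```python
from collections import defaultdict, deque

def critical_path(jobs, edges):
    """jobs: {name: execution_time}; edges: (upstream, downstream) pairs.
    Returns the longest node-weighted path length through the job graph,
    i.e. the minimum makespan with unlimited parallelism."""
    succ, indeg = defaultdict(list), {j: 0 for j in jobs}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    dist = {j: jobs[j] for j in jobs}       # best finish time ending at j
    q = deque(j for j in jobs if indeg[j] == 0)
    while q:                                # Kahn-style topological sweep
        u = q.popleft()
        for v in succ[u]:
            dist[v] = max(dist[v], dist[u] + jobs[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return max(dist.values())
```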
-
Patent number: 12008400
Abstract: The disclosure relates to a method and a control server for scheduling a computing task including a plurality of tasks to be performed by computation servers.
Type: Grant
Filed: October 24, 2019
Date of Patent: June 11, 2024
Assignees: Samsung Electronics Co., Ltd., Korea University Research And Business Foundation
Inventors: Kisuk Kweon, Haneul Ko, Sangheon Pack, Jaewook Lee, Joonwoo Kim, Yujin Tae
-
Patent number: 12001513
Abstract: A method for implementing a self-optimized video analytics pipeline is presented. The method includes decoding video files into a sequence of frames, extracting features of objects from one or more frames of the sequence of frames of the video files, employing an adaptive resource allocation component based on reinforcement learning (RL) to dynamically balance resource usage of different microservices included in the video analytics pipeline, employing an adaptive microservice parameter tuning component to balance accuracy and performance of a microservice of the different microservices, applying a graph-based filter to minimize redundant computations across the one or more frames of the sequence of frames, and applying a deep-learning-based filter to remove unnecessary computations resulting from mismatches between the different microservices in the video analytics pipeline.
Type: Grant
Filed: November 9, 2021
Date of Patent: June 4, 2024
Assignee: NEC Corporation
Inventors: Giuseppe Coviello, Yi Yang, Srimat Chakradhar
-
Patent number: 11989590
Abstract: Provided are a more efficient resource allocation method and system using a genetic algorithm (GA). The present technology includes a method for allocating resources to a production process including a plurality of processes, the method including allocating priorities to the plurality of processes, selecting processes executable at a first time among the plurality of processes and capable of allocating necessary resources, allocating the necessary resources to the selected processes in descending order of priorities, selecting processes executable at a second time that is later than the first time among the plurality of processes and capable of allocating necessary resources, and allocating the necessary resources to the selected processes in descending order of priorities. As its gene representation, the GA does not store direct allocation information in the genes but instead stores information (a priority) used to determine the order of allocation.
Type: Grant
Filed: March 20, 2020
Date of Patent: May 21, 2024
Assignee: SYNAPSE INNOVATION INC.
Inventors: Kazuya Izumikawa, Shigeo Fujimoto
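The priority-based gene decoding can be sketched as a greedy, time-stepped allocator: at each time, allocate free resources to executable processes in descending priority. The field names, the unit time step, and the single-resource pool are assumptions of this sketch:

```python
# Decode a priority chromosome into a schedule (toy model of the abstract's
# allocation loop; field names and the single resource pool are assumptions).
def decode_schedule(priorities, procs, total_resources):
    """priorities: {proc: gene}; procs: {proc: {"dur", "need", "after"}}.
    Returns ({proc: start_time}, makespan)."""
    limit = sum(p["dur"] for p in procs.values()) + len(procs)
    t, free = 0, total_resources
    done, running, start = set(), [], {}
    order = sorted(procs, key=lambda p: -priorities[p])  # gene = priority
    while len(done) < len(procs):
        if t > limit:
            raise RuntimeError("unschedulable process set")
        for p in list(running):                  # retire finished processes
            if t >= start[p] + procs[p]["dur"]:
                running.remove(p); done.add(p); free += procs[p]["need"]
        for p in order:                          # allocate in priority order
            if p in done or p in running:
                continue
            if all(d in done for d in procs[p]["after"]) and procs[p]["need"] <= free:
                running.append(p); start[p] = t; free -= procs[p]["need"]
        t += 1
    makespan = max(start[p] + procs[p]["dur"] for p in procs)
    return start, makespan
```

Because the genes only set an ordering, any chromosome decodes to a feasible schedule, which is the point of the priority encoding: the GA never produces an invalid direct allocation.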
-
Patent number: 11973845
Abstract: Managing organization disconnections from a shared resource of a communication platform is described. In a sharing approval repository of a communication platform, a shared resource can be associated with a host organization identifier and a non-host organization identifier. In an example, in response to receiving, from a user computing device associated with the host organization identifier or the non-host organization identifier, a resource disconnection request comprising a disconnecting organization identifier and a resource identifier associated with the shared resource, the sharing approval repository can be updated to add a disconnection indication for the resource identifier in association with the disconnecting organization identifier.
Type: Grant
Filed: November 6, 2021
Date of Patent: April 30, 2024
Assignee: Salesforce, Inc.
Inventors: Christopher Sullivan, Myles Grant, Michael Demmer, Shanan Delp, Sri Vasamsetti
-
Patent number: 11972267
Abstract: Tasks are selected for hibernation by recording user preferences for tasks having no penalty for hibernation and sleep; and assigning thresholds for battery power at which tasks are selected for at least one of hibernation and sleep. The assigning of the thresholds for battery power includes considering current usage of hardware resources by a user and battery health per battery segment. A penalty score is determined for tasks based upon the user preferences for tasks having no penalty, and task performance including at least one of frequency of utilization, memory utilization, task dependency characteristics and task memory hierarchy. The penalty score is a value including both the user preference and the task performance. Tasks can then be put into at least one of hibernation mode and sleep mode as dictated by their penalty score at the thresholds for battery power.
Type: Grant
Filed: October 4, 2022
Date of Patent: April 30, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Madhu Pavan Kothapally, Rajesh Kumar Pirati, Bharath Sakthivel, Sarika Sinha
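A toy version of the penalty score and the threshold-gated selection might look like this; the weights, field names, and ordering rule are invented for illustration:

```python
# Hypothetical penalty-score model: user-marked no-penalty tasks score 0,
# otherwise combine usage/memory/dependency signals with assumed weights.
def penalty_score(task, user_no_penalty, weights=(0.4, 0.3, 0.3)):
    """Lower score = cheaper to hibernate."""
    if task["name"] in user_no_penalty:
        return 0.0
    wf, wm, wd = weights
    return (wf * task["use_freq"]           # frequency of utilization
            + wm * task["mem_frac"]         # memory utilization
            + wd * task["dep_count"] / 10)  # task dependency characteristics

def tasks_to_hibernate(tasks, user_no_penalty, battery_pct, threshold_pct):
    """Only act once battery drops to the threshold; then hibernate in
    ascending penalty order (cheapest tasks first)."""
    if battery_pct > threshold_pct:
        return []
    return sorted(tasks, key=lambda t: penalty_score(t, user_no_penalty))
```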
-
Patent number: 11971824
Abstract: Disclosed is a method for enhancing memory utilization and throughput of a computing platform in training a deep neural network (DNN). The critical features of the method include: calculating a memory size for every operation in a computational graph, storing the operations in the computational graph in multiple groups with the operations in each group being executable in parallel and a total memory size less than a memory threshold of a computational device, sequentially selecting a group and updating a prefetched group buffer, and simultaneously executing the group and prefetching data for a group in the prefetched group buffer to the corresponding computational device when the prefetched group buffer is updated. Because of group execution and data prefetch, the memory utilization is optimized and the throughput is significantly increased to eliminate issues of out-of-memory and thrashing.
Type: Grant
Filed: September 9, 2020
Date of Patent: April 30, 2024
Assignee: AETHERAI IP HOLDING LLC
Inventors: Chi-Chung Chen, Wei-Hsiang Yu, Chao-Yuan Yeh
-
Patent number: 11967321
Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
Type: Grant
Filed: November 30, 2021
Date of Patent: April 23, 2024
Assignee: GOOGLE LLC
Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
-
Patent number: 11966756
Abstract: The present disclosure generally relates to dataflow applications. In aspects, a system is disclosed for scheduling execution of feature services within a distributed data flow service (DDFS) framework. Further, the DDFS framework includes a main system-on-chip (SoC), at least one sensing service, and a plurality of feature services. Each of the plurality of feature services includes a common pattern with an algorithm for processing the input data, a feature for encapsulating the algorithm into a generic wrapper rendering the algorithm compatible with other algorithms, a feature interface for encapsulating a feature output into a generic interface allowing generic communication with other feature services, and a configuration file including a scheduling policy to execute the feature services. For each of the plurality of feature services, processor(s) schedule the execution of a given feature service using the scheduling policy and execute the given feature service on the standard and/or accelerator cores.
Type: Grant
Filed: July 7, 2022
Date of Patent: April 23, 2024
Assignee: Aptiv Technologies AG
Inventors: Vinod Aluvila, Miguel Angel Aguilar
-
Patent number: 11968098
Abstract: A method of initiating a KPI backfill operation for a cellular network based on detecting a network data anomaly, where the method includes receiving, by an anomaly detection and backfill engine (ADBE) executed by a computing device, a data quality metric that is based on a KPI of the cellular network; detecting, by the ADBE, the network data anomaly based on the data quality metric being more than a threshold amount different than a predicted value for the data quality metric, where the network data anomaly indicates that at least a portion of a data stream from which the KPI is calculated was unavailable for a previous iteration of the KPI; and providing, by the ADBE and based on detecting the network data anomaly, a backfill command to a backfill processing pipeline to perform the backfill operation by reaggregating the KPI when the portion of the data stream becomes available.
Type: Grant
Filed: March 31, 2023
Date of Patent: April 23, 2024
Assignee: T-Mobile Innovations LLC
Inventors: Vikas Ranjan, Raymond Wu
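The detection logic in this abstract reduces to a simple threshold comparison that triggers the backfill command. A minimal sketch, assuming invented names (`detect_anomaly`, `maybe_backfill`) and a callback standing in for the backfill processing pipeline:

```python
def detect_anomaly(metric, predicted, threshold):
    """Flag a network data anomaly when the observed data quality
    metric deviates from its predicted value by more than threshold."""
    return abs(metric - predicted) > threshold

def maybe_backfill(metric, predicted, threshold, backfill_command):
    """Issue a backfill command (here, an arbitrary callback) only
    when an anomaly is detected."""
    if detect_anomaly(metric, predicted, threshold):
        backfill_command()  # would reaggregate the KPI once data is available
        return True
    return False
```

In the patented system the command goes to a backfill processing pipeline; here the callback is a placeholder for that step.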
-
Patent number: 11966789
Abstract: Systems and methods for optimal load distribution and data processing of a plurality of files in anti-malware solutions are provided herein. In some embodiments, the system includes: a plurality of node processors; and a control processor programmed to: receive a plurality of files used for malware analysis and training of anti-malware ML models; separate the plurality of files into a plurality of subsets of files based on byte size of each of the files, such that processing of each subset of files produces similar workloads amongst all available node processors; distribute the plurality of subsets of files amongst all available node processors such that each node processor processes its respective subset of files in parallel and within a similar timeframe as the other node processors; and receive a report of performance and/or anti-malware processing results of the subset of files from each node processor.
Type: Grant
Filed: April 27, 2022
Date of Patent: April 23, 2024
Assignee: UAB 360 IT
Inventor: Mantas Briliauskas
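One common way to split files into byte-size-balanced subsets, as the abstract describes, is largest-first assignment onto the currently least-loaded node. This is an illustrative sketch only; the patent does not specify the partitioning algorithm, and `balance_files` is an invented name.

```python
import heapq

def balance_files(file_sizes, n_nodes):
    """Partition file sizes across n_nodes so that per-node total
    byte size (a proxy for workload) is roughly equal: each file,
    largest first, goes to the least-loaded node so far."""
    # Min-heap of (current load, node index, assigned sizes).
    heap = [(0, i, []) for i in range(n_nodes)]
    heapq.heapify(heap)
    for size in sorted(file_sizes, reverse=True):
        load, i, files = heapq.heappop(heap)
        files.append(size)
        heapq.heappush(heap, (load + size, i, files))
    return [files for _, _, files in sorted(heap, key=lambda t: t[1])]

print(balance_files([100, 90, 50, 40, 20], 2))
```

Each node then processes its subset in parallel, finishing in a similar timeframe because the byte totals are close.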
-
Patent number: 11966619
Abstract: An apparatus for executing a software program, comprising at least one hardware processor configured for: identifying in a plurality of computer instructions at least one remote memory access instruction and a following instruction following the at least one remote memory access instruction; executing after the at least one remote memory access instruction a sequence of other instructions, where the sequence of other instructions comprises a return instruction to execute the following instruction; and executing the following instruction; wherein executing the sequence of other instructions comprises executing an updated plurality of computer instructions produced by at least one of: inserting into the plurality of computer instructions the sequence of other instructions or at least one flow-control instruction to execute the sequence of other instructions; and replacing the at least one remote memory access instruction with at least one non-blocking memory access instruction.
Type: Grant
Filed: September 17, 2021
Date of Patent: April 23, 2024
Assignee: Next Silicon Ltd
Inventors: Elad Raz, Yaron Dinkin
-
Patent number: 11968248
Abstract: Methods are provided. A method includes announcing to a network meta information describing each of a plurality of distributed data sources. The method further includes propagating the meta information amongst routing elements in the network. The method also includes inserting into the network a description of distributed datasets that match a set of requirements of an analytics task. The method additionally includes delivering, by the routing elements, a copy of the analytics task to locations of respective ones of the plurality of distributed data sources that include the distributed datasets that match the set of requirements of the analytics task.
Type: Grant
Filed: October 19, 2022
Date of Patent: April 23, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bong Jun Ko, Theodoros Salonidis, Rahul Urgaonkar, Dinesh C. Verma
-
Patent number: 11960940
Abstract: A FaaS system comprises a plurality of execution nodes. A software package is received in the system, the software package comprising a function that is to be executed in the FaaS system. Data location information related to data that the function is going to access during execution is obtained. Based on the data location information, a determination is then made of an execution node in which the function is to be executed. The function is loaded into the determined execution node and executed in the determined execution node.
Type: Grant
Filed: May 29, 2018
Date of Patent: April 16, 2024
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Zoltán Turányi, Dániel Géhberger
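The node-selection step here is a data-locality placement decision. A minimal sketch under stated assumptions: the patent does not specify the selection criterion, so this example simply picks the node that locally holds the largest share (by size) of the data the function will access; all names are illustrative.

```python
def pick_execution_node(data_sizes, node_contents):
    """Choose the execution node holding the most of the data the
    function will access, so the function runs close to its data.

    data_sizes: dict mapping data item -> byte size the function reads
    node_contents: dict mapping node name -> set of data items it stores
    """
    def local_bytes(node):
        return sum(size for item, size in data_sizes.items()
                   if item in node_contents[node])
    return max(node_contents, key=local_bytes)

nodes = {"node-1": {"a"}, "node-2": {"a", "b"}}
print(pick_execution_node({"a": 10, "b": 90}, nodes))
```

The function is then loaded into and executed on the returned node.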
-
Patent number: 11962659
Abstract: Metrics that characterize one or more computing devices are received. A time value associated with a performance of the one or more computing devices based on the received metrics is determined. A first scheduling parameter based on the time value is determined, wherein the first scheduling parameter is associated with a first discovery process that is associated with at least a portion of the one or more computing devices. The first discovery process is executed according to the first scheduling parameter.
Type: Grant
Filed: July 17, 2023
Date of Patent: April 16, 2024
Assignee: ServiceNow, Inc.
Inventors: Steven W. Francis, Sai Saketh Nandagiri
-
Patent number: 11954352
Abstract: A request to perform a first operation in a system that stores deduplicated data can be received. The system can include a data block stored at multiple logical addresses, each referencing the data block. A reference count can be associated with the data block and can denote a number of logical addresses referencing the data block. Processing can be performed to service the request and perform the first operation, wherein the processing can include: acquiring a non-exclusive lock for a page that includes the reference count of the data block; storing, in a metadata log while holding the non-exclusive lock on the page, an entry to decrement the reference count of the data block; and releasing the non-exclusive lock on the page.
Type: Grant
Filed: June 29, 2022
Date of Patent: April 9, 2024
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Uri Shabi
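The key idea above is to log the decrement rather than mutate the counter in place, which is what lets a non-exclusive lock suffice. A toy model of that flow (Python's standard library has no shared/reader lock, so a plain `Lock` stands in for the non-exclusive page lock; `RefcountPage` and its methods are invented names):

```python
import threading

class RefcountPage:
    """Toy model: acquire the page lock, append a 'decrement' entry
    to a metadata log instead of updating the reference count in
    place, then release the lock."""
    def __init__(self):
        self._lock = threading.Lock()  # stand-in for a non-exclusive lock
        self.metadata_log = []

    def log_decrement(self, block_id):
        with self._lock:
            self.metadata_log.append(("decref", block_id))
```

The logged entries would later be applied to the actual reference counts in a separate, batched step.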
-
Patent number: 11953972
Abstract: Selective privileged container augmentation is provided. A target group of edge devices is selected from a plurality of edge devices to run a plurality of child tasks comprising a pending task by mapping edge device tag attributes of the plurality of edge devices to child task tag attributes of the plurality of child tasks. A privileged container corresponding to the pending task is installed in each edge device of the target group to monitor execution of a child task by a given edge device of the target group. A privileged container installation tag that corresponds to the privileged container is added to an edge device tag attribute of each edge device of the target group having the privileged container installed. A child task of the plurality of child tasks comprising the pending task is sent to a selected edge device in the target group to run the child task.
Type: Grant
Filed: April 6, 2022
Date of Patent: April 9, 2024
Assignee: International Business Machines Corporation
Inventors: Yue Wang, Xin Peng Liu, Wei Wu, Liang Wang, Biao Chai
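The selection step maps device tag attributes to child task tag attributes. One plausible reading is a subset match: a device qualifies if its tags cover the task's required tags. This is an assumption for illustration; the patent does not define the matching rule, and `select_target_group` is an invented name.

```python
def select_target_group(device_tags, task_tags):
    """Return the devices whose tag attributes cover all of the
    child task's required tags (simple subset match).

    device_tags: dict mapping device name -> set of tag attributes
    task_tags: set of tags the child task requires
    """
    return [dev for dev, tags in device_tags.items() if task_tags <= tags]

devices = {"edge-1": {"gpu", "arm64"}, "edge-2": {"arm64"}}
print(select_target_group(devices, {"gpu"}))
```

In the patented flow, a privileged container is then installed on each selected device, and an installation tag is added back into that device's tag attributes.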
-
Patent number: 11947454
Abstract: Apparatuses, systems, and methods for controlling cache allocations in a configurable combined private and shared cache in a processor-based system. The processor-based system is configured to receive a cache allocation request to allocate a line in a shared cache structure, which may further include a client identification (ID). The cache allocation request and the client ID can be compared to a sub-non-uniform memory access (sub-NUMA) bit mask and a client allocation bit mask to generate a cache allocation vector. The sub-NUMA bit mask may have been programmed to indicate that processing cores associated with a sub-NUMA region are available, whereas processing cores associated with other sub-NUMA regions are not available, and the client allocation bit mask may have been programmed to indicate that processing cores are available.
Type: Grant
Filed: June 7, 2022
Date of Patent: April 2, 2024
Assignee: Ampere Computing LLC
Inventors: Richard James Shannon, Stephan Jean Jourdan, Matthew Robert Erler, Jared Eric Bendt
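Combining two availability bit masks into an allocation vector is naturally a bitwise AND: a core can receive the line only if both masks permit it. A minimal sketch of that reading (the exact combination logic in the patent may differ; the function name is invented):

```python
def cache_allocation_vector(sub_numa_mask, client_alloc_mask):
    """Combine the sub-NUMA availability mask with the per-client
    allocation mask: only cores whose bit is set in BOTH masks are
    eligible targets for the cache line."""
    return sub_numa_mask & client_alloc_mask

# Cores 4-7 are in the requesting sub-NUMA region; the client is
# allowed to allocate on alternating cores.
print(bin(cache_allocation_vector(0b1111_0000, 0b1010_1010)))
```

Each bit position in the result corresponds to one processing core's cache slice being a permitted allocation target.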
-
Patent number: 11949566
Abstract: Methods, systems, and computer readable media for testing a system under test (SUT). An example system includes a distributed processing node emulator configured for emulating a multi-processing node distributed computing system using a processing node communications model and generating intra-processing node communications and inter-processing node communications in the multi-processing node distributed computing system. At least a portion of the inter-processing node communications comprises one or more messages communicated with the SUT by way of a switching fabric. The system includes a test execution manager configured for managing the distributed processing node emulator to execute a pre-defined test case, monitoring the SUT, and outputting a test report based on monitoring the SUT during execution of the pre-defined test case.
Type: Grant
Filed: September 6, 2022
Date of Patent: April 2, 2024
Assignee: KEYSIGHT TECHNOLOGIES, INC.
Inventors: Winston Wencheng Liu, Dan Mihailescu, Matthew R. Bergeron
-
Patent number: 11946368
Abstract: A system to determine a contamination level of a formation fluid, the system including a formation tester tool to be positioned in a borehole, wherein the borehole has a mixture of the formation fluid and a drilling fluid and the formation tester tool includes a sensor to detect time series measurements from a plurality of sensor channels. The system includes a processor to dimensionally reduce the time series measurements to generate a set of reduced measurement scores in a multi-dimensional measurement space and determine an end member in the multi-dimensional measurement space based on the set of reduced measurement scores, wherein the end member comprises a position in the multi-dimensional measurement space that corresponds with a predetermined fluid concentration. The processor also determines the contamination level of the formation fluid at a time point based on the set of reduced measurement scores and the end member.
Type: Grant
Filed: December 16, 2022
Date of Patent: April 2, 2024
Assignee: Halliburton Energy Services, Inc.
Inventors: Bin Dai, Dingding Chen, Christopher Michael Jones
-
Patent number: 11941722
Abstract: A kernel comprising at least one dynamically configurable parameter is submitted by a processor. The kernel is to be executed at a later time. Data is received after the kernel has been submitted. The at least one dynamically configurable parameter of the kernel is updated based on the data. The kernel having the at least one updated dynamically configurable parameter is executed after the at least one dynamically configurable parameter has been updated.
Type: Grant
Filed: October 13, 2021
Date of Patent: March 26, 2024
Assignee: Mellanox Technologies, Ltd.
Inventors: Sayantan Sur, Stephen Anthony Bernard Jones, Shahaf Shuler
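The pattern described is late binding: submit now, patch parameters from newly arrived data, execute later. A minimal sketch of that pattern in plain Python (the actual invention targets device kernels; `DeferredKernel` and its methods are invented names for illustration only):

```python
class DeferredKernel:
    """Late-binding sketch: the kernel is submitted with mutable
    parameters that can still be updated, e.g. from newly received
    data, any time before it actually runs."""
    def __init__(self, fn, **params):
        self.fn = fn
        self.params = dict(params)  # dynamically configurable parameters

    def update(self, **params):
        # Data arriving after submission can reconfigure the kernel.
        self.params.update(params)

    def launch(self):
        # Executed later, with the most recently updated parameters.
        return self.fn(**self.params)

k = DeferredKernel(lambda x, scale: x * scale, x=3, scale=1)
k.update(scale=10)   # new data arrives after submission
print(k.launch())
```

The point of the pattern is that submission and configuration are decoupled, so the host need not resubmit the kernel when its inputs change.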
-
Patent number: 11941494
Abstract: Systems and methods for developing enterprise machine learning (ML) models within a notebook application are described. The system may include a notebook application, a packaging service, and an online ML platform. The method may include initiating a runtime environment within the notebook application, creating a plurality of files based on a notebook recipe template, generating a prototype model within the data science notebook application by accessing the plurality of files through the runtime environment, generating a production recipe including the runtime environment and the plurality of files, and publishing the production recipe to the online ML platform.
Type: Grant
Filed: May 13, 2019
Date of Patent: March 26, 2024
Assignee: ADOBE INC.
Inventors: Pari Sawant, Shankar Srinivasan, Nirmal Mani
-
Patent number: 11934873
Abstract: A first processing unit such as a graphics processing unit (GPU) includes pipelines that execute commands and a scheduler to schedule one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
Type: Grant
Filed: September 16, 2022
Date of Patent: March 19, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Rex Eldon McCrary