Context Switching Patents (Class 718/108)
  • Patent number: 11971830
    Abstract: An example method may include determining whether a preemption flag associated with a first input/output (I/O) queue handling thread is equal to a first value indicating that preemption of the first I/O queue handling thread is forthcoming, wherein the first I/O queue handling thread is executing on a first processor, the first I/O queue handling thread is associated with a first set of one or more queue identifiers, and each queue identifier identifies a queue being handled by the first I/O queue handling thread, and, responsive to determining that the preemption flag is equal to the first value, transferring the first set of one or more queue identifiers to a second I/O queue handling thread executing on a second processor. Transferring the first set of queue identifiers may include removing the one or more queue identifiers from the first set.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: April 30, 2024
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
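
The mechanism in patent 11971830 above lends itself to a small illustration: an I/O handling thread periodically checks a preemption flag and, when it is set, hands its queue identifiers off to a peer thread on another processor. The C11 sketch below is a minimal, hypothetical rendering; the flag value, queue-ID list, and thread layout are assumptions, not the patented implementation.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define MAX_QUEUES 8
#define PREEMPT_PENDING 1   /* hypothetical "first value": preemption is forthcoming */

/* Per-thread set of queue identifiers being handled. */
struct io_thread {
    atomic_int preempt_flag;           /* set by the scheduler when preemption is imminent */
    int        queue_ids[MAX_QUEUES];  /* queues currently owned by this thread */
    int        nqueues;
    pthread_mutex_t lock;
};

/* Move every queue identifier from src to dst and clear them from src. */
static void transfer_queues(struct io_thread *src, struct io_thread *dst)
{
    pthread_mutex_lock(&dst->lock);
    for (int i = 0; i < src->nqueues && dst->nqueues < MAX_QUEUES; i++)
        dst->queue_ids[dst->nqueues++] = src->queue_ids[i];
    pthread_mutex_unlock(&dst->lock);
    src->nqueues = 0;                  /* remove the identifiers from the first set */
}

/* Called from the first thread's polling loop. */
static void maybe_hand_off(struct io_thread *self, struct io_thread *peer)
{
    if (atomic_load(&self->preempt_flag) == PREEMPT_PENDING) {
        transfer_queues(self, peer);
        atomic_store(&self->preempt_flag, 0);
    }
}

int main(void)
{
    struct io_thread a = { .nqueues = 3, .queue_ids = {10, 11, 12} };
    struct io_thread b = { .nqueues = 0 };
    pthread_mutex_init(&a.lock, NULL);
    pthread_mutex_init(&b.lock, NULL);

    atomic_store(&a.preempt_flag, PREEMPT_PENDING);  /* scheduler signals preemption */
    maybe_hand_off(&a, &b);

    printf("thread A now owns %d queues, thread B owns %d\n", a.nqueues, b.nqueues);
    return 0;
}
```
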
  • Patent number: 11954492
    Abstract: Techniques are disclosed relating to channel stalls or deactivations based on the latency of prior operations. In some embodiments, a processor includes a plurality of channel pipelines for a plurality of channels and a plurality of execution pipelines shared by the channel pipelines and configured to perform different types of operations provided by the channel pipelines. First scheduler circuitry may assign threads to channels and second scheduler circuitry may assign an operation from a given channel to a given execution pipeline based on decode of an operation for that channel. Dependency circuitry may, for a first operation that depends on a prior operation that uses one of the execution pipelines, determine, based on status information for the prior operation from the one of the execution pipelines, whether to stall the first operation or to deactivate a thread that includes the first operation from its assigned channel.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: April 9, 2024
    Assignee: Apple Inc.
    Inventors: Benjiman L. Goodman, Dzung Q. Vu, Robert Kenney
  • Patent number: 11954621
    Abstract: Aspects of the present disclosure relate to personal protective equipment (PPE) management. A set of personal protective equipment (PPE) data describing use time limits of respective PPE articles of a set of PPE articles can be received. Use of a PPE article of the set of PPE articles can be monitored using one or more sensors. A determination can be made whether a PPE usage rule of the PPE article is satisfied based on the monitoring, where the PPE usage rule is based on at least a use time limit of the PPE article. A PPE recommendation action can be issued in response to determining that the PPE usage rule of the PPE article is satisfied.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: April 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Stan Kevin Daley, Rhonda L. Childress, Jeremy R. Fox, Michael Bender
  • Patent number: 11948001
    Abstract: Methods and apparatus consistent with the present disclosure may be used in environments where multiple different virtual sets of program instructions are executed by shared computing resources. These methods may allow actions associated with a first set of virtual software to be paused to allow a second set of virtual software to be executed by the shared computing resources. In certain instances, methods and apparatus consistent with the present disclosure may manage the operation of one or more sets of virtual software at a point in time. Apparatus consistent with the present disclosure may include a memory and one or more processors that execute instructions out of the memory. At certain points in time, a processor of a computing system may pause a virtual process while allowing instructions associated with another virtual process to be executed.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: April 2, 2024
    Assignee: SONICWALL INC.
    Inventors: Miao Mao, Wei Zhou, Zhong Chen
  • Patent number: 11907711
    Abstract: Aspects of the invention include systems and methods configured to efficiently evaluate the efforts of a code migration (e.g., porting task) between different platforms. A non-limiting example computer-implemented method includes receiving a function of a source platform. The function can include a plurality of fields. An initial vector is constructed for each of the plurality of fields. The initial vector encodes a value of the respective field according to an encoding rule. The initial vectors are merged into a single final vector and the final vector is classified into one of a plurality of system function families of the source platform. A vector of a target platform at a minimum distance to the final vector is identified and an assessment is provided that includes a difficulty in porting a project comprising the function between the source platform and the target platform based at least in part on the minimum distance.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Shuang Shuang Jia, Yi Chai, Xiao-Yu Li, Xin Zhao, Li Cao, Jiangang Deng, Hua Wei Fan, Zhou Wen Ya, Hong Wei Sun
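
As a rough illustration of the vector-based comparison described in patent 11907711 above, the following C sketch encodes function fields into small numeric vectors, merges them into one final vector, and picks the target-platform vector at minimum Euclidean distance. The encoding rule, vector sizes, and sample data are all hypothetical.

```c
#include <math.h>
#include <stdio.h>
#include <string.h>

#define FIELD_DIM 2
#define NFIELDS   3
#define FINAL_DIM (FIELD_DIM * NFIELDS)

/* Hypothetical encoding rule: map a field string to a tiny numeric vector. */
static void encode_field(const char *field, double out[FIELD_DIM])
{
    out[0] = (double)strlen(field);            /* crude "length" feature     */
    out[1] = (double)(unsigned char)field[0];  /* crude "first byte" feature */
}

/* Merge the per-field vectors into a single final vector by concatenation. */
static void merge(const double fields[NFIELDS][FIELD_DIM], double final_vec[FINAL_DIM])
{
    for (int f = 0; f < NFIELDS; f++)
        for (int d = 0; d < FIELD_DIM; d++)
            final_vec[f * FIELD_DIM + d] = fields[f][d];
}

static double distance(const double *a, const double *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += (a[i] - b[i]) * (a[i] - b[i]);
    return sqrt(s);
}

int main(void)
{
    const char *source_fields[NFIELDS] = { "open", "path", "mode" };
    double encoded[NFIELDS][FIELD_DIM], final_vec[FINAL_DIM];

    for (int f = 0; f < NFIELDS; f++)
        encode_field(source_fields[f], encoded[f]);
    merge(encoded, final_vec);

    /* Hypothetical target-platform function-family vectors. */
    double targets[2][FINAL_DIM] = {
        { 4, 'o', 4, 'p', 4, 'm' },   /* close match: low porting effort */
        { 9, 'x', 9, 'y', 9, 'z' },   /* distant: higher porting effort  */
    };

    int best = 0;
    double best_d = distance(final_vec, targets[0], FINAL_DIM);
    for (int t = 1; t < 2; t++) {
        double d = distance(final_vec, targets[t], FINAL_DIM);
        if (d < best_d) { best_d = d; best = t; }
    }
    printf("closest target vector: %d (distance %.2f)\n", best, best_d);
    return 0;
}
```
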
  • Patent number: 11907774
    Abstract: Systems and methods are disclosed for swapping or changing between stacks associated with respective applications when one application calls the other.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: February 20, 2024
    Assignee: Lutron Technology Company LLC
    Inventors: Nathan B. Elsishans, Francois Carouge
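
Patent 11907774 above describes swapping between stacks associated with respective applications when one application calls the other. A conventional way to sketch the general idea in user-space C is the POSIX ucontext API, where each "application" runs on its own stack and swapcontext switches between them; this is only a loose analogy for illustration, not the patented mechanism.

```c
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t app_a_ctx, app_b_ctx;

/* "Application B" runs on its own stack, then hands control back to A. */
static void app_b(void)
{
    printf("app B: running on its own stack\n");
    swapcontext(&app_b_ctx, &app_a_ctx);   /* switch back to A's stack */
}

int main(void)
{
    char *stack_b = malloc(STACK_SIZE);
    if (!stack_b) return 1;

    /* Prepare a context for application B with a dedicated stack. */
    getcontext(&app_b_ctx);
    app_b_ctx.uc_stack.ss_sp = stack_b;
    app_b_ctx.uc_stack.ss_size = STACK_SIZE;
    app_b_ctx.uc_link = &app_a_ctx;        /* resume A if B ever returns */
    makecontext(&app_b_ctx, app_b, 0);

    printf("app A: calling into app B, swapping stacks\n");
    swapcontext(&app_a_ctx, &app_b_ctx);   /* save A's stack state, run B */
    printf("app A: back on its original stack\n");

    free(stack_b);
    return 0;
}
```
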
  • Patent number: 11868306
    Abstract: A processing system includes a processing unit and a memory device. The memory device includes a processing-in-memory (PIM) module that performs processing operations on behalf of the processing unit. An instruction set architecture (ISA) of the PIM module has fewer instructions than an ISA of the processing unit. Instructions received from the processing unit are translated such that processing resources of the PIM module are virtualized. As a result, the PIM module concurrently performs processing operations for multiple threads or applications of the processing unit.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael L. Chu, Ashwin Aji, Muhammad Amber Hassaan
  • Patent number: 11847502
    Abstract: A device, that provides serverless computing, receives a request to execute multiple jobs, and determines criteria for each of the plurality of jobs, wherein the criteria for each of the multiple jobs includes at least one of job posting criteria, job validation criteria, job retry criteria, or a disaster recovery criteria. The device stores information associated with the multiple jobs in a repository, wherein the information associated with the multiple jobs includes the criteria for each of the multiple jobs. The device provides a particular job, of the multiple jobs, to a cluster computing framework for execution, determines modified criteria for the particular job, and provides the modified criteria for the particular job to the cluster computing framework. The device receives, from the cluster computing framework, information indicating that execution of the particular job is complete, and validates a success of completion of the execution of the particular job.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: December 19, 2023
    Assignee: Capital One Services, LLC
    Inventors: Ashwini Kumar, Lakshmi Narasimha Sarma Kattamuri
  • Patent number: 11789778
    Abstract: An FPGA cloud platform allocates and coordinates accelerator card resources according to delays between a host of a user and FPGA accelerator cards deployed at various network segments. Upon an FPGA usage request from the user, the platform allocates the FPGA accelerator card in the FPGA resource pool that has a minimum delay to the host. A cloud monitoring management platform obtains transmission delays to a virtual machine network according to the different geographic locations of the FPGA cards in the FPGA resource pool, and allocates a card having a minimum delay to each user. The cloud monitoring management platform also prevents unauthorized users from accessing acceleration resources in the resource pool. The invention protects FPGA accelerator cards that are not authorized for users, and ensures that the card allocated to a user has a minimum network delay, thereby optimizing acceleration performance and improving user experience.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: October 17, 2023
    Assignee: Inspur Suzhou Intelligent Technology Co., Ltd.
    Inventors: Zhixin Ren, Jiaheng Fan
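
The allocation policy in patent 11789778 above reduces to picking the unallocated, authorized accelerator card with the smallest measured delay to the requesting host. The C sketch below shows only that selection step; the delay values, authorization check, and card records are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

struct fpga_card {
    int    id;
    double delay_ms;     /* measured transmission delay to the user's host */
    bool   allocated;
    bool   authorized;   /* the user is permitted to use this card */
};

/* Return the index of the authorized, free card with minimum delay, or -1. */
static int allocate_min_delay(struct fpga_card *pool, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (pool[i].allocated || !pool[i].authorized)
            continue;
        if (best < 0 || pool[i].delay_ms < pool[best].delay_ms)
            best = i;
    }
    if (best >= 0)
        pool[best].allocated = true;
    return best;
}

int main(void)
{
    struct fpga_card pool[] = {
        { 1, 4.2, false, true  },
        { 2, 1.7, false, true  },
        { 3, 0.9, false, false },  /* lowest delay, but not authorized */
    };
    int idx = allocate_min_delay(pool, 3);
    if (idx >= 0)
        printf("allocated card %d (%.1f ms)\n", pool[idx].id, pool[idx].delay_ms);
    return 0;
}
```
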
  • Patent number: 11765257
    Abstract: An example network device includes a primary node and a standby node. The primary node includes one or more processors implemented in circuitry and configured to execute an operating system providing an application space and a kernel space, execute a replication application in the application space to receive a write function call including data to be written to a socket of the operating system and to send a representation of the data to a replication driver executed in the kernel space, execute the replication driver to send the representation of the data to a replication module executed in the kernel space, and execute the replication module to send the representation of the data to the standby node and, after receiving an acknowledgement from the standby node, to send the data to the socket.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: September 19, 2023
    Assignee: Juniper Networks, Inc.
    Inventors: Sameer Seth, Abhishek Sudhakar Mudumbi, Murali Mohan Krishnamurthy
  • Patent number: 11714759
    Abstract: Techniques are disclosed relating to private memory management using a mapping thread, which may be persistent. In some embodiments, a graphics processor is configured to generate a pool of private memory pages for a set of graphics work that includes multiple threads. The processor may maintain a translation table configured to map private memory addresses to virtual addresses based on identifiers of the threads. The processor may execute a mapping thread to receive a request to allocate a private memory page for a requesting thread, select a private memory page from the pool in response to the request, and map the selected page in the translation table for the requesting thread. The processor may then execute one or more instructions of the requesting thread to access a private memory space, wherein the execution includes translation of a private memory address to a virtual address based on the mapped page in the translation table.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: August 1, 2023
    Assignee: Apple Inc.
    Inventors: Benjiman L. Goodman, Terence M. Potter, Anjana Rajendran, Mark I. Luffel, William V. Miller
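
To make the page-pool and translation-table idea in patent 11714759 above concrete, the sketch below keeps a free pool of "private" pages and a per-thread table mapping a private page index to a virtual address. The table layout, address arithmetic, and pool size are illustrative assumptions, not the GPU's actual structures.

```c
#include <stdint.h>
#include <stdio.h>

#define POOL_PAGES       4
#define PAGE_SIZE        4096u
#define MAX_THREADS      8
#define PAGES_PER_THREAD 2
#define UNMAPPED         UINT64_MAX

static uint64_t page_pool[POOL_PAGES];   /* virtual addresses of free private pages */
static int      pool_top = POOL_PAGES;

/* translation[tid][p] = virtual address backing private page p of thread tid */
static uint64_t translation[MAX_THREADS][PAGES_PER_THREAD];

/* "Mapping thread" work: take a page from the pool and map it for a thread. */
static int map_private_page(int tid, int private_page)
{
    if (pool_top == 0)
        return -1;                         /* pool exhausted */
    translation[tid][private_page] = page_pool[--pool_top];
    return 0;
}

/* Translate a thread's private address to a virtual address via the table. */
static uint64_t translate(int tid, uint32_t private_addr)
{
    uint32_t page   = private_addr / PAGE_SIZE;
    uint32_t offset = private_addr % PAGE_SIZE;
    uint64_t base   = translation[tid][page];
    return (base == UNMAPPED) ? UNMAPPED : base + offset;
}

int main(void)
{
    /* Build a pool of hypothetical virtual pages and clear the table. */
    for (int i = 0; i < POOL_PAGES; i++)
        page_pool[i] = 0x100000000ull + (uint64_t)i * PAGE_SIZE;
    for (int t = 0; t < MAX_THREADS; t++)
        for (int p = 0; p < PAGES_PER_THREAD; p++)
            translation[t][p] = UNMAPPED;

    map_private_page(3, 0);                      /* thread 3 requests a page */
    printf("thread 3, private 0x10 -> virtual 0x%llx\n",
           (unsigned long long)translate(3, 0x10));
    return 0;
}
```
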
  • Patent number: 11704127
    Abstract: A data processing system includes processing circuitry for executing context-data-dependent program instructions which are decoded by decoder circuitry. Such context-data-dependent program instructions perform processing which is dependent upon currently existing context data. As an example, the context-data-dependent program instructions may be floating point instructions and the context data may be rounding mode information. The decoder circuitry supports a context save instruction which saves context data when it is marked as having been used and saves default context data when the current context data is marked as not having been used. The decoder circuitry further supports a context restore instruction which restores context data when the current context data is marked as having been used and permits the current context data to continue for future use when it is marked as currently unused.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: July 18, 2023
    Assignee: Arm Limited
    Inventors: Thomas Christopher Grocutt, François Christopher Jacques Botman, Bradley John Smith
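
Patent 11704127 above saves live context data only when it has been marked as used, and saves default context data otherwise. The C sketch below models that decision for a hypothetical floating-point rounding-mode register; the flag names and default value are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define DEFAULT_ROUNDING_MODE 0   /* hypothetical default context data */

struct fp_context {
    int  rounding_mode;   /* context data consumed by FP instructions */
    bool used;            /* set when an FP instruction actually touched it */
};

/* Context save: store the live data if used, otherwise store the default. */
static int context_save(const struct fp_context *live)
{
    return live->used ? live->rounding_mode : DEFAULT_ROUNDING_MODE;
}

/* Context restore: reload saved data if it was used, else keep current data. */
static void context_restore(struct fp_context *live, int saved, bool saved_used)
{
    if (saved_used)
        live->rounding_mode = saved;
    /* otherwise the current (unused) context simply continues for future use */
}

int main(void)
{
    struct fp_context ctx = { .rounding_mode = 2, .used = true };

    int saved = context_save(&ctx);              /* used -> real value saved */
    printf("saved rounding mode: %d\n", saved);

    ctx.used = false;
    printf("saved when unused: %d (default)\n", context_save(&ctx));

    context_restore(&ctx, saved, true);
    printf("restored rounding mode: %d\n", ctx.rounding_mode);
    return 0;
}
```
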
  • Patent number: 11677836
    Abstract: A server apparatus is communicably connected to multiple information processing devices and is configured to manage a session in which content data are transmitted and received between the multiple information processing devices. A communication management unit is configured to manage a connection to the session by each of the information processing devices. An information management unit is configured to receive a request from at least one of the information processing devices and to associate collateral information about an environment with the session. An information transmission unit is configured to transmit the collateral information to the at least one of the information processing devices.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: June 13, 2023
    Assignee: Ricoh Company, Ltd.
    Inventors: Kota Ogasawara, Tomoko Utsumi, Emi Machida
  • Patent number: 11669618
    Abstract: An information handling system may include a processor and a basic input/output system (BIOS) comprising a program of instructions comprising boot firmware configured to be the first code executed by the processor when the information handling system is booted or powered on, the BIOS configured to, during boot of the information handling system: (i) read a predefined measurement of an order of loading of BIOS drivers configured to execute during execution of the BIOS, such predefined measurement made during build of the BIOS; (ii) perform a runtime measurement of an order of loading of the BIOS drivers during actual runtime of the information handling system; (iii) compare the predefined measurement to the runtime measurement; and (iv) responsive to a mismatch between the predefined measurement and the runtime measurement, respond with a remedial action.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: June 6, 2023
    Assignee: Dell Products L.P.
    Inventors: Balasingh P. Samuel, Richard M. Tonry, Jonathan D. Samuel
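
The check in patent 11669618 above amounts to measuring the order in which BIOS drivers load and comparing it against a measurement taken at build time. The sketch below folds a load-order list into a simple FNV-1a hash and compares it with the expected value; the hash choice and driver names are illustrative, not the BIOS's actual measurement scheme.

```c
#include <stdint.h>
#include <stdio.h>

/* Fold a sequence of driver names into one order-sensitive measurement. */
static uint64_t measure_load_order(const char *drivers[], int n)
{
    uint64_t h = 1469598103934665603ull;          /* FNV-1a offset basis */
    for (int i = 0; i < n; i++) {
        for (const char *p = drivers[i]; *p; p++) {
            h ^= (uint8_t)*p;
            h *= 1099511628211ull;                /* FNV-1a prime */
        }
        h ^= (uint8_t)i;                          /* mix in the position explicitly */
        h *= 1099511628211ull;
    }
    return h;
}

static void remedial_action(void)
{
    printf("mismatch: taking remedial action\n");
}

int main(void)
{
    const char *build_time[]    = { "PlatformInit", "UsbBus", "DiskIo" };
    const char *runtime_order[] = { "PlatformInit", "DiskIo", "UsbBus" };  /* reordered */

    uint64_t predefined = measure_load_order(build_time, 3);
    uint64_t runtime    = measure_load_order(runtime_order, 3);

    if (predefined != runtime)
        remedial_action();
    else
        printf("driver load order matches the build-time measurement\n");
    return 0;
}
```
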
  • Patent number: 11663021
    Abstract: A basic input/output system provides an interface for a core aggregation layout that identifies a grouping of processor cores into core aggregations, wherein each of the core aggregations is associated with a maximum allowable C-state. A processor may monitor an information handling system during operation of an application to gather data associated with latency sensitivity of the application, update the core aggregation layout based on the data gathered during the operation of the application, and pin a thread for execution to one of the processor cores based on the latency sensitivity of the application and the maximum allowable C-state.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: May 30, 2023
    Assignee: Dell Products L.P.
    Inventors: John Christopher Beckett, Mukund P. Khatri
  • Patent number: 11630790
    Abstract: An integrated circuit is provided, which includes: a processor, a general interrupt controller, and a bus master. The bus master includes: a bus-control circuit and a polling circuit, which is configured to detect whether an interrupt signal of a sensing device is asserted. In response to the polling circuit detecting that the interrupt signal is asserted, the bus-control circuit fetches each task stored in a task queue of a memory in sequence, and performs one or more data-transfer operations corresponding to each task to obtain sensor data from the sensing device. In response to a task-completion signal of the tasks generated by the bus-control circuit, the general interrupt controller generates an interrupt request signal. In response to the interrupt request signal, the processor reports a sensor event using the sensor data obtained by the data-transfer operations corresponding to each task.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: April 18, 2023
    Assignee: MEDIATEK SINGAPORE PTE. LTD.
    Inventors: Hui Zhang, Peng Zhou, Shi Ma, Fei Yin, Wulin Li
  • Patent number: 11615023
    Abstract: A processing system has at least one internal processing unit and associated memory. The memory is accessible by at least two other independent processing units, and the memory of the at least one internal processing unit includes a data structure shared by the at least two other independent processing units that are allowed to perform direct memory writes into the shared data structure. A dedicated set of one or more bits in the shared data structure is allocated to each one of the at least two other independent processing units, each bit or each group of bits in the shared data structure indicates a unique combination of independent processing unit and application handler for handling an application in relation to the corresponding independent processing unit. Preparation and/or activation of the application handler indicated by the set bit or the set group of bits is initiated.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: March 28, 2023
    Assignee: Telefonaktiebolaget LM Ericsson (Publ)
    Inventors: Per Holmberg, Leif Johansson
  • Patent number: 11589931
    Abstract: A robotic surgical system and method are disclosed for transitioning control to a secondary robotic arm controller. In one embodiment, a robotic surgical system comprises a user console comprising a display device and a user input device; a robotic arm configured to be coupled to an operating table; a primary robotic arm controller configured to move the robotic arm in response to a signal received from the user input device at the user console; and a secondary robotic arm controller configured to move the robotic arm in response to a signal received from a user input device remote from the user console. Control over movement of the robotic arm is transitioned from the primary robotic arm controller to the secondary robotic arm controller in response to a failure in the primary robotic arm controller. Other embodiments are provided.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: February 28, 2023
    Assignee: Verb Surgical Inc.
    Inventor: Jignesh Desai
  • Patent number: 11556374
    Abstract: Compiler-optimized context switching may include receiving an instruction indicating a preferred preemption point comprising an instruction address; storing the preferred preemption point in a data structure; determining, based on the data structure, that the preferred preemption point has been reached by a first thread; determining that preemption of the first thread for a second thread has been requested; and performing a context switch to the second thread.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: January 17, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Kelvin D. Nilsen
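
The flow in patent 11556374 above can be paraphrased as: record compiler-chosen preemption points, and only switch threads when execution reaches one of them while a preemption request is pending. The C sketch below models that check with ordinary data structures; the addresses, the request flag, and the switch routine are hypothetical stand-ins.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_POINTS 16

/* Data structure holding compiler-provided preferred preemption points. */
static uintptr_t preferred_points[MAX_POINTS];
static int       npoints;

static void register_preemption_point(uintptr_t addr)
{
    if (npoints < MAX_POINTS)
        preferred_points[npoints++] = addr;
}

static bool at_preferred_point(uintptr_t pc)
{
    for (int i = 0; i < npoints; i++)
        if (preferred_points[i] == pc)
            return true;
    return false;
}

static void context_switch_to(int thread_id)
{
    printf("context switch to thread %d at a preferred point\n", thread_id);
}

int main(void)
{
    bool preemption_requested = true;        /* a second thread wants the CPU */

    register_preemption_point(0x4010a0);     /* hypothetical instruction addresses */
    register_preemption_point(0x4011f4);

    uintptr_t pc = 0x4011f4;                 /* first thread reaches this address */
    if (at_preferred_point(pc) && preemption_requested)
        context_switch_to(2);
    else
        printf("keep running: not at a preferred preemption point\n");
    return 0;
}
```
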
  • Patent number: 11467836
    Abstract: An acceleration unit including a primary core and a secondary core is provided. The primary core includes a first on-chip memory, a primary core sequencer adapted to decode a received first cross-core copy instruction, and a primary core memory copy engine adapted to acquire a first operand from a first address in the first on-chip memory and copy the acquired first operand to a second address in a second on-chip memory of the secondary core. Further, the secondary core includes a second on-chip memory, a secondary core sequencer adapted to decode a received second cross-core copy instruction, and a secondary core memory copy engine adapted to acquire the first operand from the second address in the second on-chip memory and copy the acquired first operand back to the first address in the first on-chip memory.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: October 11, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Jun He, Li Yin, Xuejun Wu
  • Patent number: 11468001
    Abstract: A processing system includes a processing unit and a memory device. The memory device includes a processing-in-memory (PIM) module that performs processing operations on behalf of the processing unit. An instruction set architecture (ISA) of the PIM module has fewer instructions than an ISA of the processing unit. Instructions received from the processing unit are translated such that processing resources of the PIM module are virtualized. As a result, the PIM module concurrently performs processing operations for multiple threads or applications of the processing unit.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: October 11, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Muhammad Amber Hassaan, Michael L. Chu, Ashwin Aji
  • Patent number: 11449338
    Abstract: A multi-tile processing system has a plurality of tiles each having an execution unit, and an interconnect operable to conduct communications between a group of the tiles according to a bulk synchronous parallel scheme. The execution unit is operable to execute instructions of an instruction set which has a synchronisation instruction for execution by each tile upon completion of its compute phase. The execution of the synchronisation instruction depends on the state of an exception enable flag. In one state, the synchronisation instruction causes the execution unit to send the synchronisation request to hardware logic in the interconnect. In another state of the exception enable flag the synchronisation instruction does not send the synchronisation request, but sets an exception events status to permit interrogation access to the tile. A corresponding method of controlling the debug states of the processing system is provided.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: September 20, 2022
    Assignee: Graphcore Limited
    Inventors: Alan Graham Alexander, Matthew David Fyles
  • Patent number: 11443034
    Abstract: A trust zone-based operating system including a secure world subsystem that runs a trusted execution environment TEE, a TEE monitoring area, and a security switching apparatus is provided. When receiving a sensitive operation request sent by a trusted application TA in the TEE, the TEE writes a sensitive instruction identifier and an operation parameter of the sensitive operation request into a general-purpose register, and sends a switching request to the security switching apparatus. The security switching apparatus receives the switching request, and switches a running environment of the secure world subsystem from the TEE to the TEE monitoring area. The TEE monitoring area stores a sensitive instruction in the operating system.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: September 13, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Wenhao Li, Yubin Xia, Haibo Chen
  • Patent number: 11422944
    Abstract: Examples herein relate to a system that includes a first memory device; a second memory device; and an input-output memory management unit (IOMMU). The IOMMU can search for a virtual-to-physical address translation entry in a first table for a received virtual address and based on a virtual-to-physical address translation entry for the received virtual address not being present in the first table, search a second table for a virtual-to-physical address translation entry for the received virtual address, wherein the first table is stored in the first memory device and the second table is stored in the second memory device. In some examples, based on a virtual-to-physical address translation entry for the received virtual address not being present in the second table, a page table walk is performed to determine a virtual-to-physical address translation for the received virtual address. In some examples, the first table includes an IO translation lookaside buffer (IOTLB).
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: August 23, 2022
    Assignee: Intel Corporation
    Inventors: Kaijie Guo, Weigang Li, Junyuan Wang, Liang Ma, Maksim Lukoshkov, Yao Huo
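
Patent 11422944 above describes a two-stage lookup: try a first translation table (an IOTLB-like structure in one memory device), fall back to a second table in another device, and only then perform a page-table walk. The C sketch below mimics that fall-through with flat arrays; the table sizes, the walk, and the addresses are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct entry { uint64_t va, pa; bool valid; };

#define T1_SIZE 2   /* small, fast table (e.g., in the first memory device)  */
#define T2_SIZE 4   /* larger, slower table (e.g., in the second device)     */

static struct entry table1[T1_SIZE];
static struct entry table2[T2_SIZE];

static bool lookup(const struct entry *t, int n, uint64_t va, uint64_t *pa)
{
    for (int i = 0; i < n; i++)
        if (t[i].valid && t[i].va == va) { *pa = t[i].pa; return true; }
    return false;
}

/* Stand-in for a full page-table walk. */
static uint64_t page_walk(uint64_t va)
{
    return va ^ 0xfff000;   /* fabricate a physical address for the demo */
}

static uint64_t translate(uint64_t va)
{
    uint64_t pa;
    if (lookup(table1, T1_SIZE, va, &pa)) return pa;     /* hit in first table  */
    if (lookup(table2, T2_SIZE, va, &pa)) return pa;     /* hit in second table */
    return page_walk(va);                                /* miss in both: walk  */
}

int main(void)
{
    table1[0] = (struct entry){ 0x1000, 0xa000, true };
    table2[0] = (struct entry){ 0x2000, 0xb000, true };

    printf("0x1000 -> 0x%llx\n", (unsigned long long)translate(0x1000));
    printf("0x2000 -> 0x%llx\n", (unsigned long long)translate(0x2000));
    printf("0x3000 -> 0x%llx (via page walk)\n", (unsigned long long)translate(0x3000));
    return 0;
}
```
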
  • Patent number: 11418597
    Abstract: A system, method, and non-transitory computer-readable medium allow for value-anticipating task offloading. The system may include one or more processors and a memory having a task manager module. The task manager module causes the one or more processors to receive a task identifier of a computational task for an application being utilized by a vehicle processor of a vehicle and a state vector describing at least one state of the vehicle and determine, using a utility function, a utility score of the computational task using the task identifier and the state vector, which represents an improvement in a functioning of the application if the computational task is offloaded to an external system for processing. Based on the utility score, the one or more processors may offload the computational task to the external system, process the computational task by the vehicle processor of the vehicle, or discard the computational task.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: August 16, 2022
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Takamasa Higuchi, Seyhan Ucar, Chang-Heng Wang, Onur Altintas
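
The decision in patent 11418597 above, choosing between offloading, local processing, and discarding based on a utility score, can be sketched with a toy utility function as below; the weights, thresholds, and state-vector fields are invented for illustration.

```c
#include <stdio.h>

struct state_vector {
    double network_quality;   /* 0..1, quality of the link to the external system */
    double cpu_load;          /* 0..1, load on the vehicle processor */
    double task_value;        /* how much the application gains from the result */
};

enum decision { OFFLOAD, PROCESS_LOCALLY, DISCARD };

/* Hypothetical utility function: expected improvement if offloaded. */
static double utility(int task_id, const struct state_vector *s)
{
    (void)task_id;   /* a real function would also weight by task type */
    return s->task_value * (0.6 * s->network_quality + 0.4 * s->cpu_load);
}

static enum decision decide(double score)
{
    if (score > 0.5) return OFFLOAD;          /* large gain from external processing */
    if (score > 0.1) return PROCESS_LOCALLY;  /* still worth computing in-vehicle    */
    return DISCARD;                           /* result not worth the cost           */
}

int main(void)
{
    struct state_vector s = { .network_quality = 0.9, .cpu_load = 0.7, .task_value = 1.0 };
    double score = utility(42, &s);
    static const char *names[] = { "offload", "process locally", "discard" };
    printf("utility %.2f -> %s\n", score, names[decide(score)]);
    return 0;
}
```
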
  • Patent number: 11354769
    Abstract: One embodiment provides for a parallel processor comprising a processing array within the parallel processor, the processing array including multiple compute blocks, each compute block including multiple processing clusters configured for parallel operation, wherein each of the multiple compute blocks is independently preemptable. In one embodiment a preemption hint can be generated for source code during compilation to enable a compute unit to determine an efficient point for preemption.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: June 7, 2022
    Assignee: Intel Corporation
    Inventors: Altug Koker, Ingo Wald, David Puffer, Subramaniam M. Maiyuran, Prasoonkumar Surti, Balaji Vembu, Guei-Yuan Lueh, Murali Ramadoss, Abhishek R. Appu, Joydeep Ray
  • Patent number: 11340908
    Abstract: A pipelined computer processor is presented that reduces data hazards such that high processor utilization is attained. The processor restructures a set of instructions to operate concurrently on multiple pieces of data in multiple passes. One subset of instructions operates on one piece of data while different subsets of instructions operate concurrently on different pieces of data. A validity pipeline tracks the priming and draining of the pipeline processor to ensure that only valid data is written to registers or memory. Pass-dependent addressing is provided to correctly address registers and memory for different pieces of data.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: May 24, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Neal Andrew Crook, Alan T. Wootton, James Peterson
  • Patent number: 11322171
    Abstract: A system and method for processing a plurality of channels, for example audio channels, in parallel is provided. For example, a plurality of telephony channels are processed in order to detect and respond to call progress tones. The channels may be processed according to a common transform algorithm. Advantageously, a massively parallel architecture is employed, in which operations on many channels are synchronized, to achieve a high efficiency parallel processing environment. The parallel processor may be situated on a data bus, separate from a main general-purpose processor, or integrated with the processor in a common board or integrated device. All, or a portion of a speech processing algorithm may also be performed in a massively parallel manner.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: May 3, 2022
    Inventor: Wai Wu
  • Patent number: 11281493
    Abstract: A task manager tightly coupled to a programmable real-time unit (PRU), the task manager configured to: detect a first event; assert a request to the PRU during a first clock cycle that the PRU perform a second task; receive an acknowledgement of the request from the PRU during the first clock cycle; save, during the first clock cycle, a first address in a memory of the PRU, the first address corresponding to a first task of the PRU, the first address present in a current program counter of the PRU; load a second address of the memory into a second program counter during the first clock cycle, the second address corresponding to the second task; and load, during a second clock cycle, the second address into the current program counter, wherein the second clock cycle immediately follows the first clock cycle.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: March 22, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Thomas Anton Leyrer, William Cronin Wallace
  • Patent number: 11275661
    Abstract: A method of generating instructions to be executed by a plurality of execution engines that share a resource is provided. The method comprises, in a first generation step: reading a first engine logical timestamp vector of a first execution engine of the execution engines, the logical timestamp vector representing a history of access operations for the resource; determining whether the first engine logical timestamp vector includes a most-up-to-date logical timestamp of the resource in the first generation step; based on the first engine logical timestamp vector including the most-up-to-date logical timestamp of the resource in the first generation step, generating an access instruction to be executed by the first execution engine to access the resource; and scheduling the first execution engine to execute the access instruction.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: March 15, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Dana Michelle Vantrease, Ron Diamant
  • Patent number: 11194503
    Abstract: A storage device includes: a storage controller to receive one or more notifications corresponding to host data transferred from a host device to the storage device over a storage interface; and a response circuit connected to the storage controller, the response circuit to trigger a response to the host device, and including: a first counter to track the one or more notifications, the one or more notifications corresponding to an entirety of the host data such that each of the notifications corresponds to a portion of the host data; a second counter to track one or more acknowledgements received from the storage controller, the one or more acknowledgments corresponding to the one or more notifications such that each of the acknowledgments corresponds to a notification; and a response trigger to select one of the first counter and the second counter to trigger the response to the host device.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: December 7, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chase Pasquale, Richard N. Deglin, Vishal Jain, Jagannath Vishnuteja Desai
  • Patent number: 11182215
    Abstract: A method for graphics processing, wherein a graphics processing unit (GPU) resource is allocated among applications, such that each application is allocated a set of time slices. Commands of draw calls are loaded to rendering command buffers in order to render an image frame for a first application. The commands are processed by the GPU resource within a first time slice allocated to the first application. The method including determining at least one command has not been executed at an end of the first time slice. The method including halting execution of commands, wherein remaining one or more commands are not processed in the first time slice. A GPU configuration is preserved for the commands after processing a last executed command, the GPU configuration used when processing in a second time slice the remaining commands.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: November 23, 2021
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Mark E. Cerny
  • Patent number: 11163597
    Abstract: A computing fabric includes one or more host computing platforms and a plurality of partitions instantiated across the one or more host computing platforms, each of the plurality of partitions allocated computing resources of the one or more host computing platforms. The computing fabric further includes a hypervisor installed on the one or more host computing platforms and managing interactions among the plurality of partitions. The plurality of partitions includes a persistent partition to which one or more storage devices are allocated, the persistent partition executing software loaded from a trusted storage location and executing from a non-volatile memory.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: November 2, 2021
    Assignee: Unisys Corporation
    Inventors: Robert J Sliwa, Bryan E Thompson, James R Hunter, John A Landis, David A Kershner
  • Patent number: 11093270
    Abstract: A method and apparatus for configuring an overlay network are provided. In the method and apparatus, an application source comprising an executable portion is obtained. A computer system instance is caused to execute at least some of the executable portion, and a snapshot of the computer system instance after partial but incomplete execution of the executable portion is obtained such that the snapshot is usable to instantiate another computer system instance to continue execution of the executable portion from a point in execution at which the snapshot was obtained.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: August 17, 2021
    Assignee: Amazon Technologies, Inc.
    Inventor: Nicholas Alexander Allen
  • Patent number: 11080095
    Abstract: Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: August 3, 2021
    Assignee: Apple Inc.
    Inventors: Jeremy C. Andrus, John G. Dorsey, James M. Magee, Daniel A. Chimene, Cyril de la Cropte de Chanterac, Bryan R. Hinch, Aditya Venkataraman, Andrei Dorofeev, Nigel R. Gamble, Russell A. Blaine, Constantin Pistol
  • Patent number: 11061503
    Abstract: In one embodiment, an apparatus and associated method are provided, comprising: at a device having a display and a touch-sensitive surface: displaying a first user interface of a first application on the display; while displaying the first user interface of the first application on the display, detecting an input by a first contact; and in response to detecting the input by the first contact: in accordance with a determination that the input meets first one or more criteria, wherein the first one or more criteria require that the first movement meets a first directional condition in order for the first one or more criteria to be met, displaying a second user interface; and in accordance with a determination that the input meets second one or more criteria, wherein the second one or more criteria require that the first movement meets a second directional condition that is distinct from the first directional condition in order for the second one or more criteria to be met, displaying a home screen user interface.
    Type: Grant
    Filed: December 21, 2019
    Date of Patent: July 13, 2021
    Assignee: P4TENTS1, LLC
    Inventor: Michael S Smith
  • Patent number: 10963310
    Abstract: Computer program products and a system for managing processing resource usage at a workload manager and an application are described. The workload manager and application may utilize safe stop points to reduce processing resource usage during high cost processing periods while preventing contention in the processing resources. The workload manager and application may also implement lazy resumes of processing resource utilization at the application to allow for continued reduced usage of the processing resources.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 30, 2021
    Assignee: International Business Machines Corporation
    Inventors: Nigel G. Slinger, Matt Helliwell, Wen Jie Zhu
  • Patent number: 10949236
    Abstract: A method and apparatus for configuring an overlay network are provided. In the method and apparatus, an application source comprising an executable portion is obtained. A computer system instance is caused to execute at least some of the executable portion, and a snapshot of the computer system instance after partial but incomplete execution of the executable portion is obtained such that the snapshot is usable to instantiate another computer system instance to continue execution of the executable portion from a point in execution at which the snapshot was obtained.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: March 16, 2021
    Assignee: Amazon Technologies, Inc.
    Inventor: Nicholas Alexander Allen
  • Patent number: 10938831
    Abstract: An information handling system includes a service master and a command router. The service master is configured to host one or more service threads running under different access levels. The command router is configured to receive a request for a service from an application, the request including an access control token, determine the access control token matches the service and an access level corresponding to the access control token, and route the request to a service thread matching the access level of the access control token.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: March 2, 2021
    Assignee: Dell Products, L.P.
    Inventors: Abu Shaher Sanaullah, Danilo O. Tan, Srikanth Kondapi
  • Patent number: 10922135
    Abstract: A method is disclosed for dynamic multitasking in a storage system, the storage system including a first storage server configured to execute a first I/O service process and one or more second storage servers, the method comprising: detecting a first event for triggering a context switch; transmitting to each of the second storage servers an instruction to stop transmitting internal I/O requests to the first I/O service process, the instruction including an identifier corresponding to the first I/O service process, the identifier being arranged to distinguish the first I/O service process from other first I/O service processes that are executed by the first storage server concurrently with the first I/O service process; deactivating the first I/O service process by pausing a frontend of the first I/O service process, and pausing one or more I/O providers of the first I/O service process; and executing a first context switch between the first I/O service process and a second process.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: February 16, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Lior Kamran, Amitai Alkalay, Zvi Schneider
  • Patent number: 10901794
    Abstract: Provided is a control unit of an automation system for determining the execution time of a user program, including a first time-determining unit, wherein the first time-determining unit determines the execution time for the control unit and/or another control unit in a first operating mode, wherein at least one boundary condition is taken into account in the determination of the execution time, and wherein statistical data about the running time of commands of the user program of the control unit or of a linear representation of the real time of the control unit are taken into account in the determination of the execution time. A corresponding method and a computer program product are also provided.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: January 26, 2021
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Rene Ermler, Cornelia Krebs, Jörg Neidig, Gustavo Arturo Quiros Araya
  • Patent number: 10871982
    Abstract: Systems and methods for scheduling virtual processors via memory monitoring are disclosed. An example method comprises: detecting, by a hypervisor of a host computer system, an event associated with a virtual processor running on a physical processor of the host computer system; testing a polling flag residing in a memory accessible by guest software running on the virtual processor, wherein a first state of the polling flag indicates that the virtual processor is monitoring modifications to a memory region comprising a waiting task flag, and wherein the waiting task flag indicates whether a task has been queued for the virtual processor; setting the polling flag to a second state, wherein testing the polling flag and setting the polling flag to the second state is performed in an atomic operation; and processing the detected event.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: December 22, 2020
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
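
The key step in patent 10871982 above is testing the polling flag and setting it to a second state in one atomic operation, so the hypervisor and the polling guest cannot race. In C11 that is naturally expressed with atomic_exchange, as in the hedged sketch below; the flag states and the wake-up path are assumptions for illustration.

```c
#include <stdatomic.h>
#include <stdio.h>

#define POLLING      1   /* vCPU is monitoring the waiting-task flag via memory */
#define NOT_POLLING  0

static atomic_int polling_flag = NOT_POLLING;  /* shared with guest software */
static atomic_int waiting_task = 0;            /* a task has been queued     */

/* Hypervisor side: handle an event targeted at the virtual processor. */
static void process_event(void)
{
    /* Queue the task, then atomically test-and-clear the polling flag. */
    atomic_store(&waiting_task, 1);
    int prev = atomic_exchange(&polling_flag, NOT_POLLING);

    if (prev == POLLING)
        printf("vCPU was polling: the memory write alone will wake it\n");
    else
        printf("vCPU was not polling: kick/schedule it explicitly\n");
}

int main(void)
{
    atomic_store(&polling_flag, POLLING);   /* guest enters its polling loop */
    process_event();

    process_event();                        /* second event: flag already clear */
    return 0;
}
```
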
  • Patent number: 10853123
    Abstract: The access control circuit writes to the first storage unit context information transmitted in one cycle from the CPU through the first bus, a context number identifying the context information, and a link context number identifying the context information transmitted from the CPU prior to the interrupt, when the request for saving (evacuating) the task context information is received due to the interrupt. After writing to the first storage unit, the access control circuit transfers the data including the context information and the link context number stored in the first storage unit to the second storage unit in a plurality of cycles through the internal bus (second bus), in association with the context number stored in the first storage unit.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: December 1, 2020
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Tatsuhiro Tachibana
  • Patent number: 10827304
    Abstract: A method for safely and efficiently requesting transportation services through the use of mobile communications devices capable of geographic location is described. Individual and package transportation may be provided. New customers may be efficiently serviced, and the requester and transportation provider locations may be viewed in real time on the mobile devices.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: November 3, 2020
    Assignee: LYFT, INC.
    Inventors: Olaf Martin Lubeck, John H. Hall
  • Patent number: 10795872
    Abstract: A method comprising: processing an update to a search tree and updating statistics, the search tree storing information about one or more objects indexed by corresponding object keys; determining to rebuild a first Bloom filter based on the statistics, the first Bloom filter associated with the search tree; generating a second Bloom filter associated with the search tree; populating the second Bloom filter as part of a tracing garbage collection process; and replacing the first Bloom filter with the second Bloom filter.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: October 6, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Mikhail Malygin, Ivan Tchoub, Alexander Fedorov, Nikita Gutsalov
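
Since patent 10795872 above revolves around rebuilding a Bloom filter for a search tree, a compact C Bloom filter helps fix the idea: a bit array plus k hash probes per key, with a second filter repopulated from the live keys and then swapped in for the first. The hash function and sizes below are arbitrary choices for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FILTER_BITS 256
#define NUM_HASHES  3

struct bloom { uint8_t bits[FILTER_BITS / 8]; };

/* Simple seeded string hash (FNV-1a variant) used for the k probes. */
static uint32_t hash(const char *key, uint32_t seed)
{
    uint32_t h = 2166136261u ^ seed;
    for (const char *p = key; *p; p++) { h ^= (uint8_t)*p; h *= 16777619u; }
    return h % FILTER_BITS;
}

static void bloom_add(struct bloom *b, const char *key)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash(key, i);
        b->bits[bit / 8] |= (uint8_t)(1u << (bit % 8));
    }
}

static bool bloom_maybe_contains(const struct bloom *b, const char *key)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash(key, i);
        if (!(b->bits[bit / 8] & (1u << (bit % 8))))
            return false;            /* definitely absent */
    }
    return true;                     /* possibly present */
}

int main(void)
{
    struct bloom first = {0}, second = {0};
    const char *live_keys[] = { "obj-1", "obj-7" };   /* keys still referenced */

    bloom_add(&first, "obj-1");
    bloom_add(&first, "obj-7");
    bloom_add(&first, "obj-9");      /* later deleted: the stale filter still has it */

    /* "Rebuild": repopulate the second filter from live keys (e.g., while the
     * tracing garbage collector walks the tree), then replace the first filter. */
    for (int i = 0; i < 2; i++)
        bloom_add(&second, live_keys[i]);
    first = second;

    printf("obj-9 maybe present after rebuild? %s\n",
           bloom_maybe_contains(&first, "obj-9") ? "yes" : "no");
    return 0;
}
```
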
  • Patent number: 10733031
    Abstract: An information processing apparatus is provided that includes a first operating system incapable of adding or deleting an application and a second operating system capable of adding and deleting an application. The apparatus determines whether a received command is a command directed to the first operating system or a command directed to the second operating system by referencing a table in which the command and the operating system for processing the command are associated with each other; retains the table; controls a memory so that the first operating system or the second operating system can start processing based on a result of the determining; and transfers the received command to the first operating system or the second operating system based on the result of the determining.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: August 4, 2020
    Assignee: Sony Corporation
    Inventor: Yasuo Takeuchi
  • Patent number: 10514958
    Abstract: A device, that provides serverless computing, receives a request to execute multiple jobs, and determines criteria for each of the plurality of jobs, wherein the criteria for each of the multiple jobs includes at least one of job posting criteria, job validation criteria, job retry criteria, or a disaster recovery criteria. The device stores information associated with the multiple jobs in a repository, wherein the information associated with the multiple jobs includes the criteria for each of the multiple jobs. The device provides a particular job, of the multiple jobs, to a cluster computing framework for execution, determines modified criteria for the particular job, and provides the modified criteria for the particular job to the cluster computing framework. The device receives, from the cluster computing framework, information indicating that execution of the particular job is complete, and validates a success of completion of the execution of the particular job.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: December 24, 2019
    Assignee: Capital One Services, LLC
    Inventors: Ashwini Kumar, Lakshmi Narasimha Sarma Kattamuri
  • Patent number: 10489296
    Abstract: Embodiments herein relate to managing a cache by exploiting a cache line hierarchy. Managing the cache includes reading cache references of a first task from a cache reference save area of a first task data structure in response to a context switch. Further, managing the cache includes prefetching and restoring cache lines of the first task to the cache based on the cache references. Note that the cache lines can be predetermined from a plurality of cache lines associated with the first task during an extraction operation with respect to the first task and the cache line hierarchy.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: November 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wolfgang Gellerich, Peter M. Held, Gerrit Koch, Christoph Raisch, Martin Schwidefsky
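
The restore path in patent 10489296 above, reading saved cache references for the incoming task and prefetching those lines before it runs, can be approximated in C with __builtin_prefetch (a GCC/Clang builtin); the save-area layout and the line selection are hypothetical.

```c
#include <stdio.h>

#define MAX_REFS 4

/* Per-task data structure with a cache-reference save area. */
struct task {
    int         id;
    const void *cache_refs[MAX_REFS];   /* addresses of lines worth restoring */
    int         nrefs;
};

/* On a context switch to `next`, prefetch its previously recorded lines. */
static void restore_cache_lines(const struct task *next)
{
    for (int i = 0; i < next->nrefs; i++)
        __builtin_prefetch(next->cache_refs[i], 0 /* read */, 3 /* high locality */);
}

int main(void)
{
    static double hot_data[1024];               /* stands in for the task's working set */

    struct task t = { .id = 1, .nrefs = 2 };
    t.cache_refs[0] = &hot_data[0];
    t.cache_refs[1] = &hot_data[512];

    restore_cache_lines(&t);                    /* warm the cache before dispatch */
    printf("prefetched %d saved cache references for task %d\n", t.nrefs, t.id);
    return 0;
}
```
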
  • Patent number: 10437599
    Abstract: A processor that reduces pipeline stall including a front end, a load queue, a scheduler, and a load buffer. The front end issues instructions while a first full indication is not provided, but otherwise stalls issuing instructions. The load queue stores issued load instruction entries including information needed to execute the issued load instruction. The load queue provides a second full indication when full. The scheduler dispatches issued instructions for execution except for stalled load instructions, such as those that have not yet been stored in the load queue. The load buffer transfers issued load instructions to the load queue when the load queue is not full. When the load queue is full, the load buffer temporarily buffers issued load instructions until the load queue is no longer full. The load buffer allows more accurate load queue full determination, and allows processing to continue even when the load queue is full.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: October 8, 2019
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventor: Qianli Di
  • Patent number: 10402221
    Abstract: A device, such as a constrained device that includes a processing device and memory, schedules user-defined independently executable functions to execute from a single stack common to all user-defined independently executable functions according to availability and priority of the user-defined independently executable functions relative to other user-defined independently executable functions, and preempts a currently running user-defined independently executable function by placing the particular user-defined independently executable function on the single stack that has register values for the currently running user-defined independently executable function.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: September 3, 2019
    Assignee: TYCO FIRE & SECURITY GMBH
    Inventors: Vincent J. Lipsio, Jr., Paul B. Rasband