Patents by Inventor Christopher Peter Kleynhans
Christopher Peter Kleynhans has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240111573
Abstract: Systems and methods for providing cross-partition preemption analysis and prevention. Computing devices typically include a main central processing unit (CPU) with multiple cores to execute instructions independently, cooperatively, or in other suitable manners. In some examples, one or more cores are partitioned and dedicated to a particular application, where exclusive access to the cores in the partition is intended for running processes of the application. In some examples, some “noise” can be introduced in a partition, where preemptions associated with other processes can interrupt execution of the particular application. A preemption diagnostics system and method identify and prevent sources of cross-partition preemption events from running in a dedicated CPU partition. Thus, the particular application has dedicated use of the cores in the partition. As a result, latency of the application is reduced and a bounded latency corresponding to a service level agreement can be achieved.
Type: Application
Filed: September 29, 2022
Publication date: April 4, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Omar CARDONA, Matthew WOOLMAN, Giovanni PITTALIS, Dmitry MALLOY, Christopher Peter KLEYNHANS
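The core idea of the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the thread table, partition core set, and application names are all made up, and real preemption diagnostics would operate on live scheduler state rather than dictionaries.

```python
# Hypothetical sketch of cross-partition preemption diagnostics: flag any
# thread whose CPU affinity overlaps a dedicated partition but which does
# not belong to the partitioned application, then prevent future preemptions
# by masking the partition's cores out of its affinity.

PARTITION_CORES = {4, 5, 6, 7}      # cores dedicated to the latency-sensitive app
DEDICATED_APP = "trading-engine"    # illustrative application name

def diagnose_and_prevent(threads):
    """threads: list of dicts with 'tid', 'app', and 'affinity' (set of core ids).
    Returns the ids of offending threads and rewrites their affinity masks."""
    offenders = []
    for t in threads:
        if t["app"] != DEDICATED_APP and t["affinity"] & PARTITION_CORES:
            offenders.append(t["tid"])
            # Prevention: exclude the partition's cores from the thread's mask.
            t["affinity"] -= PARTITION_CORES
    return offenders

threads = [
    {"tid": 1, "app": DEDICATED_APP, "affinity": {4, 5}},
    {"tid": 2, "app": "telemetry", "affinity": {3, 4}},  # cross-partition source
    {"tid": 3, "app": "logger", "affinity": {0, 1}},
]
print(diagnose_and_prevent(threads))  # -> [2]
```

After the pass, thread 2 can only run on core 3, so the dedicated application retains exclusive use of cores 4–7.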
-
Patent number: 11940860
Abstract: Systems and methods for managing a power budget are provided. The method includes designating, by a power budget manager implemented on at least one processor, each of one or more applications with an individual quality of service (QoS) designation, the one or more applications being executable by the at least one processor; assigning, by the power budget manager, a throttling priority to each of the one or more applications based on the individual QoS designations; determining, by the power budget manager, whether a platform mitigation threshold is exceeded; and, responsive to determining that the platform mitigation threshold is exceeded, throttling, by the power budget manager, the processing power allocated to at least one application of the one or more applications based on the throttling priorities.
Type: Grant
Filed: June 20, 2022
Date of Patent: March 26, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sandeep Prabhakar, Mark Allan Bellon, Mika Megan Latimer, Tristan Anthony Brown, Christopher Peter Kleynhans, Rahul Narayanan Nair
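The throttling order described in the abstract can be sketched as below. This is an illustrative toy, assuming a three-level QoS scheme and a simple wattage model; the actual patent does not specify these details.

```python
# Illustrative sketch: a power budget manager orders applications by QoS
# designation and, when total draw exceeds a platform mitigation threshold,
# throttles the lowest-QoS applications first. QoS levels and wattages are
# made-up inputs.

QOS_RANK = {"high": 0, "medium": 1, "low": 2}  # lower rank = throttled last

def enforce_budget(apps, threshold):
    """apps: list of dicts with 'name', 'qos', 'power' (watts).
    Reduces power of low-QoS apps until the total fits the threshold."""
    total = sum(a["power"] for a in apps)
    # Throttling priority: lowest QoS (highest rank value) is cut first.
    for app in sorted(apps, key=lambda a: QOS_RANK[a["qos"]], reverse=True):
        if total <= threshold:
            break
        cut = min(app["power"], total - threshold)
        app["power"] -= cut
        total -= cut
    return total

apps = [
    {"name": "db", "qos": "high", "power": 50},
    {"name": "batch", "qos": "low", "power": 40},
    {"name": "web", "qos": "medium", "power": 30},
]
print(enforce_budget(apps, threshold=90))  # -> 90
```

Here the low-QoS batch job absorbs the entire 30 W cut while the high- and medium-QoS applications keep their allocations.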
-
Publication number: 20220404888
Abstract: Systems and methods for managing a power budget are provided. The method includes designating, by a power budget manager implemented on at least one processor, each of one or more applications with an individual quality of service (QoS) designation, the one or more applications being executable by the at least one processor; assigning, by the power budget manager, a throttling priority to each of the one or more applications based on the individual QoS designations; determining, by the power budget manager, whether a platform mitigation threshold is exceeded; and, responsive to determining that the platform mitigation threshold is exceeded, throttling, by the power budget manager, the processing power allocated to at least one application of the one or more applications based on the throttling priorities.
Type: Application
Filed: June 20, 2022
Publication date: December 22, 2022
Inventors: Sandeep PRABHAKAR, Mark Allan BELLON, Mika Megan LATIMER, Tristan Anthony BROWN, Christopher Peter KLEYNHANS, Rahul NARAYANAN NAIR
-
Publication number: 20210109795
Abstract: Described herein is a system and method for latency-aware thread scheduling. For each processor core, an estimated cost to schedule a particular thread on the processor core is calculated. The estimated cost to schedule can be the period of time between the scheduling decision and the point in time when the scheduled thread begins to run. For each processor core, an estimated cost to execute the particular thread on the processor core is calculated. The estimated cost to execute can be the period of time spent actually running the particular thread on a particular processor core. A determination is made as to which processor core to utilize for execution of the particular thread based, at least in part, upon the calculated estimated costs to schedule the particular thread and/or the calculated estimated costs to execute the particular thread. The particular thread can be scheduled to execute on the determined processor core.
Type: Application
Filed: October 11, 2019
Publication date: April 15, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Gregory John COLOMBO, Rahul NAIR, Mark Allan BELLON, Christopher Peter KLEYNHANS, Jason LIN, Ojasvi CHOUDHARY, Tristan Anthony BROWN
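The placement decision from the abstract reduces to minimizing the sum of the two estimated costs. A minimal sketch, with made-up per-core cost tables standing in for the scheduler's real estimates:

```python
# Hypothetical sketch of latency-aware core selection: for each core, combine
# the estimated cost to schedule (time until the thread starts running) with
# the estimated cost to execute (time spent running, e.g. cache-warmth
# dependent), and pick the core minimizing their sum.

def pick_core(schedule_cost, execute_cost):
    """Both args map core id -> estimated cost in microseconds."""
    return min(schedule_cost, key=lambda c: schedule_cost[c] + execute_cost[c])

schedule_cost = {0: 5, 1: 50, 2: 15}   # core 1 is busy: long wait to run
execute_cost = {0: 120, 1: 40, 2: 60}  # core 1 has a warm cache for this thread

print(pick_core(schedule_cost, execute_cost))  # -> 2 (15 + 60 = 75 is minimal)
```

Note that neither the idlest core (0) nor the cache-warmest core (1) wins; combining both costs picks core 2, which is the point of considering them together.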
-
Patent number: 10929167
Abstract: Communicating a low-latency event across a virtual machine boundary. Based on an event signaling request by a first process running at a first virtual machine, the first virtual machine updates a shared register that is accessible by a second virtual machine. Updating the shared register includes updating a signal stored in the shared register. The first virtual machine sends an event signal message, which includes a register identifier, through a virtualization fabric to the second virtual machine. The second virtual machine receives the event signal message and identifies the register identifier from the message. Based on the register identifier, the second virtual machine reads the shared register, identifying the value of the signal stored in the shared register. Based at least on the value of the signal comprising a first value, the second virtual machine signals a second process running at the second virtual machine.
Type: Grant
Filed: January 9, 2019
Date of Patent: February 23, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jason Lin, Gregory John Colombo, Mehmet Iyigun, Yevgeniy Bak, Christopher Peter Kleynhans, Stephen Louis-Essman Hufnagel, Michael Ebersol, Ahmed Saruhan Karademir, Shawn Michael Denbow, Kevin Broas, Wen Jia Liu
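The message flow in the abstract can be traced with in-process stand-ins. This is only a sketch of the sequence of steps: a dictionary stands in for the shared register, a list for the virtualization fabric, and all names are assumptions.

```python
# Minimal sketch of the cross-VM signaling flow described above, with
# in-process stand-ins for the shared register and the virtualization fabric.

shared_registers = {}   # register id -> signal value, visible to both "VMs"
fabric_queue = []       # stand-in for the virtualization fabric's messages

def vm1_signal_event(reg_id, value):
    # 1. Update the signal stored in the shared register.
    shared_registers[reg_id] = value
    # 2. Send an event signal message carrying the register identifier.
    fabric_queue.append({"register_id": reg_id})

def vm2_receive():
    # 3. Receive the message and identify the register identifier.
    msg = fabric_queue.pop(0)
    # 4. Read the shared register to identify the signal's value.
    value = shared_registers[msg["register_id"]]
    # 5. Signal the second process only when the value matches the first value.
    return "process signaled" if value == 1 else "no-op"

vm1_signal_event(reg_id=7, value=1)
print(vm2_receive())  # -> process signaled
```

The key latency property motivating the design is that the bulky state lives in the shared register, so the fabric message only needs to carry a small identifier.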
-
Publication number: 20200218560
Abstract: Communicating a low-latency event across a virtual machine boundary. Based on an event signaling request by a first process running at a first virtual machine, the first virtual machine updates a shared register that is accessible by a second virtual machine. Updating the shared register includes updating a signal stored in the shared register. The first virtual machine sends an event signal message, which includes a register identifier, through a virtualization fabric to the second virtual machine. The second virtual machine receives the event signal message and identifies the register identifier from the message. Based on the register identifier, the second virtual machine reads the shared register, identifying the value of the signal stored in the shared register. Based at least on the value of the signal comprising a first value, the second virtual machine signals a second process running at the second virtual machine.
Type: Application
Filed: January 9, 2019
Publication date: July 9, 2020
Inventors: Jason LIN, Gregory John COLOMBO, Mehmet IYIGUN, Yevgeniy BAK, Christopher Peter KLEYNHANS, Stephen Louis-Essman HUFNAGEL, Michael EBERSOL, Ahmed Saruhan KARADEMIR, Shawn Michael DENBOW, Kevin BROAS, Wen Jia LIU
-
Patent number: 10579417
Abstract: The threads of a user mode process can access various different resources of a computing device, and such access can be serialized. To access a serialized resource, a thread acquires a lock for the resource. For each context switch in the computing device, a module of the operating system kernel checks for priority inversions, a situation in which a higher priority thread of the user mode process is waiting for (blocking on) a resource for which a lower priority thread has acquired a lock. In response to detecting such a priority inversion, the priority of the lower priority thread is boosted to allow the lower priority thread to execute and eventually release the lock that the higher priority thread is waiting for.
Type: Grant
Filed: April 26, 2017
Date of Patent: March 3, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Christopher Peter Kleynhans, Syed A. Raza
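The context-switch-time check can be sketched as below. This is a simplified illustration of priority boosting (priority inheritance), assuming toy data structures for waiters, lock holders, and priorities; it is not the kernel's actual representation.

```python
# Sketch of the priority-inversion check described above: if a higher-priority
# thread is blocking on a lock held by a lower-priority thread, boost the
# holder's priority so it can run and release the lock.

def check_priority_inversion(waiters, lock_holders, priority):
    """waiters: thread -> lock it is blocking on; lock_holders: lock -> thread;
    priority: thread -> numeric priority (higher number = higher priority).
    Mutates `priority` to boost holders; returns the boosted threads."""
    boosted = []
    for waiter, lock in waiters.items():
        holder = lock_holders[lock]
        if priority[waiter] > priority[holder]:   # priority inversion detected
            priority[holder] = priority[waiter]   # boost the lock holder
            boosted.append(holder)
    return boosted

priority = {"ui_thread": 24, "worker": 8}
waiters = {"ui_thread": "heap_lock"}     # high-priority thread is blocked...
lock_holders = {"heap_lock": "worker"}   # ...on a lock held by a low-priority one

print(check_priority_inversion(waiters, lock_holders, priority))  # -> ['worker']
print(priority["worker"])  # -> 24
```

Once boosted to priority 24, the worker is no longer starved by medium-priority threads, so it can finish its critical section and release the lock.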
-
Publication number: 20180314547
Abstract: The threads of a user mode process can access various different resources of a computing device, and such access can be serialized. To access a serialized resource, a thread acquires a lock for the resource. For each context switch in the computing device, a module of the operating system kernel checks for priority inversions, a situation in which a higher priority thread of the user mode process is waiting for (blocking on) a resource for which a lower priority thread has acquired a lock. In response to detecting such a priority inversion, the priority of the lower priority thread is boosted to allow the lower priority thread to execute and eventually release the lock that the higher priority thread is waiting for.
Type: Application
Filed: April 26, 2017
Publication date: November 1, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Christopher Peter Kleynhans, Syed A. Raza
-
Publication number: 20170279678
Abstract: Configuring a node. A method includes, at a first configuration layer, modifying configuration settings. The method further includes propagating the modified configuration settings to one or more other configuration layers implemented at the first configuration layer to configure a node.
Type: Application
Filed: March 28, 2016
Publication date: September 28, 2017
Inventors: Christopher Peter Kleynhans, Eric Wesley Wohllaib, Paul McAlpin Bozzay, Morakinyo Korede Olugbade, Frederick J. Smith, Benjamin M. Schultz, Gregory John Colombo, Hari R. Pulapaka, Mehmet Iyigun
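The layered propagation in the abstract can be sketched as follows. This is a speculative illustration since the abstract is terse: the layer names ("host", "container"), the override-wins rule, and the tree structure are all assumptions, not details from the patent.

```python
# Illustrative sketch of layered node configuration: settings modified at an
# outer configuration layer propagate to the layers implemented at it, each
# of which may hold its own explicit overrides.

class ConfigLayer:
    def __init__(self, name, overrides=None):
        self.name = name
        self.overrides = dict(overrides or {})   # this layer's explicit settings
        self.effective = dict(self.overrides)    # overrides + propagated values
        self.children = []                       # layers implemented at this layer

    def modify(self, key, value):
        # Modify a setting at this layer, then propagate it downward.
        self.overrides[key] = value
        self.effective[key] = value
        self._propagate(key, value)

    def _propagate(self, key, value):
        for child in self.children:
            if key not in child.overrides:       # an explicit override wins
                child.effective[key] = value
            # Grandchildren see the child's effective value, not ours.
            child._propagate(key, child.effective[key])

host = ConfigLayer("host")
container = ConfigLayer("container", overrides={"dns": "10.0.0.2"})
host.children.append(container)

host.modify("dns", "8.8.8.8")   # container keeps its own dns override
host.modify("mtu", 1500)        # container inherits the propagated mtu
print(container.effective)      # -> {'dns': '10.0.0.2', 'mtu': 1500}
```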
-
Publication number: 20160110283
Abstract: Disclosed are techniques and systems for providing on-demand expansion of a non-cache-aware synchronization primitive to a cache-aware form. The expansion may occur on-demand when it becomes necessary to do so for performance and throughput purposes. Expansion of the synchronization primitive may be based at least in part on a level of cache-line contention resulting from operations on the non-cache-aware synchronization primitive. The synchronization primitive in the expanded (cache-aware) form may be represented by a data structure that allocates individual cache lines to respective processors of a multiprocessor system in which the synchronization primitive is implemented. Once expanded, the cache-aware synchronization primitive may be contracted to its non-cache-aware form.
Type: Application
Filed: October 20, 2014
Publication date: April 21, 2016
Inventors: Mehmet Iyigun, Yevgeniy Bak, Christopher Peter Kleynhans, Syed Aunn Hasan Raza, Thomas James Ootjers, Neeraj Kumar Singh
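The expand/contract life cycle from the abstract can be modeled on a shared counter. This is only a sketch: the contention metric, expansion threshold, and per-CPU list (standing in for per-processor cache lines) are invented for illustration, and a real primitive would use atomics rather than Python lists.

```python
# Hypothetical sketch of on-demand expansion: a synchronization counter starts
# as a single shared word (non-cache-aware); when observed contention crosses
# a threshold, it expands to one slot per processor (each modeling a private
# cache line), and it can later contract back by summing the slots.

NUM_CPUS = 4
EXPAND_THRESHOLD = 3   # contention events tolerated before expanding

class AdaptiveCounter:
    def __init__(self):
        self.expanded = False
        self.value = 0            # compact (non-cache-aware) form
        self.per_cpu = None       # cache-aware form: one slot per CPU
        self.contention = 0

    def increment(self, cpu, contended=False):
        if not self.expanded:
            self.value += 1       # every CPU bounces the same "cache line"
            if contended:
                self.contention += 1
                if self.contention >= EXPAND_THRESHOLD:
                    self.expand()
        else:
            self.per_cpu[cpu] += 1   # touches only this CPU's "cache line"

    def expand(self):
        self.per_cpu = [0] * NUM_CPUS
        self.per_cpu[0] = self.value   # carry the compact value over
        self.expanded = True

    def contract(self):
        self.value = sum(self.per_cpu)
        self.per_cpu, self.expanded, self.contention = None, False, 0

    def total(self):
        return sum(self.per_cpu) if self.expanded else self.value

c = AdaptiveCounter()
for i in range(5):
    c.increment(cpu=i % NUM_CPUS, contended=True)
print(c.expanded, c.total())  # -> True 5
```

The trade-off the abstract describes is visible here: the compact form is small but every increment contends on one location, while the expanded form spends a cache line per CPU to make increments contention-free.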