Patents by Inventor Ulrich Drepper
Ulrich Drepper has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11960420
Abstract: Systems and methods for direct memory control operations on memory data structures. In one implementation, a processing device receives, from a component of an application runtime environment, a request to perform a memory access operation on a portion of a memory space; determines a data structure address for a portion of a memory data structure, wherein the portion of the data structure is associated with the portion of the memory space; and performs, in view of the data structure address, the memory access operation directly on the portion of the memory data structure.
Type: Grant
Filed: February 16, 2021
Date of Patent: April 16, 2024
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
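The flow described in the abstract can be illustrated with a minimal sketch, assuming a toy page-table-like structure (the names `ToyPageTable`, `direct_op`, and the 4 KiB page granularity are illustrative choices, not details from the patent): the runtime resolves a memory address to the address of the corresponding structure entry and applies the operation directly to that entry.

```python
# Hedged sketch, not the patented implementation: resolve an address to
# the entry of a page-table-like memory data structure and perform the
# requested operation directly on that entry.
PAGE_SIZE = 4096

class ToyPageTable:
    """Maps page numbers to mutable entries, standing in for the
    memory data structure described in the abstract."""
    def __init__(self):
        self.entries = {}

    def entry_address(self, addr):
        # Derive the data-structure "address" (here simply the page
        # number) for the portion of memory containing addr.
        return addr // PAGE_SIZE

    def direct_op(self, addr, op):
        # Apply the operation directly to the structure entry,
        # bypassing any higher-level memory-management API.
        page = self.entry_address(addr)
        entry = self.entries.setdefault(page, {"writable": False})
        op(entry)
        return entry

table = ToyPageTable()
entry = table.direct_op(0x2345, lambda e: e.__setitem__("writable", True))
```

The point of the sketch is the shortcut: the runtime component mutates the structure entry itself rather than issuing a request through an intermediate memory API.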
-
Patent number: 11960898
Abstract: The technology disclosed herein enables a processor that processes instructions synchronously in accordance with a processor clock to identify a first instruction specifying an asynchronous operation to be processed independently of the processor clock. The asynchronous operation is performed by an asynchronous execution unit that executes the asynchronous operation independently of the processor clock and generates at least one result of the asynchronous operation. A synchronous execution unit executes, in parallel with the execution of the asynchronous operation by the asynchronous execution unit, one or more second instructions specifying respective synchronous operations. Responsive to determining that the asynchronous execution unit has generated the at least one result of the asynchronous operation, the processor receives the at least one result of the asynchronous operation.
Type: Grant
Filed: December 18, 2020
Date of Patent: April 16, 2024
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
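A software analogy of this hardware scheme, assuming a worker thread as a stand-in for the asynchronous execution unit (the patent describes processor hardware, not threads): the asynchronous operation is dispatched, the "synchronous unit" keeps executing its own instruction stream in parallel, and the result is received once the asynchronous unit has generated it.

```python
# Hedged software analogy of the abstract's hardware scheme: a worker
# thread plays the asynchronous execution unit, the main thread plays
# the clocked synchronous execution unit.
from concurrent.futures import ThreadPoolExecutor

def slow_async_op(x):
    # Stands in for an operation handled by the asynchronous
    # execution unit, independent of the processor clock.
    return x * x

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(slow_async_op, 7)   # dispatch the async instruction

# The synchronous unit keeps retiring its own instructions
# while the asynchronous result is still pending.
sync_results = [i + 1 for i in range(5)]

result = future.result()   # receive the result once it is generated
executor.shutdown()
```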
-
Patent number: 11941104
Abstract: In one embodiment, a system receives an exec function invocation request from a second application to run a first application from an executable file. In response to receiving the exec function invocation request, the system determines a working directory associated with the second application. The system determines one or more extended attribute values associated with the working directory. The system determines, in view of the one or more extended attribute values, whether to allow or deny the first application to use the working directory to run the executable file of the first application.Type: Grant
Filed: December 3, 2020
Date of Patent: March 26, 2024
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
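The decision step can be sketched as follows. This is a minimal illustration, not the patented mechanism: the attribute name `user.noexec` and the injected `get_xattrs` reader are assumptions (a real Linux implementation would read attributes with `os.getxattr`/`os.listxattr` on the working directory).

```python
# Hedged sketch: allow or deny an exec request based on extended
# attributes of the requesting application's working directory.
def allow_exec(workdir, get_xattrs):
    """Return True if an exec from `workdir` should be allowed.
    `get_xattrs` abstracts the filesystem xattr lookup so the sketch
    stays portable; "user.noexec" is a hypothetical attribute name."""
    attrs = get_xattrs(workdir)
    return attrs.get("user.noexec") != b"1"

# Simulated attribute store standing in for the filesystem.
xattrs = {"/srv/app": {"user.noexec": b"1"}, "/home/dev": {}}
decision = allow_exec("/srv/app", lambda d: xattrs.get(d, {}))
```

Injecting the lookup function keeps the policy check testable without requiring a filesystem that supports user-namespace extended attributes.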
-
Patent number: 11941382
Abstract: Disclosed herein is technology to use customized compiler attributes to check source code. An example method may include: accessing, by a processing device executing a compiler, a source code that comprises a compiler attribute associated with a programming construct, wherein the compiler attribute is defined in the source code; executing, by the processing device, a function of the compiler to check the programming construct at a location in the source code, wherein the function checks the programming construct by evaluating the compiler attribute associated with the programming construct; determining, by the processing device executing the compiler, whether to generate a message indicating a status of the check; and generating, by the processing device executing the compiler, object code based on the source code that comprises the compiler attribute.
Type: Grant
Filed: February 7, 2022
Date of Patent: March 26, 2024
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
-
Publication number: 20240086359
Abstract: A method includes identifying, by a processor, one or more first arithmetic logic unit (ALU) operations in a first ALU operation queue, wherein the first ALU operations are associated with a first requested vector length and at least one first input vector; identifying, by the processor, one or more second ALU operations in a second ALU operation queue, wherein the second ALU operations are associated with a second requested vector length and at least one second input vector, wherein the processor comprises a vector logic unit, and the vector logic unit comprises a set of ALUs; determining a first subset of the set of ALUs and a second subset of the set of ALUs, in view of the first requested vector length, the second requested vector length, and one or more allocation criteria, wherein the first subset includes a first number of ALUs of the vector logic unit, and wherein the second subset includes a second number of ALUs of the vector logic unit; identifying one or more first identified operations from the f…
Type: Application
Filed: November 13, 2023
Publication date: March 14, 2024
Inventor: Ulrich Drepper
-
Patent number: 11928447
Abstract: Systems and methods for configuration management through information and code injection at compile time. An example method comprises: receiving a source code comprising one or more references to a variable; receiving metadata associated with the source code, wherein the metadata specifies a range of values of the variable; and identifying, in view of the range of values of the variable, a reachable section of the source code.
Type: Grant
Filed: July 26, 2021
Date of Patent: March 12, 2024
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
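The reachability idea can be illustrated with a small sketch. This is a runtime analogy of what the abstract describes as a compile-time analysis; the interval-overlap guard model is an assumption for illustration.

```python
# Hedged sketch: given metadata bounding a variable's values, keep
# only the code sections whose guard condition can still be satisfied.
def reachable_branches(value_range, branches):
    """value_range is the (lo, hi) metadata for the variable;
    each branch is (name, (guard_lo, guard_hi)). A branch is
    reachable iff its guard interval overlaps the value range."""
    lo, hi = value_range
    return [name for name, (blo, bhi) in branches
            if blo <= hi and bhi >= lo]

branches = [("small", (0, 9)), ("medium", (10, 99)), ("large", (100, 10**6))]
live = reachable_branches((5, 50), branches)
```

With the variable known to lie in [5, 50], the "large" section can never execute and could be dropped from the generated code.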
-
Patent number: 11816061
Abstract: A system includes a processing device that includes a vector arithmetic logic unit comprising a plurality of arithmetic logic units (ALUs), and a first processor core operatively coupled to the vector arithmetic logic unit, the processing device to receive a first vector instruction from the first processor core, wherein the first vector instruction specifies at least one first input vector having a first vector length, identify a first subset of the ALUs in view of the first vector length and one or more allocation criteria, execute, using the first subset of the set of ALUs, one or more first ALU operations specified by the first vector instruction, wherein the vector arithmetic logic unit executes the first ALU operations in parallel with one or more second ALU operations specified by a second vector instruction received from a second processor core.
Type: Grant
Filed: December 18, 2020
Date of Patent: November 14, 2023
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
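The subset-allocation step can be sketched as follows. The abstract leaves the allocation criteria open, so the proportional split used here is purely an assumed example policy.

```python
# Hedged sketch of ALU subset allocation between two vector
# instructions with different requested vector lengths.
def allocate_alus(total, len_a, len_b):
    """Split `total` ALUs between two instructions in view of their
    vector lengths. Proportional sharing is an assumed criterion; the
    patent covers "one or more allocation criteria" generically."""
    if len_a + len_b <= total:
        return len_a, len_b            # both fit fully in parallel
    share_a = max(1, total * len_a // (len_a + len_b))
    return share_a, total - share_a

a, b = allocate_alus(8, 12, 4)   # contended: lengths exceed the pool
```

When the pool is oversubscribed, each instruction gets a subset and iterates over its vector in chunks of that width; when both fit, they simply run side by side.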
-
Publication number: 20230325161
Abstract: The technology disclosed herein enables a processing device to receive program code comprising a plurality of program code instructions, wherein the plurality of program code instructions comprise at least one profiling instruction, wherein the program code is loaded from an executable program file that specifies a persistent memory region; and execute the program code including the at least one profiling instruction, wherein to execute the at least one profiling instruction, the processing device is to: identify a memory location in the persistent memory region in view of a profiling identifier included in the at least one profiling instruction, generate a profiling information item, and store the profiling information item in the persistent memory region at the identified memory location.
Type: Application
Filed: June 2, 2023
Publication date: October 12, 2023
Inventor: Ulrich Drepper
-
Publication number: 20230251838
Abstract: Disclosed herein is technology to use customized compiler attributes to enhance code optimizations. An example method may include: accessing, by a processing device executing a compiler, a source code comprising a compiler attribute, wherein the compiler attribute is defined by text in the source code; detecting, by the processing device, a call in the source code to a compiler function that processes the compiler attribute, wherein the source code includes logic to process the compiler attribute; executing, by the processing device, the compiler function at compile time, wherein the compiler function uses the logic to process the compiler attribute and generates attribute data; and using, by the compiler, the attribute data to optimize object code generated from the source code.
Type: Application
Filed: February 7, 2022
Publication date: August 10, 2023
Inventor: Ulrich Drepper
-
Publication number: 20230251837
Abstract: Disclosed herein is technology to use customized compiler attributes to check source code. An example method may include: accessing, by a processing device executing a compiler, a source code that comprises a compiler attribute associated with a programming construct, wherein the compiler attribute is defined in the source code; executing, by the processing device, a function of the compiler to check the programming construct at a location in the source code, wherein the function checks the programming construct by evaluating the compiler attribute associated with the programming construct; determining, by the processing device executing the compiler, whether to generate a message indicating a status of the check; and generating, by the processing device executing the compiler, object code based on the source code that comprises the compiler attribute.
Type: Application
Filed: February 7, 2022
Publication date: August 10, 2023
Inventor: Ulrich Drepper
-
Publication number: 20230205576
Abstract: A first processing device receives a request to execute a program, determines that an amount of memory modified by one or more instructions of the program satisfies a threshold criterion, and provides the one or more instructions to a second processing device for execution, wherein the second processing device implements an instruction set architecture of the first processing device.
Type: Application
Filed: February 27, 2023
Publication date: June 29, 2023
Inventor: Ulrich Drepper
-
Patent number: 11669312
Abstract: The technology disclosed herein enables a processor to receive program code comprising a plurality of program code instructions generated by a compiler in view of source code, identify, among the program code instructions, one or more optimizable instructions, wherein at least one of the optimizable instructions is associated with an execution characteristic, and the execution characteristic is associated with an optimization decision, identify a profiling instruction location associated with the at least one of the optimizable instructions, and add a profiling instruction to the program code at the profiling instruction location. The at least one profiling instruction comprises a profiling identifier, and causes the processing device to: generate a profiling information item in view of the execution characteristic of the optimizable instructions, and store the profiling information item in a persistent memory region at a memory location corresponding to the profiling identifier.
Type: Grant
Filed: April 27, 2021
Date of Patent: June 6, 2023
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
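The instrumentation step described in this abstract (and in the related publications 20230325161 and 20220342647) can be sketched as follows. A dict stands in for the persistent memory region, and the instruction representation is an assumption for illustration.

```python
# Hedged sketch: insert a profiling instruction after each optimizable
# instruction; the profiling identifier indexes a slot in a persistent
# region (here simulated by a dict).
def instrument(program, is_optimizable, profile_region):
    """Return a copy of `program` with a ("profile", id) instruction
    added after every instruction the predicate flags, reserving a
    slot for its profiling information item in `profile_region`."""
    out = []
    for ins in program:
        out.append(ins)
        if is_optimizable(ins):
            pid = len(profile_region)          # profiling identifier
            profile_region[pid] = 0            # reserve the slot
            out.append(("profile", pid))
    return out

region = {}
code = instrument(["load", "branch", "add", "branch"],
                  lambda i: i == "branch", region)
```

Because the identifier maps to a fixed location in a persistent region, profiling data can accumulate across runs and later drive the optimization decision attached to each instruction.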
-
Patent number: 11593156
Abstract: An instruction offload manager receives, by a processing device, a first request to execute a program, identifies one or more instructions of the program to be offloaded to a second processing device, where the second processing device includes a same instruction set architecture as the processing device, and provides the one or more instructions to a memory module comprising the second processing device. Responsive to detecting an indication to execute the one or more instructions, the instruction offload manager provides an indication to the second processing device to cause the second processing device to execute the one or more instructions, the one or more instructions to update a portion of a memory space associated with the memory module.
Type: Grant
Filed: August 16, 2019
Date of Patent: February 28, 2023
Assignee: Red Hat, Inc.
Inventor: Ulrich Drepper
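The selection criterion in the related publication 20230205576 (offload when the amount of memory modified satisfies a threshold) suggests a simple sketch; the instruction representation and byte counts here are assumed for illustration.

```python
# Hedged sketch: decide whether a program's instructions should be
# offloaded to a processing device inside the memory module.
def should_offload(instructions, threshold):
    """Offload when the bytes the instructions modify meet the
    threshold, so large memory updates run next to the memory module
    instead of moving data to the main CPU. Both devices share one
    instruction set architecture in the abstract, so the same
    instructions can run on either."""
    touched = sum(size for _, size in instructions)
    return touched >= threshold

prog = [("store", 64), ("memset", 4096), ("store", 8)]
offload = should_offload(prog, threshold=1024)
```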
-
Publication number: 20220342647
Abstract: The technology disclosed herein enables a processor to receive program code comprising a plurality of program code instructions generated by a compiler in view of source code, identify, among the program code instructions, one or more optimizable instructions, wherein at least one of the optimizable instructions is associated with an execution characteristic, and the execution characteristic is associated with an optimization decision, identify a profiling instruction location associated with the at least one of the optimizable instructions, and add a profiling instruction to the program code at the profiling instruction location. The at least one profiling instruction comprises a profiling identifier, and causes the processing device to: generate a profiling information item in view of the execution characteristic of the optimizable instructions, and store the profiling information item in a persistent memory region at a memory location corresponding to the profiling identifier.
Type: Application
Filed: April 27, 2021
Publication date: October 27, 2022
Inventor: Ulrich Drepper
-
Patent number: 11449452
Abstract: An apparatus includes multiple computing cores, where each computing core is configured to perform one or more processing operations and generate input data. The apparatus also includes multiple coprocessors associated with each computing core, where each coprocessor is configured to receive the input data from at least one of the computing cores, process the input data, and generate output data. The apparatus further includes multiple reducer circuits, where each reducer circuit is configured to receive the output data from each of the coprocessors of an associated computing core, apply one or more functions to the output data, and provide one or more results to the associated computing core. In addition, the apparatus includes multiple communication links communicatively coupling the computing cores and the coprocessors associated with the computing cores.
Type: Grant
Filed: April 6, 2017
Date of Patent: September 20, 2022
Assignee: Goldman Sachs & Co. LLC
Inventors: Paul Burchard, Ulrich Drepper
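The reducer stage can be illustrated with a minimal sketch. This is a software analogy of the hardware described in the abstract; the choice of `sum` and `max` as reduction functions is an arbitrary example.

```python
# Hedged sketch: a "reducer" collects the outputs of the coprocessors
# attached to one computing core, applies its functions, and returns
# the results to that core.
def reduce_outputs(coprocessor_outputs, functions):
    """Apply each reduction function to the full set of coprocessor
    outputs and return one result per function."""
    return [f(coprocessor_outputs) for f in functions]

outputs = [3, 1, 4, 1, 5]          # one value per coprocessor
results = reduce_outputs(outputs, [sum, max])
```

Performing the reduction near the coprocessors means the core receives a few aggregated results instead of every coprocessor's raw output.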
-
Publication number: 20220269637
Abstract: An apparatus includes multiple parallel computing cores and multiple parallel coprocessor/reducer cores associated with each computing core. Each computing core is configured to perform one or more processing operations, generate input data, and provide the input data to designated coprocessor/reducer cores associated with at least some of the computing cores. Each coprocessor/reducer core associated with a respective computing core is configured to generate output data. Some of the coprocessor/reducer cores associated with the respective computing core are configured to perform part of a distributed operation using the output data to generate intermediate results. A designated one of the coprocessor/reducer cores associated with the respective computing core is configured to provide one or more final results to the computing core.
Type: Application
Filed: May 11, 2022
Publication date: August 25, 2022
Inventors: Paul Burchard, Ulrich Drepper
-
Publication number: 20220261366
Abstract: Systems and methods for direct memory control operations on memory data structures. In one implementation, a processing device receives, from a component of an application runtime environment, a request to perform a memory access operation on a portion of a memory space; determines a data structure address for a portion of a memory data structure, wherein the portion of the data structure is associated with the portion of the memory space; and performs, in view of the data structure address, the memory access operation directly on the portion of the memory data structure.
Type: Application
Filed: February 16, 2021
Publication date: August 18, 2022
Inventor: Ulrich Drepper
-
Publication number: 20220197858
Abstract: A system includes a processing device that includes a vector arithmetic logic unit comprising a plurality of arithmetic logic units (ALUs), and a first processor core operatively coupled to the vector arithmetic logic unit, the processing device to receive a first vector instruction from the first processor core, wherein the first vector instruction specifies at least one first input vector having a first vector length, identify a first subset of the ALUs in view of the first vector length and one or more allocation criteria, execute, using the first subset of the set of ALUs, one or more first ALU operations specified by the first vector instruction, wherein the vector arithmetic logic unit executes the first ALU operations in parallel with one or more second ALU operations specified by a second vector instruction received from a second processor core.
Type: Application
Filed: December 18, 2020
Publication date: June 23, 2022
Inventor: Ulrich Drepper
-
Publication number: 20220197718
Abstract: The technology disclosed herein enables a processor that processes instructions synchronously in accordance with a processor clock to identify a first instruction specifying an asynchronous operation to be processed independently of the processor clock. The asynchronous operation is performed by an asynchronous execution unit that executes the asynchronous operation independently of the processor clock and generates at least one result of the asynchronous operation. A synchronous execution unit executes, in parallel with the execution of the asynchronous operation by the asynchronous execution unit, one or more second instructions specifying respective synchronous operations. Responsive to determining that the asynchronous execution unit has generated the at least one result of the asynchronous operation, the processor receives the at least one result of the asynchronous operation.
Type: Application
Filed: December 18, 2020
Publication date: June 23, 2022
Inventor: Ulrich Drepper
-
Publication number: 20220197616
Abstract: Systems and methods for supporting a compilation framework for hardware configuration generation.
Type: Application
Filed: December 18, 2020
Publication date: June 23, 2022
Inventors: Ulrich Drepper, Ahmed Sanaullah