Patents by Inventor Mohammad A. Abdallah

Mohammad A. Abdallah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240102525
    Abstract: Methods, apparatus, systems, and articles of manufacture to extend brake life cycle are disclosed herein. An example apparatus disclosed herein includes machine readable instructions and processor circuitry to at least one of instantiate or execute the machine readable instructions to activate a brake motor of a brake of a vehicle to cause contact between a brake pad of the brake and a rotor of the brake, compare a power output of the brake motor of the brake to a threshold, and determine a material degradation level of the brake pad based on the comparison of the power output to the threshold.
    Type: Application
    Filed: December 8, 2023
    Publication date: March 28, 2024
    Inventors: Andrew Edward Toy, Jeremy Lerner, Taylor Hawley, Mohammad Abouali, Ali Abdallah
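The entry above describes comparing a brake motor's power output against a threshold to infer pad wear. A minimal Python sketch of that comparison follows; the threshold value, wear levels, and function name are hypothetical and not taken from the application.

# Illustrative sketch (not from the patent application): estimate brake pad
# degradation by comparing the brake motor's power output against a
# calibrated threshold. The threshold and the degradation buckets are invented.

def estimate_pad_degradation(power_output_watts: float, threshold_watts: float = 180.0) -> str:
    """Return a coarse degradation level for a brake pad."""
    if power_output_watts <= threshold_watts:
        return "nominal"          # pad thickness within expected range
    elif power_output_watts <= 1.25 * threshold_watts:
        return "moderately worn"  # schedule inspection
    else:
        return "severely worn"    # recommend replacement

if __name__ == "__main__":
    for reading in (150.0, 200.0, 260.0):
        print(reading, "W ->", estimate_pad_degradation(reading))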
  • Patent number: 11656875
    Abstract: A method for emulating a guest centralized flag architecture by using a native distributed flag architecture. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks, wherein each of the instruction blocks comprises two half blocks; scheduling the instructions of the instruction block to execute in accordance with a scheduler; and using a distributed flag architecture to emulate a centralized flag architecture for the emulation of guest instruction execution.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 23, 2023
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
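As a rough illustration of the idea in the abstract above, the following Python sketch models per-half-block (distributed) flag updates being reconciled into the single centralized flag view a guest ISA expects. The flag names, data structures, and merge rule are assumptions for the example, not the patented design.

# Minimal sketch, not the patented hardware: reconcile distributed, per-block
# flag fields into the centralized flag register a guest ISA expects.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

FLAG_NAMES = ("ZF", "CF", "SF", "OF")

@dataclass
class HalfBlock:
    # Each half block records only the flags it actually updates.
    flag_updates: Dict[str, int] = field(default_factory=dict)

@dataclass
class InstructionBlock:
    halves: List[HalfBlock]  # two half blocks per block, per the abstract

def architectural_flags(blocks: List[InstructionBlock]) -> Dict[str, Optional[int]]:
    """Walk blocks in program order; the youngest writer of each flag wins,
    which is what a centralized flag register would have produced."""
    flags: Dict[str, Optional[int]] = {name: None for name in FLAG_NAMES}
    for block in blocks:
        for half in block.halves:
            flags.update(half.flag_updates)
    return flags

if __name__ == "__main__":
    program = [
        InstructionBlock([HalfBlock({"ZF": 1, "CF": 0}), HalfBlock({"SF": 0})]),
        InstructionBlock([HalfBlock({"CF": 1}), HalfBlock({})]),
    ]
    print(architectural_flags(program))  # ZF=1 from block 0, CF=1 from block 1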
  • Patent number: 11467839
    Abstract: A method for supporting architecture speculation in an out of order processor is disclosed. The method comprises fetching two threads into the processor, wherein a first thread executes in a speculative state and a second thread executes in a non-speculative state. The method also comprises enabling a speculative scope for an execution of the first thread and a non-speculative scope for an execution of the second thread in an architecture of the processor, wherein the speculative scope and the non-speculative scope can both be fetched into the architecture and be present concurrently.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: October 11, 2022
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
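The sketch below illustrates, in Python, one way to picture a speculative scope and a non-speculative scope coexisting: the speculative thread's register writes are buffered and either committed or squashed. Class and method names are hypothetical; this is not Intel's implementation.

# Illustrative sketch under stated assumptions: two threads share the machine,
# one in a speculative scope whose register writes are buffered, one in a
# non-speculative scope whose writes commit directly.

class ArchitecturalState:
    def __init__(self):
        self.regs = {}
        self.spec_regs = {}   # buffered writes for the speculative scope

    def write(self, reg, value, speculative):
        (self.spec_regs if speculative else self.regs)[reg] = value

    def read(self, reg, speculative):
        # The speculative scope sees its own writes first, then committed state.
        if speculative and reg in self.spec_regs:
            return self.spec_regs[reg]
        return self.regs.get(reg, 0)

    def commit_speculative(self):
        self.regs.update(self.spec_regs)
        self.spec_regs.clear()

    def squash_speculative(self):
        self.spec_regs.clear()

if __name__ == "__main__":
    state = ArchitecturalState()
    state.write("r1", 7, speculative=False)     # non-speculative thread
    state.write("r1", 99, speculative=True)     # speculative thread, same register
    print(state.read("r1", speculative=False))  # 7: committed state unaffected
    state.squash_speculative()                  # misspeculation: discard buffer
    print(state.read("r1", speculative=True))   # 7 again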
  • Patent number: 11294680
    Abstract: A microprocessor implemented method is disclosed. The method includes mapping a plurality of instructions in a guest address space to corresponding instructions in a native address space. The method further includes, for each of one or more guest branch instructions in said native address space fetched during execution, performing the following: determining a youngest prior guest branch target stored in a guest branch target register, determining a branch target for a respective guest branch instruction by adding an offset value for said respective guest branch instruction to said youngest prior guest branch target, where said offset value is adjusted to account for a difference in address in said guest address space between an instruction at a beginning of a guest instruction block and a branch instruction in said guest instruction block. The method further includes creating an entry in said guest branch target register for said branch target.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: April 5, 2022
    Assignee: Intel Corporation
    Inventor: Mohammad A. Abdallah
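A minimal sketch of the target computation described above, assuming one plausible reading of the offset adjustment (the encoded offset plus the branch's distance from the start of its guest block). The guest branch target register is modeled as a Python list whose last element is the youngest entry; all values are invented.

# Hedged sketch, not the patented hardware: compute a guest branch target from
# the youngest prior entry of a guest branch target register plus an adjusted
# offset, then create an entry for the new target.

def branch_target(gbtr, raw_offset, branch_addr_in_block, block_start_addr):
    youngest_prior = gbtr[-1]                      # youngest prior guest target
    adjustment = branch_addr_in_block - block_start_addr  # assumed adjustment rule
    target = youngest_prior + raw_offset + adjustment
    gbtr.append(target)                            # create entry for new target
    return target

if __name__ == "__main__":
    gbtr = [0x4000]                                # initial guest entry point
    # A branch 12 bytes into its guest block, with offset 0x40 encoded in it:
    print(hex(branch_target(gbtr, 0x40, branch_addr_in_block=0x400C,
                            block_start_addr=0x4000)))
    print([hex(t) for t in gbtr])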
  • Patent number: 11281481
    Abstract: A system for an agnostic runtime architecture is disclosed. The system includes a system emulation/virtualization converter, an application code converter, and a system converter wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system conversion process for executing code from a guest image. The system converter further comprises a guest fetch logic component for accessing a plurality of guest instructions, a guest fetch buffer coupled to the guest fetch logic component and a branch prediction component for assembling the plurality of guest instructions into a guest instruction block, and a plurality of conversion tables including a first level conversion table and a second level conversion table coupled to the guest fetch buffer for translating the guest instruction block into a corresponding native conversion block.
    Type: Grant
    Filed: July 22, 2015
    Date of Patent: March 22, 2022
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
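To make the two-level conversion idea concrete, here is a hedged Python sketch that translates a guest instruction block into a native block by consulting a first-level table for common opcodes and a second-level table for the rest. The guest and native mnemonics are invented; the patent describes hardware tables, not dictionaries.

# Illustrative sketch only: two-level lookup from guest instructions to native
# conversion templates.

FIRST_LEVEL = {            # hot guest opcodes -> native templates
    "g_add": ["n_add"],
    "g_load": ["n_ld"],
}
SECOND_LEVEL = {           # less common opcodes, multi-instruction expansions
    "g_push": ["n_sub_sp", "n_store"],
    "g_call": ["n_sub_sp", "n_store_ret", "n_jmp"],
}

def convert_block(guest_block):
    """Return the native conversion block for a guest instruction block."""
    native_block = []
    for guest_insn in guest_block:
        template = FIRST_LEVEL.get(guest_insn) or SECOND_LEVEL.get(guest_insn)
        if template is None:
            raise ValueError(f"no conversion entry for {guest_insn!r}")
        native_block.extend(template)
    return native_block

if __name__ == "__main__":
    print(convert_block(["g_load", "g_add", "g_call"]))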
  • Patent number: 11204769
    Abstract: A global front end scheduler to schedule instruction sequences to a plurality of virtual cores implemented via a plurality of partitionable engines. The global front end scheduler includes a thread allocation array to store a set of allocation thread pointers to point to a set of buckets in a bucket buffer in which execution blocks for respective threads are placed, a bucket buffer to provide a matrix of buckets, the bucket buffer including storage for the execution blocks, and a bucket retirement array to store a set of retirement thread pointers that track a next execution block to retire for a thread.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: December 21, 2021
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
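The following Python model sketches the bookkeeping described above: a bucket buffer holding execution blocks, per-thread allocation pointers recording where each thread's blocks were placed, and per-thread retirement pointers walking those blocks in order. Sizes and field names are assumptions, not the patented scheduler.

# Minimal software model, not the patented global front end scheduler.

class BucketBuffer:
    def __init__(self, num_buckets=8):
        self.buckets = [None] * num_buckets          # matrix flattened to a list
        self.alloc_ptr = {}                          # thread -> list of bucket indices
        self.retire_ptr = {}                         # thread -> index into that list

    def allocate(self, thread_id, execution_block):
        free = self.buckets.index(None)              # first free bucket
        self.buckets[free] = (thread_id, execution_block)
        self.alloc_ptr.setdefault(thread_id, []).append(free)
        self.retire_ptr.setdefault(thread_id, 0)
        return free

    def retire_next(self, thread_id):
        """Retire the thread's oldest unretired execution block, in order."""
        order = self.alloc_ptr[thread_id]
        idx = self.retire_ptr[thread_id]
        bucket = order[idx]
        _, block = self.buckets[bucket]
        self.buckets[bucket] = None                  # free the bucket
        self.retire_ptr[thread_id] = idx + 1
        return block

if __name__ == "__main__":
    bb = BucketBuffer()
    bb.allocate("T0", "blk A"); bb.allocate("T1", "blk X"); bb.allocate("T0", "blk B")
    print(bb.retire_next("T0"), bb.retire_next("T0"), bb.retire_next("T1"))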
  • Publication number: 20210342159
    Abstract: A method and apparatus including a cache controller coupled to a cache memory, wherein the cache controller receives a plurality of cache access requests, performs a pre-sorting of the plurality of cache access requests by a first stage of the cache controller to order the plurality of cache access requests, wherein the first stage functions by performing a pre-sorting and pre-clustering process on the plurality of cache access requests in parallel to map the plurality of cache access requests from a first position to a second position corresponding to ports or banks of a cache memory, performs the combining and splitting of the plurality of cache access requests by a second stage of the cache controller, and applies the plurality of cache access requests to the cache memory at line speed.
    Type: Application
    Filed: April 9, 2021
    Publication date: November 4, 2021
    Applicant: Intel Corporation
    Inventor: Mohammad Abdallah
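A simplified Python sketch of the two-stage flow in the abstract above (and in the granted patent 11003459 listed below): stage 1 pre-sorts and pre-clusters requests by the bank their address maps to, and stage 2 combines requests that fall in the same cache line. Line size, bank count, and request format are assumptions.

# Simplified sketch, not the patented cache controller.

LINE_SIZE = 64
NUM_BANKS = 4

def stage1_presort(requests):
    """Group request addresses by target bank (address-interleaved banks)."""
    banks = {b: [] for b in range(NUM_BANKS)}
    for addr in requests:
        bank = (addr // LINE_SIZE) % NUM_BANKS
        banks[bank].append(addr)
    return banks

def stage2_combine(banks):
    """Within each bank, collapse requests to the same line into one access."""
    per_bank_lines = {}
    for bank, addrs in banks.items():
        per_bank_lines[bank] = sorted({addr // LINE_SIZE for addr in addrs})
    return per_bank_lines

if __name__ == "__main__":
    reqs = [0x100, 0x104, 0x140, 0x2000, 0x2040, 0x13C]
    banks = stage1_presort(reqs)
    print(stage2_combine(banks))   # unique line indices each bank must service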
  • Patent number: 11163720
    Abstract: An execution unit to execute instructions using a time-lag sliced architecture (TLSA). The execution unit includes a first computation unit and a second computation unit, where each of the first computation unit and the second computation unit includes a plurality of logic slices arranged in order, where each of the plurality of logic slices except a lattermost logic slice is coupled to an immediately following logic slice to provide an output of that logic slice to the immediately following logic slice, where the immediately following logic slice is to execute with a time lag with respect to its immediately previous logic slice. Further, each of the plurality of logic slices of the second computation unit is coupled to a corresponding logic slice of the first computation unit to receive an output of the corresponding logic slice of the first computation unit.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: November 2, 2021
    Assignee: Intel Corporation
    Inventor: Mohammad A. Abdallah
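As an illustration only, the sketch below models a 32-bit add performed as four 8-bit slices, each starting one cycle after the previous so the inter-slice carry is ready, with a second unit consuming the corresponding slice outputs of the first. Slice width and the cycle model are assumptions, not the TLSA hardware.

# Illustrative sketch of time-lag sliced execution, not the patented design.

SLICE_BITS = 8
NUM_SLICES = 4
MASK = (1 << SLICE_BITS) - 1

def sliced_add(a, b):
    carry, slices = 0, []
    for i in range(NUM_SLICES):
        start_cycle = i                      # time lag: slice i begins at cycle i
        sa = (a >> (i * SLICE_BITS)) & MASK
        sb = (b >> (i * SLICE_BITS)) & MASK
        total = sa + sb + carry
        slices.append((start_cycle, total & MASK))
        carry = total >> SLICE_BITS
    return slices

def second_unit(slice_outputs):
    # Each slice of the second unit consumes the matching slice of the first unit.
    return [(cycle + 1, (~value) & MASK) for cycle, value in slice_outputs]

if __name__ == "__main__":
    out1 = sliced_add(0x01FFFFFF, 0x00000001)
    print(out1)                    # per-slice (start_cycle, sum_byte)
    print(second_unit(out1))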
  • Patent number: 11003459
    Abstract: A method and apparatus including a cache controller coupled to a cache memory, wherein the cache controller receives a plurality of cache access requests, performs a pre-sorting of the plurality of cache access requests by a first stage of the cache controller to order the plurality of cache access requests, wherein the first stage functions by performing a pre-sorting and pre-clustering process on the plurality of cache access requests in parallel to map the plurality of cache access requests from a first position to a second position corresponding to ports or banks of a cache memory, performs the combining and splitting of the plurality of cache access requests by a second stage of the cache controller, and applies the plurality of cache access requests to the cache memory at line speed.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: May 11, 2021
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
  • Patent number: 10908913
    Abstract: A method for a delayed branch implementation by using a front end track table. The method includes receiving an incoming instruction sequence using a global front end, wherein the instruction sequence includes at least one branch, creating a delayed branch in response to receiving the one branch, and using a front end track table to track both the delayed branch and the one branch.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: February 2, 2021
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
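The abstract above is terse, so the Python sketch below is only a guess at the bookkeeping: when the front end sees a branch, it records both the original branch and a synthesized delayed copy a few slots later in a track table. The delay distance and table layout are hypothetical.

# Hedged sketch, not the patented mechanism: track table entries for both the
# original branch and its delayed copy.

DELAY_SLOTS = 2   # assumed delay distance

def build_track_table(instruction_sequence):
    """Return {position: ('branch'|'delayed', target)} for a fetched sequence."""
    track_table = {}
    for pos, insn in enumerate(instruction_sequence):
        if insn[0] == "br":                       # ('br', target)
            target = insn[1]
            track_table[pos] = ("branch", target)
            track_table[pos + DELAY_SLOTS] = ("delayed", target)
    return track_table

if __name__ == "__main__":
    seq = [("add",), ("br", 0x80), ("mul",), ("sub",), ("ld",)]
    for pos, entry in sorted(build_track_table(seq).items()):
        print(pos, entry)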
  • Patent number: 10884739
    Abstract: Systems and methods for load canceling in a processor that is connected to an external interconnect fabric are disclosed. As a part of a method for load canceling in a processor that is connected to an external bus, and responsive to a flush request and a corresponding cancellation of pending speculative loads from a load queue, a type of one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is converted from load to prefetch. Data corresponding to one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is accessed and returned to cache as prefetch data. The prefetch data is retired in a cache location of the processor.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: January 5, 2021
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
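A small Python sketch of the conversion described above: on a flush, speculative loads already outstanding on the external fabric are retyped from load to prefetch, so the returning data still fills the cache but updates no register. Request fields and names are assumptions.

# Illustrative sketch only, not the patented implementation.

from dataclasses import dataclass

@dataclass
class BusRequest:
    addr: int
    kind: str = "LOAD"          # "LOAD" or "PREFETCH"

class Cache:
    def __init__(self):
        self.lines = {}
    def fill(self, addr, data):
        self.lines[addr] = data  # prefetch data retires into the cache

def flush_speculative(outstanding_requests):
    """Convert in-flight speculative loads to prefetches instead of dropping them."""
    for req in outstanding_requests:
        if req.kind == "LOAD":
            req.kind = "PREFETCH"

def on_data_return(req, data, cache, register_file):
    cache.fill(req.addr, data)
    if req.kind == "LOAD":               # only real loads update a register
        register_file[req.addr] = data

if __name__ == "__main__":
    cache, regs = Cache(), {}
    inflight = [BusRequest(0x1000), BusRequest(0x2000)]
    flush_speculative(inflight)          # flush arrives before data returns
    on_data_return(inflight[0], 0xAB, cache, regs)
    print(cache.lines, regs)             # cache filled, registers untouched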
  • Publication number: 20200341768
    Abstract: A method for emulating a guest centralized flag architecture by using a native distributed flag architecture. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks, wherein each of the instruction blocks comprises two half blocks; scheduling the instructions of the instruction block to execute in accordance with a scheduler; and using a distributed flag architecture to emulate a centralized flag architecture for the emulation of guest instruction execution.
    Type: Application
    Filed: July 14, 2020
    Publication date: October 29, 2020
    Applicant: Intel Corporation
    Inventor: Mohammad Abdallah
  • Patent number: 10810014
    Abstract: A microprocessor implemented method of speculatively maintaining a guest return address stack (GRAS) in a fetch stage of a microprocessor pipeline. The method includes mapping instructions in a guest address space to corresponding instructions in a native address space. For each of one or more function calls made in the native address space, performing the following: (a) pushing a current entry into the GRAS responsive to the function call, where the current entry includes a guest target return address and a corresponding native target return address associated with the function call; (b) popping the current entry from the GRAS responsive to processing a return instruction; (c) comparing the current entry with an entry popped from a return address stack (RAS) maintained at a later stage of the pipeline; and (d) responsive to a mismatch, fetching instructions from the return address in the entry popped from the RAS.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: October 20, 2020
    Assignee: Intel Corporation
    Inventor: Mohammad A. Abdallah
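The sketch below models the GRAS flow in Python: push a (guest return, native return) pair on a call, pop it on a return, and compare the popped entry against the authoritative RAS entry from later in the pipeline, redirecting fetch on a mismatch. Addresses and class names are invented for the example.

# Minimal sketch, not the patented pipeline logic.

class GRAS:
    def __init__(self):
        self.stack = []
    def push(self, guest_ret, native_ret):        # on a function call at fetch
        self.stack.append((guest_ret, native_ret))
    def pop(self):                                # on a return at fetch
        return self.stack.pop() if self.stack else None

def resolve_return(gras_entry, ras_entry):
    """Return the native address fetch should continue from."""
    if gras_entry == ras_entry:
        return gras_entry[1]          # speculation was right: keep going
    return ras_entry[1]               # mismatch: refetch from the RAS entry

if __name__ == "__main__":
    gras = GRAS()
    gras.push(guest_ret=0x4010, native_ret=0x9010)
    popped = gras.pop()
    print(hex(resolve_return(popped, (0x4010, 0x9010))))   # match
    print(hex(resolve_return(popped, (0x4020, 0x9020))))   # mismatch -> RAS wins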
  • Patent number: 10740126
    Abstract: Methods for supporting wide and efficient front-end operation with guest architecture emulation are disclosed. As a part of a method for supporting wide and efficient front-end operation, upon receiving a request to fetch a first far taken branch instruction, a cache line that includes the first far taken branch instruction, a next cache line, and a cache line located at the target of the first far taken branch instruction are read. Based on information that is accessed from a data table, the cache line and either the next cache line or the cache line located at the target are fetched in a single cycle.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: August 11, 2020
    Assignee: Intel Corporation
    Inventors: Mohammad Abdallah, Ankur Groen, Erika Gunadi, Mandeep Singh, Ravishankar Rao
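As a hedged illustration of the single-cycle fetch pairing described above, the Python sketch below chooses, per far taken branch, whether to fetch the next sequential cache line or the line at the branch target alongside the branch's own line, using a small table. Line size and table contents are assumptions.

# Hedged sketch of the idea, not Intel's front end.

LINE_SIZE = 64

def line_of(addr):
    return addr // LINE_SIZE

def fetch_pair(branch_addr, target_addr, taken_table):
    """Return the two line indices fetched together in a single cycle."""
    branch_line = line_of(branch_addr)
    predicted_taken = taken_table.get(branch_line, False)
    companion = line_of(target_addr) if predicted_taken else branch_line + 1
    return branch_line, companion

if __name__ == "__main__":
    taken_table = {line_of(0x1040): True}          # this branch usually takes
    print(fetch_pair(0x1040, 0x8000, taken_table)) # branch line + target line
    print(fetch_pair(0x2000, 0x9000, taken_table)) # branch line + next line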
  • Patent number: 10713047
    Abstract: Fast unaligned memory access. In accordance with a first embodiment of the present invention, a computing device includes a load queue memory structure configured to queue load operations and a store queue memory structure configured to queue store operations. The computing device also includes at least one bit configured to indicate the presence of an unaligned address component for an entry of said load queue memory structure, and at least one bit configured to indicate the presence of an unaligned address component for an entry of said store queue memory structure. The load queue memory structure may also include memory configured to indicate data forwarding of an unaligned address component from said store queue memory structure to said load queue memory structure.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Mandeep Singh, Mohammad Abdallah
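A minimal Python model of the queue-entry bits described above: an entry is marked unaligned when its access crosses a cache-line boundary, and a load entry can additionally record that its unaligned component was forwarded from the store queue. Line size and field names are assumptions.

# Illustrative model, not the patented load/store queues.

from dataclasses import dataclass

LINE_SIZE = 64

def crosses_line(addr, size):
    return (addr % LINE_SIZE) + size > LINE_SIZE

@dataclass
class QueueEntry:
    addr: int
    size: int
    unaligned: bool = False
    forwarded_unaligned: bool = False   # meaningful for load entries only

def enqueue(addr, size):
    return QueueEntry(addr, size, unaligned=crosses_line(addr, size))

if __name__ == "__main__":
    store = enqueue(0x103C, 8)          # spans the 0x1000 and 0x1040 lines
    load = enqueue(0x103C, 8)
    if store.unaligned and load.unaligned and store.addr == load.addr:
        load.forwarded_unaligned = True # unaligned component forwarded from store queue
    print(store, load, sep="\n")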
  • Publication number: 20200174792
    Abstract: A microprocessor implemented method is disclosed. The method includes mapping a plurality of instructions in a guest address space to corresponding instructions in a native address space. The method further includes, for each of one or more guest branch instructions in said native address space fetched during execution, performing the following: determining a youngest prior guest branch target stored in a guest branch target register, determining a branch target for a respective guest branch instruction by adding an offset value for said respective guest branch instruction to said youngest prior guest branch target, where said offset value is adjusted to account for a difference in address in said guest address space between an instruction at a beginning of a guest instruction block and a branch instruction in said guest instruction block. The method further includes creating an entry in said guest branch target register for said branch target.
    Type: Application
    Filed: October 31, 2019
    Publication date: June 4, 2020
    Inventor: Mohammad A. Abdallah
  • Publication number: 20200142701
    Abstract: A global front end scheduler to schedule instruction sequences to a plurality of virtual cores implemented via a plurality of partitionable engines. The global front end scheduler includes a thread allocation array to store a set of allocation thread pointers to point to a set of buckets in a bucket buffer in which execution blocks for respective threads are placed, a bucket buffer to provide a matrix of buckets, the bucket buffer including storage for the execution blocks, and a bucket retirement array to store a set of retirement thread pointers that track a next execution block to retire for a thread.
    Type: Application
    Filed: January 2, 2020
    Publication date: May 7, 2020
    Inventor: Mohammad Abdallah
  • Patent number: 10592300
    Abstract: A method for forwarding data from store instructions to a corresponding load instruction in an out of order processor. The method includes accessing an incoming sequence of instructions; reordering the instructions in accordance with processor resources for dispatch and execution; and ensuring a closest earlier store in machine order for a corresponding load by determining: if said store has an actual age but said corresponding load does not, then said store is earlier than said corresponding load; if said corresponding load has an actual age but said store does not, then said corresponding load is earlier than said store; if neither said corresponding load nor said store has an actual age, then a virtual identifier table is used to determine which is earlier; and if both said corresponding load and said store have actual ages, then the actual ages are used to determine which is earlier.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: March 17, 2020
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
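The ordering rule in the abstract above translates almost directly into code. The Python sketch below applies it: actual ages win when both are known, a known age beats an unknown one, and a virtual identifier table breaks the tie when neither age is known. The table format is an assumption.

# Minimal sketch of the ordering rule; the real design is hardware.

def store_is_earlier(store_age, load_age, store_vid, load_vid, virtual_id_order):
    """Return True if the store is earlier than the load in machine order."""
    if store_age is not None and load_age is None:
        return True                    # store has an actual age, load does not
    if load_age is not None and store_age is None:
        return False                   # load has an actual age, store does not
    if store_age is None and load_age is None:
        # Neither has an actual age: consult the virtual identifier table.
        return virtual_id_order.index(store_vid) < virtual_id_order.index(load_vid)
    return store_age < load_age        # both have actual ages: compare them

if __name__ == "__main__":
    vid_order = ["v3", "v7", "v9"]     # dispatch order of virtual identifiers
    print(store_is_earlier(5, None, "v3", "v7", vid_order))    # True
    print(store_is_earlier(None, None, "v9", "v7", vid_order)) # False
    print(store_is_earlier(8, 6, "v3", "v7", vid_order))       # False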
  • Patent number: 10585804
    Abstract: Systems and methods for non-blocking implementation of cache flush instructions are disclosed. As a part of a method, data is accessed that is received in a write-back data holding buffer from a cache flushing operation, the data is flagged with a processor identifier and a serialization flag, and responsive to the flagging, the cache is notified that the cache flush is completed. Subsequent to the notifying, access is provided to data then present in the write-back data holding buffer to determine if data then present in the write-back data holding buffer is flagged.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
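The following Python sketch mirrors the sequence in the abstract: flush data landing in a write-back holding buffer is tagged with a processor identifier and a serialization flag, the flush is reported complete immediately, and a later check scans the buffer for entries still flagged. Field and function names are hypothetical.

# Illustrative sketch, not the patented cache logic.

from dataclasses import dataclass
from typing import List

@dataclass
class WBEntry:
    addr: int
    data: int
    cpu_id: int = -1
    serialization_flag: bool = False

def non_blocking_flush(flushed_entries: List[WBEntry], holding_buffer: List[WBEntry], cpu_id: int) -> str:
    for entry in flushed_entries:
        entry.cpu_id = cpu_id
        entry.serialization_flag = True        # flag before reporting completion
        holding_buffer.append(entry)
    return "flush complete"                    # cache notified without waiting on memory

def pending_flagged(holding_buffer: List[WBEntry], cpu_id: int) -> List[WBEntry]:
    """Entries this processor must still drain before a serializing operation."""
    return [e for e in holding_buffer if e.cpu_id == cpu_id and e.serialization_flag]

if __name__ == "__main__":
    buffer: List[WBEntry] = []
    print(non_blocking_flush([WBEntry(0x40, 1), WBEntry(0x80, 2)], buffer, cpu_id=0))
    print(len(pending_flagged(buffer, cpu_id=0)), "entries still flagged")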
  • Patent number: 10585670
    Abstract: A processor architecture includes a register file hierarchy to implement virtual registers that provide a larger set of registers than those directly supported by an instruction set architecture to facilitate multiple copies of the same architecture register for different processing threads, where the register file hierarchy includes a plurality of hierarchy levels. The processor architecture further includes a plurality of execution units coupled to the register file hierarchy.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventor: Mohammad A. Abdallah
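To picture the register file hierarchy described above, here is a small Python sketch in which a fast level backed by a larger level holds per-thread copies of the same architectural register, spilling entries downward when the fast level fills. Level sizes and the spill policy are assumptions, not the patented design.

# Minimal sketch of a two-level register file hierarchy with virtual registers.

class RegisterFileHierarchy:
    def __init__(self, level0_slots=4):
        self.level0 = {}                 # fast level: (thread, arch_reg) -> value
        self.level1 = {}                 # larger backing level
        self.level0_slots = level0_slots

    def write(self, thread, arch_reg, value):
        key = (thread, arch_reg)
        if key not in self.level0 and len(self.level0) >= self.level0_slots:
            victim, victim_val = self.level0.popitem()   # spill one entry down
            self.level1[victim] = victim_val
        self.level0[key] = value

    def read(self, thread, arch_reg):
        key = (thread, arch_reg)
        if key in self.level0:
            return self.level0[key]
        return self.level1.get(key, 0)   # miss in level 0 falls back to level 1

if __name__ == "__main__":
    rf = RegisterFileHierarchy(level0_slots=1)
    rf.write("T0", "r1", 10)
    rf.write("T1", "r1", 20)             # same architectural register, other thread
    print(rf.read("T0", "r1"), rf.read("T1", "r1"))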