Patents by Inventor Edward McLellan

Edward McLellan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11916247
    Abstract: Battery packs according to some embodiments of the present technology may include a first longitudinal beam and a second longitudinal beam. The battery packs may include a plurality of cell blocks disposed between the first longitudinal beam and the second longitudinal beam. The plurality of cell blocks may include first and second cell blocks each characterized by a first side surface proximate the first longitudinal beam, a second side surface, a third side surface proximate the second longitudinal beam, and a fourth side surface. The battery packs may include a first interface material thermally coupling the first side surface of the first cell block with the first longitudinal beam. The battery packs may also include a second interface material thermally coupling the third side surface of the second cell block with the second longitudinal beam.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Josef L. Miler, Luke A. Wilhelm, Edward T. Hillstrom, Dirk E. Long, Russell A. McLellan, Yu-Hung Li, Maria N. Luckyanova, Evan D. Maley, Edward T. Sweet
  • Patent number: 11880260
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes the entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread while consuming less power and delivering lower performance than the first processor.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: January 23, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Elliot H. Mednick, Edward McLellan
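    Illustrative sketch: A minimal C++ model of the scheduling idea behind the abstract above, assuming ISA feature sets can be represented as bitmasks. The feature names, the Core struct, and pick_core are invented for illustration and are not taken from the patent.
      #include <cstdint>
      #include <cstdio>

      // Each bit stands for one optional ISA feature (names are invented).
      enum IsaFeature : uint32_t {
          FEAT_BASE    = 1u << 0,
          FEAT_SIMD128 = 1u << 1,
          FEAT_SIMD256 = 1u << 2,
          FEAT_CRYPTO  = 1u << 3,
      };

      struct Core {
          const char* name;
          uint32_t    supported;   // subset of the ISA features this core implements
          bool        low_power;   // true for the smaller, lower-feature core
      };

      // Pick the low-power core whenever it supports every feature the thread needs;
      // otherwise fall back to the core that implements the full feature set.
      const Core* pick_core(uint32_t thread_features, const Core& big, const Core& little) {
          if (little.low_power && (thread_features & little.supported) == thread_features)
              return &little;
          return &big;
      }

      int main() {
          Core big    = {"big",    FEAT_BASE | FEAT_SIMD128 | FEAT_SIMD256 | FEAT_CRYPTO, false};
          Core little = {"little", FEAT_BASE | FEAT_SIMD128,                              true};

          uint32_t scalar_thread = FEAT_BASE;                  // needs only baseline features
          uint32_t wide_thread   = FEAT_BASE | FEAT_SIMD256;   // needs a feature the little core lacks

          std::printf("scalar thread -> %s core\n", pick_core(scalar_thread, big, little)->name);
          std::printf("wide-SIMD thread -> %s core\n", pick_core(wide_thread, big, little)->name);
          return 0;
      }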
  • Patent number: 11645209
    Abstract: The size of a cache is modestly increased so that each cache line also holds a short pointer to a predicted next memory address in the same cache. In response to a cache hit, when the cache line of the hit contains a valid short pointer to a predicted next memory address, that address and its associated entry are pushed to the next faster cache.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: May 9, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
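    Illustrative sketch: A minimal C++ model of the cache extension described in the abstract above, assuming the short pointer is stored as an index into the same cache and the faster cache simply records pushed lines. The Line, L2Cache, and L1Cache types are invented stand-ins, not structures from the patent.
      #include <array>
      #include <cstdint>
      #include <cstdio>
      #include <unordered_map>

      // A cache line extended with a short "next pointer": the index of the line
      // predicted to be needed next within the same cache.
      struct Line {
          uint64_t tag = 0;
          bool     valid = false;
          int      next_idx = -1;   // -1 means no valid short pointer
      };

      struct L2Cache {
          std::array<Line, 8> lines{};
      };

      // A toy faster cache that only records which tags were pushed into it.
      struct L1Cache {
          std::unordered_map<uint64_t, bool> present;
          void push(uint64_t tag) { present[tag] = true; }
      };

      // On an L2 hit, if the hit line carries a valid short pointer, push the
      // pointed-to line (the predicted next access) into the faster cache.
      bool access(L2Cache& l2, L1Cache& l1, int idx, uint64_t tag) {
          Line& line = l2.lines[idx];
          if (!line.valid || line.tag != tag) return false;          // miss
          if (line.next_idx >= 0 && l2.lines[line.next_idx].valid)   // valid short pointer present
              l1.push(l2.lines[line.next_idx].tag);                  // prefetch-style push
          return true;                                               // hit
      }

      int main() {
          L2Cache l2;
          L1Cache l1;
          l2.lines[0] = {0x100, true, 3};   // line 0 predicts that line 3 is needed next
          l2.lines[3] = {0x340, true, -1};

          access(l2, l1, 0, 0x100);
          std::printf("predicted line 0x340 pushed to faster cache: %s\n",
                      l1.present.count(0x340) ? "yes" : "no");
          return 0;
      }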
  • Publication number: 20230015240
    Abstract: Described are systems and methods for power management. A processing system includes one or more cores and a connected power management unit (PMU). The PMU is selected from one of: a first level PMU which can power scale a core; a second level PMU which can independently control power from a shared cluster power supply to each core of two or more cores in a cluster; a third level PMU where each core includes a power monitor which can track power performance metrics of an associated core; and a fourth level PMU when a complex includes multiple clusters and each cluster includes a set of the one or more cores, the fourth level PMU including a complex PMU and a cluster PMU for each of the multiple clusters, where the complex PMU and cluster PMUs provide two-tier power management. Higher level PMUs include the power management functionality of lower level PMUs.
    Type: Application
    Filed: June 28, 2022
    Publication date: January 19, 2023
    Inventor: Edward McLellan
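    Illustrative sketch: A minimal C++ sketch of the four cumulative PMU capability levels listed in the abstract above, assuming a simple ordering in which each higher level includes the functionality of the levels below it. The PmuLevel names and the supports() helper are invented for illustration.
      #include <cstdio>

      // Invented labels for the four PMU capability levels the abstract lists.
      enum class PmuLevel {
          CorePowerScaling = 1,    // level 1: power scale a core
          PerCoreClusterControl,   // level 2: per-core control from a shared cluster supply
          PerCorePowerMonitor,     // level 3: per-core power performance monitoring
          TwoTierComplex           // level 4: complex PMU plus per-cluster PMUs
      };

      // A higher level PMU includes the power management functionality of the
      // lower levels, so a capability check compares against a minimum level.
      bool supports(PmuLevel pmu, PmuLevel needed) {
          return static_cast<int>(pmu) >= static_cast<int>(needed);
      }

      int main() {
          PmuLevel pmu = PmuLevel::PerCorePowerMonitor;   // a level-3 PMU
          std::printf("can power scale a core: %s\n",
                      supports(pmu, PmuLevel::CorePowerScaling) ? "yes" : "no");
          std::printf("provides two-tier complex/cluster management: %s\n",
                      supports(pmu, PmuLevel::TwoTierComplex) ? "yes" : "no");
          return 0;
      }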
  • Publication number: 20210365378
    Abstract: The size of a cache is modestly increased so that each cache line also holds a short pointer to a predicted next memory address in the same cache. In response to a cache hit, when the cache line of the hit contains a valid short pointer to a predicted next memory address, that address and its associated entry are pushed to the next faster cache.
    Type: Application
    Filed: August 3, 2021
    Publication date: November 25, 2021
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
  • Patent number: 11080195
    Abstract: The size of a cache is modestly increased so that each cache line also holds a short pointer to a predicted next memory address in the same cache. In response to a cache hit, when the cache line of the hit contains a valid short pointer to a predicted next memory address, that address and its associated entry are pushed to the next faster cache.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: August 3, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
  • Publication number: 20210081323
    Abstract: The hit rate of an L1 icache when operating with large programs is substantially improved by reserving a section of the L1 icache for regular instructions and a section for non-instruction information. Instructions are prefetched for storage in the instruction section of the L1 icache based on information in the non-instruction section of the L1 icache.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 18, 2021
    Inventors: Edward McLellan, Alexander Rucker, Shay Gal-On, Srilatha Manne
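    Illustrative sketch: A minimal C++ model of an instruction cache split into a regular-instruction section and a non-instruction (prefetch hint) section, as the abstract above describes. The IcacheModel layout and the trigger/target hint format are assumptions made for illustration, not details from the publication.
      #include <cstdint>
      #include <cstdio>
      #include <unordered_set>
      #include <utility>
      #include <vector>

      // A toy L1 icache with two reserved sections: one for regular instruction
      // lines and one for non-instruction (prefetch hint) records.
      struct IcacheModel {
          std::unordered_set<uint64_t> instr_section;                  // cached instruction line addresses
          std::vector<std::pair<uint64_t, uint64_t>> hint_section;     // (trigger line, line to prefetch)

          // On a fetch, consult the non-instruction section: if this line is a
          // trigger, prefetch the hinted target into the instruction section.
          bool fetch(uint64_t line_addr) {
              for (const auto& hint : hint_section)
                  if (hint.first == line_addr)
                      instr_section.insert(hint.second);               // prefetch the predicted line
              return instr_section.count(line_addr) != 0;              // hit or miss
          }
      };

      int main() {
          IcacheModel icache;
          icache.instr_section = {0x1000, 0x1040};
          icache.hint_section.push_back({0x1040, 0x8000});   // after 0x1040, 0x8000 is likely needed

          icache.fetch(0x1040);                              // triggers the prefetch of 0x8000
          std::printf("0x8000 resident after prefetch: %s\n",
                      icache.fetch(0x8000) ? "yes" : "no");
          return 0;
      }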
  • Publication number: 20210073132
    Abstract: The size of a cache is modestly increased so that each cache line also holds a short pointer to a predicted next memory address in the same cache. In response to a cache hit, when the cache line of the hit contains a valid short pointer to a predicted next memory address, that address and its associated entry are pushed to the next faster cache.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 11, 2021
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
  • Publication number: 20200393887
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes the entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread while consuming less power and delivering lower performance than the first processor.
    Type: Application
    Filed: June 25, 2020
    Publication date: December 17, 2020
    Inventors: Elliot H. Mednick, Edward McLellan
  • Patent number: 10698472
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes the entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread while consuming less power and delivering lower performance than the first processor.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: June 30, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Elliot H. Mednick, Edward McLellan
  • Patent number: 10540181
    Abstract: Instructions are executed in a pipeline of a processor, where each instruction is associated with a particular context. A first storage stores branch prediction information characterizing results of branch instructions previously executed. The first storage is dynamically partitioned into partitions of one or more entries. Dynamically partitioning includes updating a partition to include an additional entry by associating the additional entry with a particular subset of one or more contexts. A predicted branch result is determined based on at least a portion of the branch prediction information. An actual branch result provided based on an executed branch instruction is used to update the branch prediction information. Providing a predicted branch result for a first branch instruction includes retrieving a first entry from a first partition based at least in part on an identified first subset of one or more contexts associated with the first branch instruction.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: January 21, 2020
    Assignee: Marvell World Trade Ltd.
    Inventors: Shubhendu Sekhar Mukherjee, Richard Eugene Kessler, David Kravitz, Edward McLellan, Rabin Sugumar
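    Illustrative sketch: A minimal C++ model of branch prediction storage partitioned by context, assuming a context is identified by a small integer and each partition holds 2-bit saturating counters. The PartitionedPredictor type and its indexing scheme are invented for illustration and omit the tagging and replacement details a real predictor would need.
      #include <cstdint>
      #include <cstdio>
      #include <map>
      #include <vector>

      // Branch prediction storage dynamically partitioned by context, so one
      // context's branch history is neither read nor polluted by another's.
      struct PartitionedPredictor {
          std::map<uint32_t, std::vector<int8_t>> partitions;   // context id -> 2-bit counters

          // Dynamically grow a context's partition by associating more entries with it.
          void grow(uint32_t ctx, size_t extra) {
              partitions[ctx].resize(partitions[ctx].size() + extra, 0);
          }

          bool predict(uint32_t ctx, uint64_t pc) {
              std::vector<int8_t>& part = partitions[ctx];
              if (part.empty()) return false;                   // default prediction: not taken
              return part[pc % part.size()] >= 2;               // counter in the taken half
          }

          void update(uint32_t ctx, uint64_t pc, bool taken) {  // apply the actual branch result
              std::vector<int8_t>& part = partitions[ctx];
              if (part.empty()) return;
              int8_t& c = part[pc % part.size()];
              if (taken && c < 3) ++c;
              else if (!taken && c > 0) --c;
          }
      };

      int main() {
          PartitionedPredictor bp;
          bp.grow(1, 64);   // partition for context 1
          bp.grow(2, 64);   // partition for context 2

          for (int i = 0; i < 4; ++i) bp.update(1, 0x400123, true);   // train context 1 only
          std::printf("context 1 predicts taken: %s\n", bp.predict(1, 0x400123) ? "yes" : "no");
          std::printf("context 2 predicts taken: %s\n", bp.predict(2, 0x400123) ? "yes" : "no");
          return 0;
      }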
  • Publication number: 20190227803
    Abstract: Instructions are executed in a pipeline of a processor, where each instruction is associated with a particular context. A first storage stores branch prediction information characterizing results of branch instructions previously executed. The first storage is dynamically partitioned into partitions of one or more entries. Dynamically partitioning includes updating a partition to include an additional entry by associating the additional entry with a particular subset of one or more contexts. A predicted branch result is determined based on at least a portion of the branch prediction information. An actual branch result provided based on an executed branch instruction is used to update the branch prediction information. Providing a predicted branch result for a first branch instruction includes retrieving a first entry from a first partition based at least in part on an identified first subset of one or more contexts associated with the first branch instruction.
    Type: Application
    Filed: January 25, 2018
    Publication date: July 25, 2019
    Inventors: Shubhendu Sekhar Mukherjee, Richard Eugene Kessler, David Kravitz, Edward McLellan, Rabin Sugumar
  • Publication number: 20190129489
    Abstract: A heterogeneous processor system includes a first processor implementing an instruction set architecture (ISA) including a set of ISA features and configured to support a first subset of the set of ISA features. The heterogeneous processor system also includes a second processor implementing the ISA including the set of ISA features and configured to support a second subset of the set of ISA features, wherein the first subset and the second subset of the set of ISA features are different from each other. When the first subset includes the entirety of the set of ISA features, the lower-feature second processor is configured to execute an instruction thread while consuming less power and delivering lower performance than the first processor.
    Type: Application
    Filed: October 27, 2017
    Publication date: May 2, 2019
    Inventors: Elliot H. Mednick, Edward McLellan
  • Patent number: 10142258
    Abstract: Methods and apparatus for delegating instructions or data from a CU to an NOC node in a network on chip (NOC) are disclosed. The NOC node executes the delegated instructions or processes the delegated data. An NOC controller (NCC), which is operatively coupled to the CU and the NOC node, facilitates delegating the instructions or data from the CU to the NOC node.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: November 27, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greg Sadowski, Edward McLellan
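    Illustrative sketch: A minimal C++ model of the delegation flow the abstract above describes, assuming delegated work can be represented as a callable routed by node id. The WorkItem, NocNode, and NocController types are invented stand-ins, not interfaces from the patent.
      #include <cstdio>
      #include <functional>
      #include <vector>

      // A unit of delegated work: which NOC node should run it, and what to run.
      struct WorkItem {
          int dest_node;
          std::function<void()> task;   // delegated instructions/data, modeled as a callable
      };

      // An NOC node that executes work delegated to it.
      struct NocNode {
          int id;
          void execute(const WorkItem& w) { w.task(); }
      };

      // The NOC controller (NCC): sits between the CU and the nodes and
      // forwards each delegated item to its destination node.
      struct NocController {
          std::vector<NocNode>* nodes;
          void delegate(const WorkItem& w) { (*nodes)[w.dest_node].execute(w); }
      };

      int main() {
          std::vector<NocNode> nodes = {{0}, {1}, {2}};
          NocController ncc{&nodes};

          // The CU side: hand the work to the NCC instead of executing it locally.
          ncc.delegate({2, [] { std::printf("node 2 ran the delegated task\n"); }});
          return 0;
      }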
  • Publication number: 20170295111
    Abstract: Methods and apparatus for delegating instructions or data from a CU to an NOC node in a network on chip (NOC) are disclosed. The NOC node executes the delegated instructions or processes the delegated data. An NOC controller (NCC), which is operatively coupled to the CU and the NOC node, facilitates delegating the instructions or data from the CU to the NOC node.
    Type: Application
    Filed: April 8, 2016
    Publication date: October 12, 2017
    Inventors: Greg Sadowski, Edward McLellan
  • Patent number: 9110802
    Abstract: A method of implementing a mask load or mask store instruction by a processor is provided. The method may include receiving the mask load or mask store instruction, a location of a memory operand and a location of corresponding mask bits associated with the memory operand, breaking the received memory operand into a plurality of sub-operands and executing the mask load or mask store instruction on each of the plurality of sub-operands using a fastpath operation or using microcode, wherein the respective mask load or mask store instruction loads or stores each of the plurality of sub-operands based upon the corresponding mask bits.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: August 18, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Kelvin Goveas, Edward McLellan, Steven Beigelmacher, David Kroesche, Michael Clark
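    Illustrative sketch: A minimal C++ model of a masked load split into sub-operands, assuming a 256-bit memory operand broken into four 64-bit lanes, each loaded only when its mask bit is set. The Vec256 alias and mask_load function are invented for illustration; the fastpath-versus-microcode decision described in the patent is not modeled.
      #include <array>
      #include <cstdint>
      #include <cstdio>

      // The 256-bit operand is treated as four 64-bit sub-operands.
      using Vec256 = std::array<uint64_t, 4>;

      // Load each sub-operand only if its corresponding mask bit is set;
      // lanes whose mask bit is clear keep their prior contents.
      Vec256 mask_load(const uint64_t* mem, uint8_t mask_bits, const Vec256& old_value) {
          Vec256 result = old_value;
          for (int lane = 0; lane < 4; ++lane)
              if (mask_bits & (1u << lane))
                  result[lane] = mem[lane];
          return result;
      }

      int main() {
          uint64_t memory[4] = {10, 20, 30, 40};
          Vec256 dest = {0, 0, 0, 0};

          Vec256 loaded = mask_load(memory, 0x5, dest);   // mask 0101: load lanes 0 and 2 only
          for (int lane = 0; lane < 4; ++lane)
              std::printf("lane %d = %llu\n", lane,
                          static_cast<unsigned long long>(loaded[lane]));
          return 0;
      }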
  • Publication number: 20120117420
    Abstract: A method of implementing a mask load or mask store instruction by a processor is provided. The method may include receiving the mask load or mask store instruction, a location of a memory operand and a location of corresponding mask bits associated with the memory operand, breaking the received memory operand into a plurality of sub-operands and executing the mask load or mask store instruction on each of the plurality of sub-operands using a fastpath operation or using microcode, wherein the respective mask load or mask store instruction loads or stores each of the plurality of sub-operands based upon the corresponding mask bits.
    Type: Application
    Filed: November 5, 2010
    Publication date: May 10, 2012
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Kelvin Goveas, Edward McLellan, Steven Beigelmacher, David Kroesche, Michael Clark
  • Publication number: 20060292292
    Abstract: An integrated circuit (203) for use in processing streams of data generally and streams of packets in particular. The integrated circuit (203) includes a number of packet processors (307, 313, 303), a table look up engine (301), a queue management engine (305) and a buffer management engine (315). The packet processors (307, 313, 303) include a receive processor (421), a transmit processor (427) and a RISC core processor (401), all of which are programmable. The receive processor (421) and the core processor (401) cooperate to receive and route packets being received and the core processor (401) and the transmit processor (427) cooperate to transmit packets. Routing is done by using information from the table look up engine (301) to determine a queue (215) in the queue management engine (305) which is to receive a descriptor (217) describing the received packet's payload.
    Type: Application
    Filed: August 25, 2006
    Publication date: December 28, 2006
    Inventors: Thomas Brightman, Andrew Funk, David Husak, Edward McLellan, Andrew Brown, John Brown, James Farrell, Donald Priore, Mark Sankey, Paul Schmitt
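    Illustrative sketch: A minimal C++ model of the receive-side flow the abstract above describes, assuming a packet's header key can be looked up in a table to select a queue and that only a small descriptor for the payload is enqueued. The Descriptor and RoutingFabric types are invented stand-ins for the engines named in the abstract.
      #include <cstdint>
      #include <cstdio>
      #include <queue>
      #include <unordered_map>
      #include <vector>

      // A descriptor points at the payload in buffer memory instead of copying it.
      struct Descriptor {
          uint32_t payload_offset;
          uint16_t payload_len;
      };

      struct RoutingFabric {
          std::unordered_map<uint32_t, int> lookup_table;     // header key -> queue id
          std::vector<std::queue<Descriptor>> queues;

          // Receive path: the table lookup picks the destination queue, then a
          // descriptor for the payload is pushed onto that queue.
          void receive(uint32_t header_key, uint32_t offset, uint16_t len) {
              auto it = lookup_table.find(header_key);
              int qid = (it != lookup_table.end()) ? it->second : 0;   // queue 0 as a default queue
              queues[qid].push({offset, len});
          }
      };

      int main() {
          RoutingFabric fabric;
          fabric.queues.resize(4);
          fabric.lookup_table[0xC0A80001u] = 2;          // flows matching this key go to queue 2

          fabric.receive(0xC0A80001u, 4096, 128);        // payload at offset 4096, 128 bytes long
          std::printf("queue 2 depth = %zu\n", fabric.queues[2].size());
          return 0;
      }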
  • Patent number: 5758142
    Abstract: A predictor which chooses between two or more predictors is described. The predictor includes a first component predictor which operates according to a first algorithm to produce a prediction of an action and a second component predictor which operates according to a second algorithm to produce a prediction of said action. The predictor also includes means, coupled to each of said first and second predictors, for choosing between predictions provided from said predictors to provide a prediction of the action from the predictor. The predictor can be used to predict outcomes of branches, cache hits, prefetched instruction sequences, and so forth.
    Type: Grant
    Filed: May 31, 1994
    Date of Patent: May 26, 1998
    Assignee: Digital Equipment Corporation
    Inventors: Scott McFarling, Simon C. Steely, Jr., Joel Emer, Edward McLellan
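    Illustrative sketch: A minimal C++ model of a chooser that selects between two component predictors, assuming a single 2-bit chooser counter and treating each component's current prediction as a plain boolean. The TournamentPredictor type is invented for illustration; the component predictors themselves are left as stand-ins.
      #include <cstdio>

      // A chooser between two component predictors: a 2-bit counter leans toward
      // whichever component has been more accurate recently.
      struct TournamentPredictor {
          int  chooser = 2;      // 0..3; values >= 2 mean "trust component B"
          bool pred_a  = false;  // component A's current prediction (stand-in)
          bool pred_b  = false;  // component B's current prediction (stand-in)

          bool predict() const { return chooser >= 2 ? pred_b : pred_a; }

          // Once the actual outcome is known, move the chooser toward the
          // component that was correct when exactly one of them was right.
          void update(bool outcome) {
              bool a_ok = (pred_a == outcome);
              bool b_ok = (pred_b == outcome);
              if (b_ok && !a_ok && chooser < 3) ++chooser;
              if (a_ok && !b_ok && chooser > 0) --chooser;
          }
      };

      int main() {
          TournamentPredictor tp;
          tp.pred_a = true;
          tp.pred_b = false;    // the components disagree
          tp.update(true);      // actual outcome: taken, so component A was right
          tp.update(true);
          std::printf("chooser now trusts component: %s\n", tp.chooser >= 2 ? "B" : "A");
          return 0;
      }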