Patents by Inventor Maija Kuusela

Maija Kuusela has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20040260732
    Abstract: An electronic system comprises a processor, memory coupled to the processor, and an application programming interface that causes an embedded garbage collection object to be active. The memory stores one or more objects that selectively have references from root objects. The embedded garbage collection object preferably uses control data to cause objects to be removed from the memory; the removed objects are those that were created while the embedded garbage collection object was active and that do not have references from root objects. (See the sketch after this listing.)
    Type: Application
    Filed: April 22, 2004
    Publication date: December 23, 2004
    Inventors: Gerard Chauvel, Serge Lasserre, Dominique D'Inverno, Maija Kuusela, Gilbert Cabillic, Jean-Philippe Lesot, Michel Banatre, Jean-Paul Routeau, Salam Majoul, Frederic Parain
  • Publication number: 20040261085
    Abstract: In some embodiments, a storage medium comprises application software that performs one or more operations and that directly manages a device. The application software comprises instructions that initialize an application data structure (e.g., an object or array) usable by the application software to manage the device and also comprises instructions that map the application data structure to a memory associated with the device without the use of a device driver. In other embodiments, a method comprises initializing an application data structure to manage a hardware device and mapping the application data structure to a memory associated with the hardware device without the use of a device driver. The application data structure may store a single-dimensional data structure or a multi-dimensional data structure. In some embodiments, the device being managed by the application software may comprise a display and the application software may comprise Java code. (See the sketch after this listing.)
    Type: Application
    Filed: April 22, 2004
    Publication date: December 23, 2004
    Inventors: Gerard Chauvel, Serge Lasserre, Dominique D'Inverno, Maija Kuusela, Gilbert Cabillic, Jean-Philippe Lesot, Michel Banatre, Jean-Paul Routeau, Salam Majoul, Frederic Parain
  • Patent number: 6769052
    Abstract: A digital system and method of operation are provided in which several processors (590n) are connected to a shared cache memory resource (500). A translation lookaside buffer (TLB) (310n) is connected to receive a request virtual address from each respective processor. A set of address regions (pages) is defined within an address space of a back-up memory associated with the cache, and write allocation in the cache is defined on a page basis. Each TLB has a set of entries that correspond to pages of address space, and each entry provides a write allocate attribute (550) for the associated page of address space. During operation of the system, software programs are executed and memory transactions are performed. A write allocate attribute signal (550) is provided with each write transaction request. In this manner, the attribute signal is responsive to the value of the write allocation attribute bit assigned to the address region that includes the address of the write transaction request. (See the sketch after this listing.)
    Type: Grant
    Filed: May 29, 2002
    Date of Patent: July 27, 2004
    Assignee: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
  • Patent number: 6751706
    Abstract: A digital system is provided with several processors, a private level one (L1) cache associated with each processor, a shared level two (L2) cache having several segments per entry, and a level three (L3) physical memory. The shared L2 cache architecture is embodied with 4-way associativity, four segments per entry and four valid and dirty bits. When the L2 cache misses, the penalty to access data within the L3 memory is high. The system supports miss under miss to let a second miss interrupt a segment prefetch being done in response to a first miss. Thus, an interruptible SDRAM to L2-cache prefetch system with miss under miss support is provided. A shared translation look-aside buffer (TLB) is provided for L2 accesses, while a private TLB is associated with each processor. A micro TLB (µTLB) is associated with each resource that can initiate a memory transfer. (See the sketch after this listing.)
    Type: Grant
    Filed: August 17, 2001
    Date of Patent: June 15, 2004
    Assignee: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno, Serge Lasserre
  • Publication number: 20040078552
    Abstract: A processor (e.g., a co-processor) comprises a decoder coupled to a pre-decoder, in which the decoder decodes a current instruction in parallel with the pre-decoder pre-decoding a subsequent instruction. In particular, the pre-decoder examines at least five bytecodes in parallel with the decoder decoding a current instruction. The pre-decoder determines if a subsequent instruction contains a prefix. If a prefix is detected in at least one of the five bytecodes, a program counter skips the prefix and the behavior of the decoder is changed during the decoding of the subsequent instruction. (See the sketch after this listing.)
    Type: Application
    Filed: July 31, 2003
    Publication date: April 22, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Serge Lasserre, Maija Kuusela
  • Publication number: 20040078557
    Abstract: Methods and apparatuses are disclosed for implementing a processor with a split stack. In some embodiments, the processor includes a main stack and a micro-stack. The micro-stack preferably is implemented in the core of the processor, whereas the main stack may be implemented in areas that are external to the core of the processor. Operands are preferably provided to an arithmetic logic unit (ALU) by the micro-stack, and in the case of underflow (micro-stack empty), operands may be fetched from the main stack. Operands are written to the main stack during overflow (micro-stack full) or by explicit flushing of the micro-stack. By optimizing the size of the micro-stack, the number of operands fetched from the main stack may be reduced, and consequently the processor's power consumption may be reduced. (See the sketch after this listing.)
    Type: Application
    Filed: July 31, 2003
    Publication date: April 22, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Maija Kuusela, Serge Lasserre
  • Publication number: 20040078550
    Abstract: A system comprises a first processor, a second processor coupled to the first processor, memory coupled to, and shared by, the first and second processors, and a synchronization unit coupled to the first and second processors. The second processor preferably comprises stack storage that resides in the core of the second processor. Further, the second processor executes stack-based instructions while the first processor executes one or more tasks including, for example, managing the memory via an operating system that executes only on the first processor. Associated methods are also disclosed.
    Type: Application
    Filed: July 31, 2003
    Publication date: April 22, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Serge Lasserre, Maija Kuusela, Dominique D'Inverno
  • Publication number: 20040078522
    Abstract: A cache subsystem may comprise a multi-way set associative cache and a data memory that holds a contiguous block of memory defined by an address stored in a register. Local variables (e.g., Java local variables) may be stored in the data memory. The data memory preferably is adapted to store two groups of local variables. A first group comprises local variables associated with finished methods and a second group comprises local variables associated with unfinished methods. Further, local variables are saved to, or fetched from, external memory upon a context change based on a threshold value differentiating the first and second groups. The threshold value may comprise a threshold address or an allocation bit associated with each of a plurality of lines forming the data memory. (See the sketch after this listing.)
    Type: Application
    Filed: July 31, 2003
    Publication date: April 22, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Serge Lasserre, Maija Kuusela, Gerard Chauvel
  • Publication number: 20040078523
    Abstract: A processor preferably comprises a processing core that generates memory addresses to access a main memory and on which a plurality of methods operate. Each method uses its own set of local variables. The processor also includes a cache subsystem comprising a multi-way set associative cache and a data memory that holds a contiguous block of memory defined by an address stored in a register, wherein local variables are stored in said data memory.
    Type: Application
    Filed: July 31, 2003
    Publication date: April 22, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
  • Publication number: 20040078528
    Abstract: A system comprises a first processor having cache memory and a second processor having cache memory and a coherence buffer that can be enabled and disabled by the first processor. The system also comprises a memory subsystem coupled to the first and second processors. For a write transaction originating from the first processor, the first processor enables the second processor's coherence buffer, and information associated with the first processor's write transaction is stored in the second processor's coherence buffer to maintain data coherency between the first and second processors. (See the sketch after this listing.)
    Type: Application
    Filed: July 31, 2003
    Publication date: April 22, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Serge Lasserre, Maija Kuusela, Dominique D'Inverno
  • Publication number: 20040059893
    Abstract: A processor preferably comprises a processing core that generates memory addresses to access a memory and on which a plurality of methods operate, a cache coupled to the processing core, and a programmable register containing a pointer to a currently active method's set of local variables. The cache may be used to store one or more sets of local variables, each set being used by a method. Further, the cache may include at least two sets of local variables corresponding to different methods, one method calling the other, and the sets of local variables may be separated by a pointer to the set corresponding to the calling method. (See the sketch after this listing.)
    Type: Application
    Filed: July 31, 2003
    Publication date: March 25, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
  • Publication number: 20030217090
    Abstract: A mobile device (10) manages tasks (18) using a scheduler (20) for scheduling tasks on multiple processors (12). To conserve energy, the set of tasks to be scheduled is divided into two (or more) subsets, which are scheduled according to different procedures. In a specific embodiment, the first subset contains tasks with the highest energy consumption deviation based on the processor that executes the task. This subset is scheduled according to a power-aware procedure for scheduling tasks primarily based on energy consumption criteria. If there is no failure, the second subset is scheduled according to a real-time constrained procedure that schedules tasks primarily based on the deadlines associated with the various tasks in the second subset. If there is a failure in either procedure, one or more tasks with the lowest energy consumption deviation are moved from the first subset to the second subset and the scheduling is repeated. (See the sketch after this listing.)
    Type: Application
    Filed: May 20, 2002
    Publication date: November 20, 2003
    Inventors: Gerard Chauvel, Dominique D'Inverno, Serge Lasserre, Maija Kuusela, Gilbert Cabillic, Jean-Philippe Lesot, Michel Banatre, Frederic Parain, Jean-Paul Routeau, Salam Majoul
  • Publication number: 20030101320
    Abstract: A digital system and method of operation are provided in which several processors (590n) are connected to a shared cache memory resource (500). A translation lookaside buffer (TLB) (310n) is connected to receive a request virtual address from each respective processor. A set of address regions (pages) is defined within an address space of a back-up memory associated with the cache, and write allocation in the cache is defined on a page basis. Each TLB has a set of entries that correspond to pages of address space, and each entry provides a write allocate attribute (550) for the associated page of address space. During operation of the system, software programs are executed and memory transactions are performed. A write allocate attribute signal (550) is provided with each write transaction request. In this manner, the attribute signal is responsive to the value of the write allocation attribute bit assigned to the address region that includes the address of the write transaction request.
    Type: Application
    Filed: May 29, 2002
    Publication date: May 29, 2003
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
  • Publication number: 20020073282
    Abstract: A digital system is provided with several processors, a private level one (L1) cache associated with each processor, a shared level two (L2) cache having several segments per entry, and a level three (L3) physical memory. The shared L2 cache architecture is embodied with 4-way associativity, four segments per entry and four valid and dirty bits. When the L2 cache misses, the penalty to access data within the L3 memory is high. The system supports miss under miss to let a second miss interrupt a segment prefetch being done in response to a first miss. Thus, an interruptible SDRAM to L2-cache prefetch system with miss under miss support is provided. A shared translation look-aside buffer (TLB) is provided for L2 accesses, while a private TLB is associated with each processor. A micro TLB (µTLB) is associated with each resource that can initiate a memory transfer.
    Type: Application
    Filed: August 17, 2001
    Publication date: June 13, 2002
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno, Serge Lasserre
  • Publication number: 20020069341
    Abstract: A digital system is provided with several processors, a private level one (L1) cache associated with each processor, a shared level two (L2) cache having several segments per entry, and a level three (L3) physical memory. The shared L2 cache architecture is embodied with 4-way associativity, four segments per entry and four valid and dirty bits. When the L2 cache misses, the penalty to access data within the L3 memory is high. The system supports miss under miss to let a second miss 612 interrupt a segment prefetch 605(1-4) being done in response to a first miss 602. Thus, an interruptible SDRAM to L2-cache prefetch system with miss under miss support is provided.
    Type: Application
    Filed: August 17, 2001
    Publication date: June 6, 2002
    Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
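
The sketches below illustrate, in rough Java form, some of the mechanisms described in the abstracts above. None of this code comes from the patents themselves; class, method, and field names are invented, and each sketch is a simplified reading of the corresponding abstract rather than an implementation of the claimed hardware or software.

For publication 20040260732, the idea is that objects allocated while an embedded garbage collection object is active are tracked as control data, and those without references from root objects are removed when the collector is deactivated. A minimal functional sketch under that reading:

    import java.util.*;

    public class ScopedCollectorSketch {
        static class Obj {
            final String name;
            final List<Obj> refs = new ArrayList<>();
            Obj(String name) { this.name = name; }
        }

        private final Set<Obj> heap = new HashSet<>();
        private final Set<Obj> createdWhileActive = new HashSet<>();   // the "control data"
        private boolean active;

        Obj allocate(String name) {
            Obj o = new Obj(name);
            heap.add(o);
            if (active) createdWhileActive.add(o);   // tag allocations made while the collector is active
            return o;
        }

        void activate() { active = true; }

        // On deactivation, remove tagged objects that no root object can reach.
        void deactivate(Collection<Obj> roots) {
            Set<Obj> reachable = new HashSet<>();
            Deque<Obj> work = new ArrayDeque<>(roots);
            while (!work.isEmpty()) {
                Obj o = work.pop();
                if (reachable.add(o)) work.addAll(o.refs);
            }
            for (Obj o : createdWhileActive) {
                if (!reachable.contains(o)) heap.remove(o);
            }
            createdWhileActive.clear();
            active = false;
        }
    }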
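
For publication 20040261085, one concrete way to picture an application data structure mapped onto device memory is a Java buffer mapped over a framebuffer device file, so that writes to the structure update the display without a per-operation driver call. The device path /dev/fb0, the resolution, and the RGB565 pixel format below are assumptions, not details taken from the patent:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class DirectFramebufferSketch {
        public static void main(String[] args) throws Exception {
            final int width = 320, height = 240, bytesPerPixel = 2;   // assumed RGB565 display
            try (RandomAccessFile fb = new RandomAccessFile("/dev/fb0", "rw");
                 FileChannel channel = fb.getChannel()) {
                MappedByteBuffer pixels =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, width * height * bytesPerPixel);
                // Writing into the mapped data structure is writing into display memory.
                for (int i = 0; i < width * height; i++) {
                    pixels.putShort((short) 0xFFFF);   // fill the screen with white
                }
            }
        }
    }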
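
For patent 6769052 (and its application publication 20030101320), each TLB entry carries a write-allocate attribute for its page, and a write miss only allocates a cache line when that attribute is set. A simplified software model, with page and line sizes assumed:

    import java.util.*;

    public class WriteAllocateSketch {
        static final int PAGE_SHIFT = 12;   // assumed 4 KiB pages
        static final int LINE_SHIFT = 6;    // assumed 64-byte cache lines

        static class TlbEntry {
            final boolean writeAllocate;    // the per-page attribute bit
            TlbEntry(boolean writeAllocate) { this.writeAllocate = writeAllocate; }
        }

        private final Map<Long, TlbEntry> tlb = new HashMap<>();   // virtual page -> entry
        private final Set<Long> cachedLines = new HashSet<>();     // lines currently in the cache

        void map(long virtualPage, boolean writeAllocate) {
            tlb.put(virtualPage, new TlbEntry(writeAllocate));
        }

        void write(long address) {
            long line = address >>> LINE_SHIFT;
            TlbEntry entry = tlb.get(address >>> PAGE_SHIFT);
            boolean hit = cachedLines.contains(line);
            if (!hit && entry != null && entry.writeAllocate) {
                cachedLines.add(line);   // allocate on a write miss only for write-allocate pages
            }
            // On a miss to a page without write-allocate, the write would go straight to memory.
        }
    }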
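
For patent 6751706 (and the related publications 20020073282 and 20020069341), a miss starts a prefetch of the four segments of an L2 entry, and a second miss may interrupt that prefetch so the newer request is serviced first; the interrupted prefetch resumes afterwards. A scheduling-only sketch, with the external memory access stubbed:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class MissUnderMissSketch {
        static class Prefetch {
            final long entry;
            int nextSegment;   // 0..3, starting with the segment that missed
            int fetched;
            Prefetch(long entry, int firstSegment) { this.entry = entry; this.nextSegment = firstSegment; }
        }

        private final Deque<Prefetch> pending = new ArrayDeque<>();

        // A demand miss preempts whatever prefetch is currently in flight.
        void onMiss(long entry, int segment) {
            pending.push(new Prefetch(entry, segment));
        }

        // One segment is fetched per step; an interrupted prefetch resumes once
        // the newer miss has received all four of its segments.
        void step() {
            Prefetch p = pending.peek();
            if (p == null) return;
            fetchSegmentFromExternalMemory(p.entry, p.nextSegment);
            p.nextSegment = (p.nextSegment + 1) & 3;
            if (++p.fetched == 4) pending.pop();
        }

        private void fetchSegmentFromExternalMemory(long entry, int segment) {
            // placeholder for the SDRAM (L3) access
        }
    }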
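
For publication 20040078552, the pre-decoder looks ahead at upcoming bytecodes while the current instruction is decoded; when it finds a prefix, the program counter skips it and the decoding of the following opcode changes. The sequential loop below only mimics that behaviour functionally (the hardware described examines several bytecodes in parallel), and the JVM "wide" opcode 0xC4 is used here as the example prefix:

    public class PrefixPredecodeSketch {
        static final int WIDE_PREFIX = 0xC4;   // example prefix: the JVM "wide" opcode

        static void run(int[] bytecode) {
            int pc = 0;
            boolean widePrefix = false;
            while (pc < bytecode.length) {
                if (bytecode[pc] == WIDE_PREFIX) {
                    widePrefix = true;   // change decoder behaviour for the next opcode
                    pc++;                // the program counter skips the prefix itself
                    continue;
                }
                decode(bytecode[pc], widePrefix);
                widePrefix = false;
                pc++;
            }
        }

        static void decode(int opcode, boolean wide) {
            // A wide-prefixed opcode would, for example, read a 16-bit rather
            // than an 8-bit local-variable index.
        }
    }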
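
For publication 20040078557, operands live in a small micro-stack inside the core; a push spills the oldest micro-stack entry to the main stack on overflow, a pop refills from the main stack on underflow, and the micro-stack can be flushed explicitly. A functional sketch, with the micro-stack depth assumed:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class SplitStackSketch {
        private static final int MICRO_DEPTH = 8;                    // assumed micro-stack size
        private final Deque<Integer> microStack = new ArrayDeque<>();
        private final Deque<Integer> mainStack = new ArrayDeque<>(); // stands in for external memory

        void push(int operand) {
            if (microStack.size() == MICRO_DEPTH) {
                mainStack.push(microStack.removeLast());   // overflow: spill the oldest operand
            }
            microStack.push(operand);
        }

        int pop() {
            if (microStack.isEmpty()) {
                return mainStack.pop();                    // underflow: fetch from the main stack
            }
            return microStack.pop();
        }

        void flush() {                                     // explicit flush of the micro-stack
            while (!microStack.isEmpty()) {
                mainStack.push(microStack.removeLast());
            }
        }
    }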
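
For publication 20040078522, one plausible reading is that local-variable frames occupy a contiguous data memory and a threshold separates frames of methods that have returned from frames of methods still running, so a context change only writes back the latter. A sketch under that reading, with sizes and layout assumed:

    public class LocalVariableSpillSketch {
        private final int[] dataMemory = new int[256];      // contiguous in-core frame storage
        private final int[] externalMemory = new int[256];  // stand-in for main memory
        private int threshold;                               // below this: locals of unfinished methods

        int enterMethod(int localCount) {                    // allocate a frame, return its base
            int base = threshold;
            threshold += localCount;
            return base;
        }

        void exitMethod(int base) {                          // the frame now belongs to a finished method
            threshold = base;
        }

        void setLocal(int base, int index, int value) {
            dataMemory[base + index] = value;
        }

        // On a context change, only the locals of methods that have not yet
        // returned are written back; entries at or above the threshold belonged
        // to methods that already finished.
        void saveOnContextSwitch() {
            System.arraycopy(dataMemory, 0, externalMemory, 0, threshold);
        }
    }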
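
For publication 20040078528, the first processor enables a coherence buffer in the second processor before writing shared data; the addresses it writes are recorded there, and the second processor drains the buffer to invalidate (or update) its own cached copies. A software analogy, with all names assumed:

    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;

    public class CoherenceBufferSketch {
        static class CoherenceBuffer {
            private boolean enabled;
            private final Queue<Long> writtenAddresses = new ArrayDeque<>();
            void enable()  { enabled = true; }
            void disable() { enabled = false; }
            void record(long address) { if (enabled) writtenAddresses.add(address); }
            Long drain()   { return writtenAddresses.poll(); }
        }

        static class SecondProcessor {
            final CoherenceBuffer coherenceBuffer = new CoherenceBuffer();
            final Map<Long, Long> cache = new HashMap<>();
            void applyCoherenceTraffic() {
                Long address;
                while ((address = coherenceBuffer.drain()) != null) {
                    cache.remove(address);   // invalidate the stale copy
                }
            }
        }

        static class FirstProcessor {
            void writeShared(SecondProcessor other, Map<Long, Long> sharedMemory, long address, long value) {
                other.coherenceBuffer.enable();
                sharedMemory.put(address, value);          // the actual write to shared memory
                other.coherenceBuffer.record(address);     // keep the second processor coherent
            }
        }
    }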
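
For publication 20040059893, a register points at the currently active method's local variables, and when one method calls another, the caller's pointer is saved so the two sets of locals are separated by a pointer back to the caller's set. A minimal sketch, with the layout assumed and the outermost frame taken to hold no locals:

    public class LocalVariablePointerSketch {
        private final int[] localVariableArea = new int[256];
        private int currentBase;   // the "programmable register": base of the active method's locals
        private int top;

        void call(int calleeLocalCount) {
            localVariableArea[top] = currentBase;   // saved pointer separating the two sets
            currentBase = top + 1;
            top = currentBase + calleeLocalCount;
        }

        void ret() {
            top = currentBase - 1;
            currentBase = localVariableArea[top];   // restore the caller's set of locals
        }

        int getLocal(int index)             { return localVariableArea[currentBase + index]; }
        void setLocal(int index, int value) { localVariableArea[currentBase + index] = value; }
    }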
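
For publication 20030217090, the task set is split by per-processor energy consumption deviation: the high-deviation subset is scheduled by an energy-aware procedure, the rest by a deadline-driven one, and on failure the lowest-deviation task is moved to the second subset and the attempt is repeated. The loop below captures only that outer retry structure; both scheduling procedures are stubs:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class TwoSubsetSchedulerSketch {
        static class Task {
            final String name;
            final double energyDeviation;   // spread in energy cost across candidate processors
            final double deadline;
            Task(String name, double energyDeviation, double deadline) {
                this.name = name; this.energyDeviation = energyDeviation; this.deadline = deadline;
            }
        }

        static boolean schedule(List<Task> tasks, int firstSubsetSize) {
            List<Task> byDeviation = new ArrayList<>(tasks);
            byDeviation.sort(Comparator.comparingDouble((Task t) -> t.energyDeviation).reversed());

            int k = Math.min(firstSubsetSize, byDeviation.size());
            List<Task> energySubset = new ArrayList<>(byDeviation.subList(0, k));
            List<Task> deadlineSubset = new ArrayList<>(byDeviation.subList(k, byDeviation.size()));

            while (true) {
                boolean ok = schedulePowerAware(energySubset) && scheduleByDeadline(deadlineSubset);
                if (ok) return true;
                if (energySubset.isEmpty()) return false;   // nothing left to demote: scheduling fails
                // Move the lowest-deviation task into the deadline-driven subset and retry.
                deadlineSubset.add(energySubset.remove(energySubset.size() - 1));
            }
        }

        static boolean schedulePowerAware(List<Task> subset) { return true; }   // placeholder
        static boolean scheduleByDeadline(List<Task> subset) { return true; }   // placeholder (e.g. EDF)
    }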