Patents by Inventor Susan Carrie
Susan Carrie has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10642709
Abstract: A method for refining multithread software executed on a processor chip of a computer system. The envisaged processor chip has at least one processor core and a memory cache coupled to the processor core and configured to cache at least some data read from memory. The method includes, in logic distinct from the processor core and coupled to the memory cache, observing a sequence of operations of the memory cache and encoding a sequenced data stream that traces the sequence of operations observed.
Type: Grant
Filed: April 19, 2011
Date of Patent: May 5, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Susan Carrie, Vijay Balakrishnan
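
The mechanism above amounts to trace logic sitting beside the cache that tags each observed operation with a sequence number. The Python sketch below is a rough software model of that idea only, not the patented hardware; the `CacheTraceEncoder` class and its field names are invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CacheOp:
    """One observed cache operation, e.g. a read, a line fill, or an eviction."""
    kind: str      # "read", "write", "fill", "evict"
    address: int   # cache-line address seen on the cache interface

class CacheTraceEncoder:
    """Logic (modeled here in software) that observes the cache's operations in
    order and emits a sequenced trace stream for later analysis."""
    def __init__(self) -> None:
        self.sequence = 0
        self.stream: List[Tuple[int, str, int]] = []

    def observe(self, op: CacheOp) -> None:
        # Tag every observed operation with a monotonically increasing sequence
        # number so the ordering of operations can be reconstructed afterwards.
        self.stream.append((self.sequence, op.kind, op.address))
        self.sequence += 1

# Example: capturing a short interleaving of cache operations.
encoder = CacheTraceEncoder()
for op in [CacheOp("read", 0x1000), CacheOp("fill", 0x1000), CacheOp("write", 0x2040)]:
    encoder.observe(op)
print(encoder.stream)
```
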
-
Patent number: 9563369
Abstract: Systems and methods for applying a fine-grained QoS logic are provided. The system may include a memory controller, the memory controller configured to receive memory access requests from a plurality of masters via a bus fabric. The memory controller determines the priority class of each of the plurality of masters, and further determines the amount of memory data bus bandwidth consumed by each master on the memory data bus. Based on the priority class assigned to each of the masters and the amount of memory data bus bandwidth consumed by each master, the memory controller applies a fine-grained QoS logic to compute a schedule for the memory requests. Based on this schedule, the memory controller converts the memory requests to memory commands, sends the memory commands to a memory device via a memory command bus, and receives a response from the memory device via a memory data bus.
Type: Grant
Filed: April 14, 2014
Date of Patent: February 7, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Nhon Toai Quach, Susan Carrie, Jeffrey Andrews, John Sell, Kevin Po
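
As a loose illustration of the scheduling idea in this abstract (not the actual controller logic), the sketch below orders pending requests first by priority class and then by how much data bus bandwidth each master has already consumed; the tuple layout and the `schedule` helper are assumptions made for the example.

```python
import heapq

def schedule(requests):
    """requests: list of (master, priority_class, bytes_consumed).
    Issue order: primarily by priority class (lower value = more urgent), then
    favor masters that have consumed less memory data bus bandwidth so far."""
    heap = [(prio, used, master) for (master, prio, used) in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(schedule([
    ("GPU",   0, 4096),   # high priority, but already heavy on the data bus
    ("CPU",   0, 1024),   # high priority, lighter consumer: goes first
    ("audio", 1,  128),   # lower priority class: served after class 0
]))  # -> ['CPU', 'GPU', 'audio']
```
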
-
Patent number: 9424490
Abstract: Embodiments are disclosed that relate to processing image pixels. For example, one disclosed embodiment provides a system for classifying pixels comprising retrieval logic; a pixel storage allocation including a plurality of pixel slots, each pixel slot being associated individually with a pixel, where the retrieval logic is configured to cause the pixels to be allocated into the pixel slots in an input sequence; pipelined processing logic configured to output, for each of the pixels, classification information associated with the pixel; and scheduling logic configured to control dispatches from the pixel slots to the pipelined processing logic, where the scheduling logic and pipelined processing logic are configured to act in concert to generate the classification information for the pixels in an output sequence that differs from and is independent of the input sequence.
Type: Grant
Filed: June 27, 2014
Date of Patent: August 23, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adam James Muff, John Allen Tardif, Susan Carrie, Mark J. Finocchio, Kyungsuk David Lee, Christopher Douglas Edmonds, Randy Crane
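
A minimal model of the slot and scheduler arrangement described above, with a software stand-in for the pipeline: pixels are loaded into slots in input order, while results retire in an unrelated order. The `classify` and `run_pipeline` names are illustrative only.

```python
import random

def classify(pixel):
    """Stand-in for the pipelined classification logic."""
    return f"class_{pixel % 3}"

def run_pipeline(pixels, seed=0):
    """Pixels are allocated to slots in input order, but classification results
    are produced in whatever order the pipeline retires them, independent of
    the input sequence."""
    slots = {slot: p for slot, p in enumerate(pixels)}   # allocation in input order
    retire_order = list(slots)
    random.Random(seed).shuffle(retire_order)            # out-of-order completion
    return [(slot, slots[slot], classify(slots[slot])) for slot in retire_order]

for slot, pixel, label in run_pipeline([10, 11, 12, 13]):
    print(f"slot {slot}: pixel {pixel} -> {label}")
```
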
-
Patent number: 9292767
Abstract: A computing device for use in decision tree computation is provided. The computing device may include a software program executed by a processor using portions of memory of the computing device, the software program being configured to receive user input from a user input device associated with the computing device, and in response, to perform a decision tree task. The computing device may further include a decision tree computation device implemented in hardware as a logic circuit distinct from the processor, and which is linked to the processor by a communications interface. The decision tree computation device may be configured to receive an instruction to perform a decision tree computation associated with the decision tree task from the software program, process the instruction, and return a result to the software program via the communication interface.
Type: Grant
Filed: January 5, 2012
Date of Patent: March 22, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jason Oberg, Ken Eguro, Victor Tirva, Padma Parthasarathy, Susan Carrie, Alessandro Forin, Jonathan Chow
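
To make the software/hardware split concrete, the sketch below walks a toy decision tree the way a dedicated evaluator might, with the software side simply handing features across an interface and receiving a label back. The flat node table and function names are hypothetical, not taken from the patent.

```python
# A tiny decision tree encoded as a flat table, the sort of structure a
# hardware evaluator could walk: (feature_index, threshold, left, right)
# for internal nodes, or ("leaf", label) for leaves.
NODES = {
    0: (0, 0.5, 1, 2),
    1: ("leaf", "background"),
    2: (1, 0.25, 3, 4),
    3: ("leaf", "hand"),
    4: ("leaf", "head"),
}

def evaluate(features):
    """Stand-in for the decision tree computation device: walk the node table
    until a leaf is reached and return its label."""
    node = NODES[0]
    while node[0] != "leaf":
        feature_index, threshold, left, right = node
        node = NODES[left] if features[feature_index] <= threshold else NODES[right]
    return node[1]

def software_program(features):
    """Software side: hand the request to the 'device' and get a result back.
    In hardware this call would cross a communications interface."""
    return evaluate(features)

print(software_program([0.7, 0.1]))   # -> 'hand'
```
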
-
Publication number: 20150379376
Abstract: Embodiments are disclosed that relate to processing image pixels. For example, one disclosed embodiment provides a system for classifying pixels comprising retrieval logic; a pixel storage allocation including a plurality of pixel slots, each pixel slot being associated individually with a pixel, where the retrieval logic is configured to cause the pixels to be allocated into the pixel slots in an input sequence; pipelined processing logic configured to output, for each of the pixels, classification information associated with the pixel; and scheduling logic configured to control dispatches from the pixel slots to the pipelined processing logic, where the scheduling logic and pipelined processing logic are configured to act in concert to generate the classification information for the pixels in an output sequence that differs from and is independent of the input sequence.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Adam James Muff, John Allen Tardif, Susan Carrie, Mark J. Finocchio, Kyungsuk David Lee, Christopher Douglas Edmonds, Randy Crane
-
Publication number: 20150293709
Abstract: Systems and methods for applying a fine-grained QoS logic are provided. The system may include a memory controller, the memory controller configured to receive memory access requests from a plurality of masters via a bus fabric. The memory controller determines the priority class of each of the plurality of masters, and further determines the amount of memory data bus bandwidth consumed by each master on the memory data bus. Based on the priority class assigned to each of the masters and the amount of memory data bus bandwidth consumed by each master, the memory controller applies a fine-grained QoS logic to compute a schedule for the memory requests. Based on this schedule, the memory controller converts the memory requests to memory commands, sends the memory commands to a memory device via a memory command bus, and receives a response from the memory device via a memory data bus.
Type: Application
Filed: April 14, 2014
Publication date: October 15, 2015
Applicant: Microsoft Corporation
Inventors: Nhon Toai Quach, Susan Carrie, Jeffrey Andrews, John Sell, Kevin Po
-
Publication number: 20130179377
Abstract: A computing device for use in decision tree computation is provided. The computing device may include a software program executed by a processor using portions of memory of the computing device, the software program being configured to receive user input from a user input device associated with the computing device, and in response, to perform a decision tree task. The computing device may further include a decision tree computation device implemented in hardware as a logic circuit distinct from the processor, and which is linked to the processor by a communications interface. The decision tree computation device may be configured to receive an instruction to perform a decision tree computation associated with the decision tree task from the software program, process the instruction, and return a result to the software program via the communication interface.
Type: Application
Filed: January 5, 2012
Publication date: July 11, 2013
Inventors: Jason Oberg, Ken Eguro, Victor Tirva, Padma Parthasarathy, Susan Carrie, Alessandro Forin, Jonathan Chow
-
Publication number: 20120272011
Abstract: A method for refining multithread software executed on a processor chip of a computer system. The envisaged processor chip has at least one processor core and a memory cache coupled to the processor core and configured to cache at least some data read from memory. The method includes, in logic distinct from the processor core and coupled to the memory cache, observing a sequence of operations of the memory cache and encoding a sequenced data stream that traces the sequence of operations observed.
Type: Application
Filed: April 19, 2011
Publication date: October 25, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Susan Carrie, Vijay Balakrishnan
-
Patent number: 8239866
Abstract: Software rendering and fine grained parallelism are utilized to reduce/avoid memory latency in a multi-processor (MP) system. According to one embodiment, the management of the transfer of data from one processor to another in the MP environment is moved into a low overhead hardware system. The low overhead hardware system may be a FIFO ("First In First Out") hardware control. Each FIFO may be real or virtual.
Type: Grant
Filed: April 24, 2009
Date of Patent: August 7, 2012
Assignee: Microsoft Corporation
Inventor: Susan Carrie
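
A small producer/consumer sketch of the FIFO handoff idea, with a bounded software queue standing in for the real or virtual hardware FIFO; the tile workload and helper names are invented for the example.

```python
from queue import Queue
from threading import Thread

def producer(fifo, tiles):
    """First processor: pushes rendered work into the FIFO and only waits if
    the FIFO is full, rather than synchronizing with the consumer in software."""
    for tile in tiles:
        fifo.put(("tile", tile))
    fifo.put(("done", None))

def consumer(fifo, out):
    """Second processor: pulls work from the FIFO as soon as it is available."""
    while True:
        kind, tile = fifo.get()
        if kind == "done":
            break
        out.append(tile * 2)    # stand-in for the downstream rendering stage

fifo = Queue(maxsize=4)         # small, bounded FIFO between the two processors
results = []
worker = Thread(target=consumer, args=(fifo, results))
worker.start()
producer(fifo, range(8))
worker.join()
print(results)
```
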
-
Publication number: 20120159090
Abstract: Versions of a multimedia computer system architecture are described which satisfy quality of service (QoS) guarantees for multimedia applications such as game applications while allowing platform resources, hardware resources in particular, to scale up or down over time. Computing resources of the computer system are partitioned into a platform partition and an application partition, each including its own central processing unit (CPU) and, optionally, graphics processing unit (GPU). To enhance scalability of resources up or down, the platform partition includes one or more hardware resources which are only accessible by the multimedia application via a software interface. Additionally, outside the partitions may be other resources shared by the partitions or which provide general purpose computing resources.
Type: Application
Filed: December 16, 2010
Publication date: June 21, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Jeffrey Andrews, John V. Sell, Susan Carrie, Mark S. Grossman, John Tardif, Nicholas R. Baker
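
The partition split can be pictured as in the sketch below, where the title code only reaches platform-partition hardware through a service object; the partition contents and the `PlatformServices` API are assumptions for illustration, not the documented interface.

```python
# Two partitions of the console's compute resources. The application may use
# its own partition directly, but platform-partition hardware is reached only
# through a software interface, so it can change across hardware generations
# without breaking the QoS contract seen by the application.
PLATFORM_PARTITION    = {"cpu_cores": 2, "gpu": "platform-gpu"}
APPLICATION_PARTITION = {"cpu_cores": 6, "gpu": "title-gpu"}

class PlatformServices:
    """The software interface in front of the platform partition."""
    def decode_audio(self, stream):
        # Which platform core or fixed-function block runs this can scale up
        # or down across revisions; the caller never sees the hardware.
        return f"decoded {stream} on {PLATFORM_PARTITION['cpu_cores']} platform cores"

class GameTitle:
    def __init__(self, services: PlatformServices):
        self.services = services          # no direct handle to platform hardware

    def frame(self):
        return [
            f"simulate world on {APPLICATION_PARTITION['cpu_cores']} app cores",
            self.services.decode_audio("voice-chat"),
        ]

print(GameTitle(PlatformServices()).frame())
```
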
-
Publication number: 20100275208
Abstract: Software rendering and fine grained parallelism are utilized to reduce/avoid memory latency in a multi-processor (MP) system. According to one embodiment, the management of the transfer of data from one processor to another in the MP environment is moved into a low overhead hardware system. The low overhead hardware system may be a FIFO ("First In First Out") hardware control. Each FIFO may be real or virtual.
Type: Application
Filed: April 24, 2009
Publication date: October 28, 2010
Applicant: MICROSOFT CORPORATION
Inventor: Susan Carrie
-
Patent number: 7447777
Abstract: Systems and related methods are described for handling one or more resource requests. A protocol engine receives a resource request in accordance with a prescribed protocol, and a classification engine determines a desired class of service for the request. An analysis engine optionally analyzes the request, and, responsive thereto, determines a desired sub-class of service for the request. A policy engine then allocates a resource to the request responsive to one or both of the desired class of service and the desired sub-class of service.
Type: Grant
Filed: February 11, 2002
Date of Patent: November 4, 2008
Assignee: Extreme Networks
Inventors: Ratinder Paul Singh Ahuja, Susan Carrie, Chien C. Chou, Erik De La Iglesia, Miguel Gomez, Liang Liu, Ricky K. Lowe, Rahoul Puri, Kiet Tran, Mark Aaron Wallace, Wei Wang, Todd E. Wayne, Hui Zhang
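
A compact sketch of the engine pipeline described above, under the assumption that class and sub-class simply index into a table of resource pools; the request fields, pool names, and helper functions are made up for the example.

```python
def classify(request):
    """Classification engine: pick a class of service from the request's fields."""
    return "premium" if request.get("client_tier") == "gold" else "standard"

def analyze(request):
    """Analysis engine (optional): inspect the request for a sub-class of service."""
    return "large" if request.get("content_length", 0) > 1_000_000 else "small"

def allocate(request, pools):
    """Policy engine: choose a resource from the class and sub-class of service."""
    cls, sub = classify(request), analyze(request)
    return pools.get((cls, sub)) or pools.get((cls, None))

POOLS = {
    ("premium", "large"): "server-pool-A",
    ("premium", None):    "server-pool-B",
    ("standard", None):   "server-pool-C",
}
print(allocate({"client_tier": "gold", "content_length": 5_000_000}, POOLS))  # server-pool-A
print(allocate({"client_tier": "bronze"}, POOLS))                             # server-pool-C
```
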
-
Publication number: 20070006040
Abstract: A debugging architecture includes a set of debug counters for counting one or more events based on a set of signals from a device being monitored. The architecture provides for observing the outputs of the debug counters during operation of the device. The outputs of the counters are provided to an output bus (e.g., a Debug Bus) via an output bus interface during operation of the device being monitored. A data gathering system can access the output bus in order to gather the data from the counters for analysis.
Type: Application
Filed: June 30, 2005
Publication date: January 4, 2007
Applicant: Microsoft Corporation
Inventors: Susan Carrie, Jeffrey Andrews
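
The counter-plus-bus arrangement can be modeled roughly as below; the `DebugCounter` and `DebugBus` classes and the event names are illustrative stand-ins for the monitored signals and the output bus interface.

```python
class DebugCounter:
    """Counts occurrences of one monitored event (e.g. cache misses)."""
    def __init__(self, event):
        self.event = event
        self.count = 0

    def observe(self, signal):
        if signal == self.event:
            self.count += 1

class DebugBus:
    """Output bus interface: exposes counter values while the device keeps running."""
    def __init__(self, counters):
        self.counters = counters

    def sample(self):
        return {c.event: c.count for c in self.counters}

counters = [DebugCounter("cache_miss"), DebugCounter("stall")]
bus = DebugBus(counters)
for signal in ["cache_miss", "stall", "cache_miss", "retire"]:
    for c in counters:
        c.observe(signal)
print(bus.sample())   # {'cache_miss': 2, 'stall': 1}
```
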
-
Patent number: 7152124
Abstract: A network switch architected using multiple processor engines includes a method and system for ensuring temporal consistency of data and resources as packet traffic flows through the switch. Upon receiving a connection request, the switch internally associates a semaphore with the connection. The semaphore is distributed and stored at the processing engines. Each of the processing engines performs specific operations relating to incoming packets associated with the connection. Internal messages are passed between the processing engines to coordinate and control these operations. Some of these messages can include a semaphore value. Upon receiving such a message, a processing engine compares the semaphore value to a stored semaphore. Packets relating to the connection identified by the message are processed if there is a match between the semaphores. Also, the semaphore value can be moved from one processing engine to another in order to control the allocation and de-allocation of resources.
Type: Grant
Filed: February 11, 2002
Date of Patent: December 19, 2006
Assignee: Extreme Networks
Inventors: Rahoul Puri, Susan Carrie, Erik de la Iglesia
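
A toy model of the semaphore check described above: an engine only processes a message whose semaphore matches the one it stored for that connection, and a stale value is dropped once the connection's resources are recycled. The class and method names are invented for the sketch.

```python
class ProcessingEngine:
    """One engine in the switch: it stores the semaphore handed out when the
    connection was set up and only acts on messages carrying a matching value."""
    def __init__(self, name):
        self.name = name
        self.semaphores = {}          # connection id -> stored semaphore

    def bind(self, conn_id, semaphore):
        self.semaphores[conn_id] = semaphore

    def handle(self, conn_id, semaphore, packet):
        if self.semaphores.get(conn_id) != semaphore:
            return f"{self.name}: dropped stale packet for {conn_id}"
        return f"{self.name}: processed {packet} on {conn_id}"

engine = ProcessingEngine("flow-engine")
engine.bind("conn-7", semaphore=42)
print(engine.handle("conn-7", 42, "SYN-ACK"))    # processed
engine.bind("conn-7", semaphore=43)              # connection resources recycled
print(engine.handle("conn-7", 42, "old data"))   # dropped stale packet
```
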
-
Publication number: 20060190703
Abstract: Detecting a stall condition associated with processor instructions within one or more threads and generating a no-dispatch condition. The stall condition can be detected by hardware and/or software before and/or during processor instruction execution. The no-dispatch condition can be associated with a number of processing cycles and an instruction from a particular thread. As a result of generating the no-dispatch condition, processor instructions from other threads may be dispatched into the execution slot of an available execution pipeline. After a period of time, the instruction associated with the stall can be fetched and executed.
Type: Application
Filed: February 24, 2005
Publication date: August 24, 2006
Applicant: Microsoft Corporation
Inventor: Susan Carrie
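
A simplified software model of the no-dispatch idea, assuming a simple in-order picker: a thread under a no-dispatch hold is skipped until its hold expires, letting other threads use the execution slot. The data layout here is hypothetical.

```python
def dispatch(threads, no_dispatch, cycle):
    """Pick an instruction for this cycle, skipping any thread whose front
    instruction is under a no-dispatch condition until its hold expires."""
    for tid, instructions in threads.items():
        if not instructions:
            continue
        if no_dispatch.get(tid, 0) > cycle:      # thread still held off
            continue
        return tid, instructions.pop(0)
    return None, None

threads = {"T0": ["load r1", "add r2"], "T1": ["mul r3", "sub r4"]}
no_dispatch = {"T0": 3}   # T0's load stalled: hold it for cycles 0..2
for cycle in range(4):
    tid, insn = dispatch(threads, no_dispatch, cycle)
    print(f"cycle {cycle}: {tid} -> {insn}")
```
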
-
Patent number: 5671235
Abstract: In a semiconductor device having a processor for processing digital data and RAM for storing the digital data, an apparatus for accessing the state of the digital data stored in the RAM during system operation for testing purposes. A stall controller is used to stall the processor at a specified point of execution during system operation. The state of the processor at that particular point is shifted out of the registers by using a scan chain and temporarily stored into a buffer. A memory controller then instructs the RAM to write the data of interest into a specific set of test registers. The scan chain is routed through these test registers so that it can serially shift out the data written from the RAM. Thereby, the RAM contents can be accessed with minimal overhead by using the scan chain. Once the data has been shifted out from test registers, the current state of the processor that was stored into the buffer is fed back to the processor.
Type: Grant
Filed: December 4, 1995
Date of Patent: September 23, 1997
Assignee: Silicon Graphics, Inc.
Inventors: Derek Bosch, Susan Carrie
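
The stall/scan-out/restore sequence can be modeled very loosely in software, as below; the function merely mimics the ordering of steps (park the processor state, copy RAM words into test registers, shift them out, restore the state) and is not a description of the actual scan chain.

```python
def read_ram_via_scan_chain(processor_state, ram, addresses):
    """Rough software model of the flow in the abstract: stall the processor,
    shift its register state out into a buffer, have the memory controller
    copy the RAM words of interest into test registers, shift those out, then
    restore the saved processor state so execution can resume."""
    buffer = list(processor_state)                 # state shifted out and parked
    test_registers = [ram[a] for a in addresses]   # RAM -> test registers
    captured = list(test_registers)                # shifted out serially
    processor_state[:] = buffer                    # state shifted back in
    return captured

regs = [0xDEAD, 0xBEEF, 0x1234]
ram = {0x40: 7, 0x44: 9, 0x48: 11}
print(read_ram_via_scan_chain(regs, ram, [0x40, 0x48]))   # [7, 11]
print(regs)                                               # processor state intact
```
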
-
Patent number: 5201030
Abstract: The distance between the intensity value of the base value (that is, the closest quantized intensity value less than the input intensity value) and the input intensity value is adjusted according to a mapping function between the size of the interval between the intensity values of the base value and the base value +1 (i.e., the next larger quantized intensity value) and the range of values in the dither matrix. By adjusting the distance between the base value and the input intensity value, the correct proportion of base values and base values +1 is maintained regardless of the difference in size of the interval and the range of the dither matrix, thereby insuring that the intermediate intensity values between quantized values are accurately simulated.
Type: Grant
Filed: July 31, 1992
Date of Patent: April 6, 1993
Assignee: Sun Microsystems, Inc.
Inventor: Susan Carrie
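
Since this entry is essentially an algorithm, here is a small ordered-dithering sketch that applies the described rescaling: the input's distance above its base level is mapped onto the dither matrix's range before comparing against the threshold, so unevenly spaced levels still dither in the right proportions. The level table and the 2x2 matrix are example values, not taken from the patent.

```python
# Quantized intensity levels need not be evenly spaced; the distance into the
# interval is rescaled to the dither matrix's range so the fraction of pixels
# rounded up still matches the input's position within its interval.
LEVELS = [0, 40, 110, 255]      # hypothetical quantized intensities
DITHER = [[0, 2], [3, 1]]       # 2x2 Bayer-style matrix, values 0..3
DITHER_RANGE = 4

def dither_pixel(intensity, x, y):
    # Find the base level (closest quantized value <= intensity) and its interval.
    base_index = max(i for i, v in enumerate(LEVELS) if v <= intensity)
    if base_index == len(LEVELS) - 1:
        return LEVELS[base_index]
    base, nxt = LEVELS[base_index], LEVELS[base_index + 1]
    # Map the distance into the interval onto the dither matrix's range.
    scaled = (intensity - base) * DITHER_RANGE / (nxt - base)
    threshold = DITHER[y % 2][x % 2]
    return nxt if scaled > threshold else base

row = [dither_pixel(75, x, 0) for x in range(8)]
print(row)   # a mix of 40s and 110s whose average approximates intensity 75
```
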
-
Patent number: 5091717
Abstract: A computer system comprising a display memory, a window identification memory, logic circuitry for ascertaining that information to be stored at each position of the display memory is in the correct window by comparing the window number in the window identification memory with the window number of information to be sent to the display memory, and a window identification look-up table activated by window identification signals for providing an output to select the number of bits of color information to be output from the display memory to provide color information for an output device.
Type: Grant
Filed: May 1, 1989
Date of Patent: February 25, 1992
Assignee: Sun Microsystems, Inc.
Inventors: Susan Carrie, Serdar Ergene, James Gosling
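
A rough model of the window-checked write path and the per-window lookup: a pixel write only lands when its window number matches the one recorded for that position, and the lookup table says how many bits of color that window's pixels use. The arrays and values are illustrative.

```python
# Window identification memory: one window number per pixel position, so
# overlapped windows cannot scribble on each other's pixels.
WIDTH, HEIGHT = 4, 2
window_id_mem = [[1, 1, 2, 2], [1, 1, 2, 2]]        # which window owns each pixel
display_mem   = [[0] * WIDTH for _ in range(HEIGHT)]

# Window identification look-up table: bits of color per window.
WINDOW_LUT = {1: {"color_bits": 8}, 2: {"color_bits": 24}}

def write_pixel(x, y, window, value):
    if window_id_mem[y][x] == window:       # comparison done by the logic circuitry
        display_mem[y][x] = value
        return True
    return False

print(write_pixel(0, 0, window=1, value=0x55))   # True: window 1 owns (0, 0)
print(write_pixel(0, 0, window=2, value=0xAA))   # False: rejected, wrong window
print(WINDOW_LUT[window_id_mem[0][2]])           # window 2 pixels use 24-bit color
```
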
-
Patent number: 5016166
Abstract: The system of the present invention provides for the synchronization of access devices connected through the system's memory management unit and is particularly useful in a multi-tasking computer system in which multiple processes access the same device. In the method and apparatus of the present invention, devices that are connected to the system through the MMU are controlled using the page fault mechanism of the MMU and the page fault handler in each segment. Addresses are allocated in the process address space for each process to provide for the addressing of the devices and device queues connected through the MMU, such that one device or one device queue is mapped into one segment of each process address space that will access the device. The "valid bits" associated with each page in a segment are turned on/off by the process or operating system in order to control the device.
Type: Grant
Filed: September 28, 1989
Date of Patent: May 14, 1991
Assignee: Sun Microsystems, Inc.
Inventors: James Van Loo, Susan Carrie, Jerald Evans
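
The valid-bit mechanism can be sketched as below, with one mapping object per process and a trivial policy standing in for the page fault handler; this is a simulation of the idea only, not operating system code.

```python
class ProcessMapping:
    """One process's page mapping onto a shared device queue. The valid bit
    decides whether that process's accesses go straight through or trap."""
    def __init__(self, process):
        self.process = process
        self.valid = False

def access_device(mapping, device_log):
    if not mapping.valid:
        # Page fault: the handler runs before the access can complete. Here
        # the policy simply suspends the access until the operating system
        # turns the valid bit back on.
        return f"{mapping.process}: page fault, access suspended"
    device_log.append(mapping.process)
    return f"{mapping.process}: command queued"

a, b = ProcessMapping("proc-A"), ProcessMapping("proc-B")
log = []
a.valid = True                      # OS grants the device to proc-A ...
print(access_device(a, log))        # proc-A: command queued
print(access_device(b, log))        # proc-B: page fault, access suspended
a.valid, b.valid = False, True      # ... then hands it to proc-B
print(access_device(b, log))        # proc-B: command queued
```
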
-
Patent number: 5016161
Abstract: The system of the present invention provides for the flow control of commands to devices connected through the system's memory management unit and is particularly useful in a multi-tasking computer system in which multiple processes access the same device. In the method and apparatus of the present invention, devices that are connected to the system through the MMU are controlled using the page fault mechanism of the MMU and the page fault handler in each segment. Addresses are allocated in the process address space for each process to provide for the addressing of the devices and device queues connected through the MMU, such that one device or one device queue is mapped into one segment of each process address space that will access the device. The "valid bits" associated with each page in a segment are turned on/off by the process or operating system in order to control the device.
Type: Grant
Filed: September 28, 1989
Date of Patent: May 14, 1991
Assignee: Sun Microsystems, Inc.
Inventors: James Van Loo, Susan Carrie, Jerald Evans, Jeffrey Spirn