Patents by Inventor Bor-Ming Hsieh
Bor-Ming Hsieh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160210174

Abstract: In one example, a processor may schedule a processing thread for execution based on a dynamic scheduling priority. A memory may associate a scheduling priority with a processing thread. A scheduler may adjust the scheduling priority based on a time frame. The scheduler may set a processing schedule for execution of the processing thread based on a scheduling parameter set including the scheduling priority. At least one processing core may execute the processing thread based on the processing schedule.

Type: Application
Filed: January 15, 2015
Publication date: July 21, 2016
Applicant: Microsoft Corporation
Inventors: Bor-Ming Hsieh, Glenn F. Evans, Neeraj Singh, Abhishek Sagar
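The time-frame-based priority adjustment described above can be sketched roughly as follows. The class name, the `boost_after` time frame, and the boost-one-level policy are invented for illustration; the abstract does not specify this exact mechanism.

```python
class Scheduler:
    """Toy sketch: priorities adjusted over a time frame (lower = higher)."""

    def __init__(self, boost_after):
        self.boost_after = boost_after  # time frame before a waiting thread is boosted
        self.threads = {}               # thread id -> [priority, enqueue time]

    def add(self, tid, priority, now):
        # Associate a scheduling priority with a processing thread.
        self.threads[tid] = [priority, now]

    def adjust(self, now):
        # Adjust the scheduling priority based on a time frame: any thread
        # waiting at least `boost_after` ticks is boosted one level.
        for entry in self.threads.values():
            if now - entry[1] >= self.boost_after and entry[0] > 0:
                entry[0] -= 1

    def pick(self, now):
        # Set the processing schedule: run the highest-priority thread next.
        self.adjust(now)
        return min(self.threads, key=lambda t: self.threads[t][0])
```

For example, a thread added at priority 5 keeps losing the CPU to a priority-3 thread until the adjustment raises it.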
-
Patent number: 8775737

Abstract: A method of managing memory of a computing device includes providing a first memory that can be allocated as cache memory or that can be used by a computing device component. A first memory segment can be allocated as cache memory in response to a cache miss. Cache size can be dynamically increased by allocating additional first memory segments as cache memory in response to subsequent cache misses. Cache memory size can be dynamically decreased by reallocating first memory cache segments for use by computing device components. The cache memory can be a cache for a second memory accessible to the computing device. The computing device can be a mobile device. The first memory can be an embedded memory and the second memory can comprise embedded, removable or external memory, or any combination thereof. The maximum size of the cache memory scales with the size of the first memory.

Type: Grant
Filed: December 2, 2010
Date of Patent: July 8, 2014
Assignee: Microsoft Corporation
Inventors: Bor-Ming Hsieh, Andrew M. Rogers
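The grow-on-miss, shrink-on-demand behavior can be illustrated with a small sketch. The class and method names are invented, and a Python dict stands in for real memory segments managed at the memory-manager level.

```python
class ElasticCache:
    """Sketch: cache grows one segment per miss, up to a maximum that
    scales with the size of the first memory; segments can be reclaimed."""

    def __init__(self, max_segments):
        self.max_segments = max_segments
        self.segments = {}  # key -> cached value, one segment per entry

    def read(self, key, backing):
        if key in self.segments:            # cache hit
            return self.segments[key]
        value = backing[key]                # cache miss: go to the second memory
        if len(self.segments) < self.max_segments:
            self.segments[key] = value      # dynamically grow by one segment
        return value

    def reclaim(self, n):
        # A device component needs memory: shrink the cache by
        # reallocating n segments back to it.
        for key in list(self.segments)[:n]:
            del self.segments[key]
        return n
```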
-
Patent number: 8561073

Abstract: Embodiments of the invention intelligently associate processes with core processors in a multi-core processor. The core processors are asymmetrical in that the core processors support different features or provide different resources. The features or resources are published by the core processors or otherwise identified (e.g., via a query). Responsive to a request to execute an instruction associated with a thread, one of the core processors is selected based on the resource or feature supporting execution of the instruction. The thread is assigned to the selected core processor such that the selected core processor executes the instruction and subsequent instructions from the assigned thread. In some embodiments, the resource or feature is emulated until an activity limit is reached, upon which the thread assignment occurs.

Type: Grant
Filed: September 19, 2008
Date of Patent: October 15, 2013
Assignee: Microsoft Corporation
Inventors: Yadhu Nandh Gopalan, John Mark Miller, Bor-Ming Hsieh
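A minimal sketch of feature-based core selection follows. The core names and feature sets are hypothetical; the point is that the selection keys off published features and that the assignment is sticky for subsequent instructions.

```python
# Hypothetical feature sets published by asymmetric cores.
CORES = {
    "core0": {"fpu"},
    "core1": {"fpu", "simd"},
}

ASSIGNMENTS = {}  # thread id -> core, sticky once made


def assign(tid, required_feature):
    """Pick a core whose published features support the instruction; the
    thread then stays on that core for subsequent instructions."""
    if tid in ASSIGNMENTS:
        return ASSIGNMENTS[tid]
    for core, features in CORES.items():
        if required_feature in features:
            ASSIGNMENTS[tid] = core
            return core
    raise RuntimeError("no core supports " + required_feature)
```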
-
Patent number: 8544014

Abstract: Scheduling of threads in a multi-core system is performed using per-processor queues for each core to hold threads with fixed affinity for each core. Cores are configured to pick the highest priority thread among the global run queue, which holds threads without affinity, and their respective per-processor queue. To select between two threads with the same priority on both queues, the threads are assigned sequence numbers based on their time of arrival. The sequence numbers may be weighted for either queue to prioritize one over the other.

Type: Grant
Filed: July 24, 2007
Date of Patent: September 24, 2013
Assignee: Microsoft Corporation
Inventors: Yadhu Gopalan, Bor-Ming Hsieh, Mark Miller
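The two-queue selection with weighted sequence numbers might look like the sketch below. The weight value and the tuple layout are illustrative assumptions, not the patented scheme itself.

```python
import heapq
import itertools

seq = itertools.count()  # arrival order shared across both queues
GLOBAL_WEIGHT = 1.5      # hypothetical weight penalizing the global queue


def push(queue, priority, weight, tid):
    # Lower tuple sorts first: priority, then weighted arrival sequence.
    heapq.heappush(queue, (priority, next(seq) * weight, tid))


def pick(global_q, per_cpu_q):
    """A core picks the best thread across the global run queue (no
    affinity) and its own per-processor queue (fixed-affinity threads)."""
    candidates = [q[0] for q in (global_q, per_cpu_q) if q]
    best = min(candidates)
    for q in (global_q, per_cpu_q):
        if q and q[0] == best:
            return heapq.heappop(q)[2]
```

With the weight above, a same-priority tie between near-simultaneous arrivals resolves toward the per-processor queue.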
-
Patent number: 8397290

Abstract: Embodiments provide a security infrastructure that may be configured to run on top of an existing operating system to control what resources can be accessed by an application and what APIs an application can call. Security decisions are made by taking into account both the current thread's identity and the current thread's call chain context to enable minimal privilege by default. The current thread context is captured and a copy of it is created to be used to perform security checks asynchronously. Every thread in the system has an associated identity. To obtain access to a particular resource, all the callers on the current thread are analyzed to make sure that each caller and thread has access to that resource. Only when each caller and thread has access to that resource is the caller given access to that resource.

Type: Grant
Filed: June 27, 2008
Date of Patent: March 12, 2013
Assignee: Microsoft Corporation
Inventors: Neil Laurence Coles, Scott Randall Shell, Upender Reddy Sandadi, Angelo Renato Vals, Matthew G. Lyons, Christopher Ross Jordan, Andrew Rogers, Yadhu Gopalan, Bor-Ming Hsieh
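The all-callers check can be condensed into a short sketch. The ACL shape, resource name, and identity strings are invented for illustration; only the rule (every caller on the chain, plus the thread identity, must be allowed) comes from the abstract.

```python
# Hypothetical ACL: resource -> set of identities allowed to touch it.
ACL = {"contacts.db": {"shell", "contacts_app"}}


def can_access(resource, thread_identity, call_chain):
    """Grant access only if the thread identity AND every caller on the
    current call chain may reach the resource (least privilege by default)."""
    allowed = ACL.get(resource, set())
    return thread_identity in allowed and all(c in allowed for c in call_chain)
```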
-
Patent number: 8327363

Abstract: Scheduling of threads in a multi-core system running various legacy applications along with multi-core compatible applications is configured such that threads from older single thread applications are assigned fixed affinity. Threads from multi-thread/single core applications are scheduled such that one thread at a time is made available to the cores based on the thread priority, preventing conflicts and increasing resource efficiency. Threads from multi-core compatible applications are handled regularly.

Type: Grant
Filed: July 24, 2007
Date of Patent: December 4, 2012
Assignee: Microsoft Corporation
Inventors: Yadhu Gopalan, Bor-Ming Hsieh, Mark Miller
-
Publication number: 20120144092

Abstract: A method of managing memory of a computing device includes providing a first memory that can be allocated as cache memory or that can be used by a computing device component. A first memory segment can be allocated as cache memory in response to a cache miss. Cache size can be dynamically increased by allocating additional first memory segments as cache memory in response to subsequent cache misses. Cache memory size can be dynamically decreased by reallocating first memory cache segments for use by computing device components. The cache memory can be a cache for a second memory accessible to the computing device. The computing device can be a mobile device. The first memory can be an embedded memory and the second memory can comprise embedded, removable or external memory, or any combination thereof. The maximum size of the cache memory scales with the size of the first memory.

Type: Application
Filed: December 2, 2010
Publication date: June 7, 2012
Applicant: Microsoft Corporation
Inventors: Bor-Ming Hsieh, Andrew M. Rogers
-
Patent number: 8069192

Abstract: A computing device includes a processor, a storage device having an executable file, and a file system for executing the file in place on the storage device on behalf of the processor. The file is divided into multiple non-contiguous fragments on the storage device, and the computing device further includes a virtual address translator interposed between the processor and the storage device for translating between physical addresses of the fragments of the file on the storage device and corresponding virtual addresses employed by the processor.

Type: Grant
Filed: December 1, 2004
Date of Patent: November 29, 2011
Assignee: Microsoft Corporation
Inventors: Andrew Michael Rogers, Yadhu N. Gopalan, Bor-Ming Hsieh, David Fischer Kelley
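The translation step amounts to a page-table-style lookup that makes the non-contiguous fragments look linear to the processor. A toy sketch, with an invented page size and mapping:

```python
PAGE = 4096  # illustrative page/fragment size

# Hypothetical layout: virtual page index -> physical page index on storage,
# so a file split into non-contiguous fragments still appears contiguous
# in the processor's virtual address space.
PAGE_TABLE = {0: 7, 1: 3, 2: 12}


def translate(vaddr):
    """Map a virtual address inside the executed-in-place file to the
    physical address of the corresponding fragment on storage."""
    vpage, offset = divmod(vaddr, PAGE)
    return PAGE_TABLE[vpage] * PAGE + offset
```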
-
Patent number: 7721268

Abstract: A method of acquiring software profile information of a target software application includes receiving a programmed interrupt while executing an application in a computer system, servicing the interrupt such that call stack information is acquired and processing the call stack information to produce statistical information concerning function calls. The call stack information includes program counter and other information which is derived from the target application as well as operating system. Some or all of the call stack information may be recorded. The statistical information includes statistics concerning the number of samples wherein a series of function calls are included in the call stack information and the number of samples wherein a set of function calls are at the top of the call stack information.

Type: Grant
Filed: October 1, 2004
Date of Patent: May 18, 2010
Assignee: Microsoft Corporation
Inventors: Susan Loh, Amjad Hussain, Bor-Ming Hsieh, John Robert Eldridge, Todd W. Squire
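Turning recorded call-stack samples into the two kinds of statistics the abstract mentions can be sketched as below. The "series of function calls" statistic is simplified here to adjacent caller/callee pairs, which is an assumption, not the patent's exact definition; stacks are listed top-of-stack first.

```python
from collections import Counter


def summarize(samples):
    """Aggregate call-stack samples: how often each function is on top of
    the stack, and how often each (caller, callee) pair appears anywhere
    in a sampled stack."""
    top = Counter(stack[0] for stack in samples if stack)
    pairs = Counter()
    for stack in samples:
        # zip adjacent frames; stack[i+1] called stack[i]
        for callee, caller in zip(stack, stack[1:]):
            pairs[(caller, callee)] += 1
    return top, pairs
```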
-
Patent number: 7716647

Abstract: A method of acquiring software profile information of a target software application includes monitoring an application program for system calls, detecting a system call of interest to the user, acquiring call stack information, and processing the call stack information to produce statistical information concerning function calls. The call stack information includes program counter and other information which is derived from the target application as well as operating system. The call stack information may be recorded. The statistical information includes statistics concerning the number of samples that any one function call is at a top of the call stack information, the number of samples that a series of function calls are included in the call stack information, and the number of samples that a set of function calls are at the top of the call stack information.

Type: Grant
Filed: October 1, 2004
Date of Patent: May 11, 2010
Assignee: Microsoft Corporation
Inventors: Susan Loh, Amjad Hussain, Bor-Ming Hsieh, John Robert Eldridge, Todd W. Squire
-
Publication number: 20100077185

Abstract: Embodiments of the invention intelligently associate processes with core processors in a multi-core processor. The core processors are asymmetrical in that the core processors support different features or provide different resources. The features or resources are published by the core processors or otherwise identified (e.g., via a query). Responsive to a request to execute an instruction associated with a thread, one of the core processors is selected based on the resource or feature supporting execution of the instruction. The thread is assigned to the selected core processor such that the selected core processor executes the instruction and subsequent instructions from the assigned thread. In some embodiments, the resource or feature is emulated until an activity limit is reached, upon which the thread assignment occurs.

Type: Application
Filed: September 19, 2008
Publication date: March 25, 2010
Applicant: Microsoft Corporation
Inventors: Yadhu Nandh Gopalan, John Mark Miller, Bor-Ming Hsieh
-
Publication number: 20090328180

Abstract: Embodiments provide a security infrastructure that may be configured to run on top of an existing operating system to control what resources can be accessed by an application and what APIs an application can call. Security decisions are made by taking into account both the current thread's identity and the current thread's call chain context to enable minimal privilege by default. The current thread context is captured and a copy of it is created to be used to perform security checks asynchronously. Every thread in the system has an associated identity. To obtain access to a particular resource, all the callers on the current thread are analyzed to make sure that each caller and thread has access to that resource. Only when each caller and thread has access to that resource is the caller given access to that resource.

Type: Application
Filed: June 27, 2008
Publication date: December 31, 2009
Applicant: Microsoft Corporation
Inventors: Neil Laurence Coles, Scott Randall Shell, Upender Sandadi, Angelo Renato Vals, Matthew G. Lyons, Christopher Ross Jordan, Andrew Rogers, Yadhu Gopalan, Bor-Ming Hsieh
-
Publication number: 20090031318

Abstract: Scheduling of threads in a multi-core system running various legacy applications along with multi-core compatible applications is configured such that threads from older single thread applications are assigned fixed affinity. Threads from multi-thread/single core applications are scheduled such that one thread at a time is made available to the cores based on the thread priority, preventing conflicts and increasing resource efficiency. Threads from multi-core compatible applications are handled regularly.

Type: Application
Filed: July 24, 2007
Publication date: January 29, 2009
Applicant: Microsoft Corporation
Inventors: Yadhu Gopalan, Bor-Ming Hsieh, Mark Miller
-
Publication number: 20090031317

Abstract: Scheduling of threads in a multi-core system is performed using per-processor queues for each core to hold threads with fixed affinity for each core. Cores are configured to pick the highest priority thread among the global run queue, which holds threads without affinity, and their respective per-processor queue. To select between two threads with the same priority on both queues, the threads are assigned sequence numbers based on their time of arrival. The sequence numbers may be weighted for either queue to prioritize one over the other.

Type: Application
Filed: July 24, 2007
Publication date: January 29, 2009
Applicant: Microsoft Corporation
Inventors: Yadhu Gopalan, Bor-Ming Hsieh, Mark Miller
-
Patent number: 7302684

Abstract: Various implementations of the described subject matter associate a plurality of threads that are sorted based on thread priority with a run queue in a deterministic amount of time. The run queue includes a first plurality of threads, which are sorted based on thread priority. The second plurality of threads is associated with the run queue in a bounded, or deterministic amount of time that is independent of the number of threads in the associated second plurality. Thus, the various implementations of the described subject matter allow an operating system to schedule other threads for execution within deterministic/predetermined time parameters.

Type: Grant
Filed: June 18, 2001
Date of Patent: November 27, 2007
Assignee: Microsoft Corporation
Inventor: Bor-Ming Hsieh
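One way to see why the association can be constant time: if the run queue holds references to already-sorted groups, attaching a new sorted group is a single pointer append regardless of the group's length. The structure below is an illustrative simplification, not the patented data structure; threads are (priority, id) tuples.

```python
run_queue = []  # list of references to pre-sorted thread groups


def attach(sorted_group):
    # Constant time, independent of len(sorted_group): one append.
    run_queue.append(sorted_group)


def next_thread():
    """Pop the highest-priority thread: the best head among the groups.
    (This lookup is O(number of groups), not O(number of threads).)"""
    best = min((g for g in run_queue if g), key=lambda g: g[0][0], default=None)
    return best.pop(0) if best else None
```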
-
Patent number: 7210146

Abstract: Various implementations of the described subject matter provide for the management of a multi-dimensional sleep queue, such that a group of threads with a same wake-up time are removed from the multi-dimensional sleep queue in a deterministic amount of time that is independent of the number of threads in the removed group. This is significant because it allows an operating system to schedule other threads for execution within deterministic/predetermined time parameters. Moreover, the described subject matter also provides for inserting new threads into the multi-dimensional sleep queue in a manner that allows other processes to execute during the thread insertion process.

Type: Grant
Filed: June 18, 2001
Date of Patent: April 24, 2007
Assignee: Microsoft Corporation
Inventor: Bor-Ming Hsieh
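Bucketing sleepers by wake-up time shows why removing a whole same-wake-time group can be constant time. A sketch with invented names; the patent's actual multi-dimensional structure is more elaborate than a flat dict.

```python
sleep_queue = {}  # wake_time -> list of thread ids sharing that wake-up time


def sleep_until(tid, wake_time):
    # Insert the thread into the bucket for its wake-up time.
    sleep_queue.setdefault(wake_time, []).append(tid)


def wake(now):
    # Detach the entire bucket in one step: the cost does not depend on
    # how many threads share this wake-up time.
    return sleep_queue.pop(now, [])
```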
-
Publication number: 20070076475

Abstract: A system that determines where a particular XIP component is stored on a medium and loads the component into RAM for execution, providing the ability to demand page specific components at will from storage media, frees up working RAM on memory constrained devices. A Binary File System uses a generic block driver component that loads the XIP code from a block based storage medium. Features of the file system include the ability to load pre-"fixed up" components from a block based device. The invention thus allows an operating system to load code that was previously Executed In Place (XIP) from a block-oriented device.

Type: Application
Filed: October 9, 2006
Publication date: April 5, 2007
Applicant: Microsoft Corporation
Inventors: Michael Malueg, Larry Morris, Bor-Ming Hsieh, Yadhu Gopalan
-
Patent number: 7120730

Abstract: A system that determines where a particular XIP component is stored on a medium and loads the component into RAM for execution, providing the ability to demand page specific components at will from storage media, frees up working RAM on memory constrained devices. A Binary File System uses a generic block driver component that loads the XIP code from a block based storage medium. Features of the file system include the ability to load pre-"fixed up" components from a block based device. The invention thus allows an operating system to load code that was previously Executed In Place (XIP) from a block-oriented device.

Type: Grant
Filed: December 19, 2005
Date of Patent: October 10, 2006
Assignee: Microsoft Corporation
Inventors: Michael D. Malueg, Larry Alan Morris, Bor-Ming Hsieh, Yadhu N. Gopalan
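Loading a formerly-XIP component from a block device can be sketched as a directory lookup plus a block copy into RAM. The directory format, block size, and component name below are all invented for illustration; they stand in for the Binary File System's real on-media layout.

```python
BLOCK = 512  # illustrative block size

# Hypothetical directory: component name -> (first block, length in bytes).
DIRECTORY = {"display.dll": (4, 1200)}


def load_component(name, medium):
    """Find where the component lives on the block medium and copy it
    into RAM for execution, instead of executing it in place."""
    first, length = DIRECTORY[name]
    blocks_needed = -(-length // BLOCK)  # ceiling division
    raw = b"".join(medium[first + i] for i in range(blocks_needed))
    return raw[:length]  # the in-RAM image, trimmed to the exact length
```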
-
Publication number: 20060101194

Abstract: A system that determines where a particular XIP component is stored on a medium and loads the component into RAM for execution, providing the ability to demand page specific components at will from storage media, frees up working RAM on memory constrained devices. A Binary File System uses a generic block driver component that loads the XIP code from a block based storage medium. Features of the file system include the ability to load pre-"fixed up" components from a block based device. The invention thus allows an operating system to load code that was previously Executed In Place (XIP) from a block-oriented device.

Type: Application
Filed: December 19, 2005
Publication date: May 11, 2006
Applicant: Microsoft Corporation
Inventors: Michael Malueg, Larry Morris, Bor-Ming Hsieh, Yadhu Gopalan
-
Publication number: 20060092846

Abstract: A method of acquiring software profile information of a target software application includes monitoring an application program for system calls, detecting a system call of interest to the user, acquiring call stack information, and processing the call stack information to produce statistical information concerning function calls. The call stack information includes program counter and other information which is derived from the target application as well as operating system. The call stack information may be recorded. The statistical information includes statistics concerning the number of samples that any one function call is at a top of the call stack information, the number of samples that a series of function calls are included in the call stack information, and the number of samples that a set of function calls are at the top of the call stack information.

Type: Application
Filed: October 1, 2004
Publication date: May 4, 2006
Applicant: Microsoft Corporation
Inventors: Susan Loh, Amjad Hussain, Bor-Ming Hsieh, John Eldridge, Todd Squire