Patents by Inventor Emad A. Omara
Emad A. Omara has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220368685
Abstract: Aspects of the subject technology provide for shared experience sessions within a group communications session such as a video call. The shared experience session may be, as one example, a co-watching session in which the participants in the call watch a video together while in the call. Encrypted shared state data may be exchanged between the participant devices, with which the participant devices can provide synchronized and coordinated output of shared experience data for the shared experience session of the group communications session.
Type: Application
Filed: March 29, 2022
Publication date: November 17, 2022
Inventors: Daniel B. POLLACK, Jingyao ZHANG, Jose A. LOZANO HINOJOSA, Emad OMARA, Yilmaz Can CECEN, Angus N. BURTON, Blerim CICI
-
Patent number: 9952912
Abstract: A method of executing an algorithm in a parallel manner using a plurality of concurrent threads includes generating a lock-free barrier that includes a variable that stores both a total participants count and a current participants count. The total participants count indicates a total number of threads in the plurality of concurrent threads that are participating in a current phase of the algorithm, and the current participants count indicates a total number of threads in the plurality of concurrent threads that have completed the current phase. The barrier blocks the threads that have completed the current phase. The total participants count is dynamically updated during execution of the current phase of the algorithm. The generating, blocking, and dynamically updating are performed by at least one processor.
Type: Grant
Filed: December 30, 2014
Date of Patent: April 24, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Emad Omara, John Duffy
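The abstract above describes a phase barrier whose total and current participant counts live in a single variable so both can be read and updated together. A minimal Python sketch of that idea follows; the class name and bit layout are illustrative, and since Python lacks a user-level atomic compare-and-swap, a lock stands in for the single atomic update the lock-free design would use.

```python
import threading

class PackedBarrier:
    """Sketch: total participant count in the high bits, current (arrived)
    count in the low bits of one word, as the abstract describes. A real
    lock-free barrier would update the word with one atomic compare-and-swap;
    here a lock simulates that atomicity."""

    MASK = 0xFFFF  # low 16 bits: threads that finished the current phase

    def __init__(self, total):
        self._word = total << 16          # total in high bits, current = 0
        self._lock = threading.Lock()
        self._phase_done = threading.Condition(self._lock)
        self._phase = 0

    def add_participant(self):
        # The total participants count may change dynamically mid-phase.
        with self._lock:
            self._word += 1 << 16

    def signal_and_wait(self):
        with self._phase_done:
            phase = self._phase
            self._word += 1                       # bump the current count
            total, current = self._word >> 16, self._word & self.MASK
            if current == total:
                self._word = total << 16          # reset current for next phase
                self._phase += 1
                self._phase_done.notify_all()
            else:
                while self._phase == phase:       # block until phase completes
                    self._phase_done.wait()
```

Packing both counts into one word is what lets the lock-free version decide "am I the last arrival?" and reset the phase in a single atomic step, with no window where another thread sees a half-updated pair.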
-
Patent number: 9442827
Abstract: A dataflow of a distributed application is visualized in a locally simulated execution environment. A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates an instance of each computational vertex to be spawned and executed in the local execution environment based on the organizational topology and indicated data storage locations.
Type: Grant
Filed: March 18, 2014
Date of Patent: September 13, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Massimo Mascaro, Igor Ostrovsky, Emad A. Omara
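The scheduling flow in this abstract (graph manager for readiness, data manager for data placement, cluster manager for topology, then local spawning) can be sketched as a single scheduling pass. All names and data shapes below are hypothetical; this only illustrates the query sequence, not the patented implementation.

```python
def simulate_locally(job_graph, completed, topology, data_locations, run_vertex):
    """One scheduling pass. 'Graph manager' query: find vertices whose
    dependencies are all completed. 'Data manager' query: look up where each
    ready vertex's data lives. 'Cluster manager' query: map that location to
    a simulated node from the topology, then run the vertex locally.

    job_graph: dict vertex -> set of dependency vertices
    topology: dict data location -> simulated node
    """
    ready = [v for v, deps in job_graph.items()
             if v not in completed and deps <= completed]
    results = {}
    for v in ready:
        node = topology[data_locations[v]]   # simulate data-local placement
        results[v] = run_vertex(v, node)
    return results
```

Repeating this pass, feeding each round's finished vertices back into `completed`, walks the whole dataflow graph in dependency order on a single machine while preserving the placement decisions the real cluster would make.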
-
Patent number: 9418175
Abstract: An enumerable concurrent data structure referred to as a concurrent bag is provided. The concurrent bag is accessible by concurrent threads and includes a set of local lists configured as a linked list and a dictionary. The dictionary includes an entry for each local list that identifies the thread that created the local list and the location of the local list. Each local list includes a set of data elements configured as a linked list. A global lock on the concurrent bag and local locks on each local list allow operations that involve enumeration to be performed on the concurrent bag.
Type: Grant
Filed: March 31, 2009
Date of Patent: August 16, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Emad Omara, John J. Duffy
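The structure this abstract describes (per-thread local lists, a dictionary keyed by the creating thread, a global lock plus per-list local locks for enumeration) can be sketched as follows. Python lists stand in for the linked lists, and the locking granularity is illustrative.

```python
import threading

class ConcurrentBag:
    """Sketch of the abstract's design: each thread appends to its own local
    list, a dictionary maps thread id -> local list, and enumeration takes
    the global lock plus every local lock so the snapshot is consistent."""

    class _LocalList:
        def __init__(self):
            self.items = []                 # stands in for the linked list
            self.lock = threading.Lock()

    def __init__(self):
        self._lists = {}                    # dictionary: thread id -> local list
        self._global_lock = threading.Lock()

    def add(self, item):
        tid = threading.get_ident()
        local = self._lists.get(tid)
        if local is None:
            with self._global_lock:         # registering a new local list
                local = self._lists.setdefault(tid, self._LocalList())
        with local.lock:                    # adds touch only the local lock
            local.items.append(item)

    def __iter__(self):
        # Enumeration freezes the bag: the global lock stops new local lists
        # from appearing, the local locks stop concurrent adds to each list.
        with self._global_lock:
            snapshot = []
            for local in self._lists.values():
                with local.lock:
                    snapshot.extend(local.items)
        return iter(snapshot)
```

The point of the split locking is that the common path (a thread adding to its own list) never contends on the global lock; only the rare enumerate-everything operation pays for full synchronization.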
-
Publication number: 20150186190
Abstract: A method of executing an algorithm in a parallel manner using a plurality of concurrent threads includes generating a lock-free barrier that includes a variable that stores both a total participants count and a current participants count. The total participants count indicates a total number of threads in the plurality of concurrent threads that are participating in a current phase of the algorithm, and the current participants count indicates a total number of threads in the plurality of concurrent threads that have completed the current phase. The barrier blocks the threads that have completed the current phase. The total participants count is dynamically updated during execution of the current phase of the algorithm. The generating, blocking, and dynamically updating are performed by at least one processor.
Type: Application
Filed: December 30, 2014
Publication date: July 2, 2015
Inventors: Emad Omara, John Duffy
-
Patent number: 8997101
Abstract: Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system.
Type: Grant
Filed: March 4, 2014
Date of Patent: March 31, 2015
Assignee: Microsoft Corporation
Inventors: Emad A. Omara, John J. Duffy
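The abstract makes the spin budget a function of a thread's "turn" (how many threads were already waiting when it arrived) and the processor count. A minimal sketch of one such function follows; the constants and the exact formula are hypothetical, chosen only to show the shape of the policy.

```python
import os

def spin_count(turn, processors=None, base_spins=100, max_spins=10_000):
    """Illustrative policy (constants hypothetical): the spin budget before a
    thread re-checks the lock grows with its turn, capped at max_spins, and
    collapses to zero once waiters outnumber processors, since spinning then
    mostly burns cycles the lock holder needs."""
    if processors is None:
        processors = os.cpu_count() or 1
    if turn >= processors:
        # More waiters than processors: a late arrival should stop spinning
        # and yield instead of competing for CPU time.
        return 0
    return min(base_spins * (turn + 1), max_spins)
```

The key property, matching the abstract, is that the result depends on both inputs: earlier arrivals (small turn) spin briefly because the lock should free up soon, later arrivals spin longer per check cycle, and oversubscribed arrivals do not spin at all.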
-
Patent number: 8924984
Abstract: A method of executing an algorithm in a parallel manner using a plurality of concurrent threads includes generating a lock-free barrier that includes a variable that stores both a total participants count and a current participants count. The total participants count indicates a total number of threads in the plurality of concurrent threads that are participating in a current phase of the algorithm, and the current participants count indicates a total number of threads in the plurality of concurrent threads that have completed the current phase. The barrier blocks the threads that have completed the current phase. The total participants count is dynamically updated during execution of the current phase of the algorithm. The generating, blocking, and dynamically updating are performed by at least one processor.
Type: Grant
Filed: June 26, 2009
Date of Patent: December 30, 2014
Assignee: Microsoft Corporation
Inventors: Emad Omara, John Duffy
-
Publication number: 20140201717
Abstract: A dataflow of a distributed application is visualized in a locally simulated execution environment. A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates an instance of each computational vertex to be spawned and executed in the local execution environment based on the organizational topology and indicated data storage locations.
Type: Application
Filed: March 18, 2014
Publication date: July 17, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Massimo Mascaro, Igor Ostrovsky, Emad A. Omara
-
Publication number: 20140189699
Abstract: Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system.
Type: Application
Filed: March 4, 2014
Publication date: July 3, 2014
Inventors: Emad A. Omara, John J. Duffy
-
Patent number: 8707275
Abstract: A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates to a vertex spawner that an instance of each computational vertex is to be spawned in the local execution environment based on the organizational topology and indicated data storage locations, and indicates to the local execution environment that the spawned vertices are to be executed.
Type: Grant
Filed: September 14, 2010
Date of Patent: April 22, 2014
Assignee: Microsoft Corporation
Inventors: Massimo Mascaro, Igor Ostrovsky, Emad A. Omara
-
Patent number: 8683470
Abstract: Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system.
Type: Grant
Filed: November 24, 2009
Date of Patent: March 25, 2014
Assignee: Microsoft Corporation
Inventors: Emad A. Omara, John J. Duffy
-
Patent number: 8326886
Abstract: A method of storing per-thread, per-instance data includes identifying a unique index value corresponding to a first instance, identifying type parameters based on the identified index value, and instantiating a generic holder object based on the identified type parameters. The generic holder object includes a thread local field configured to store per-thread data that is local to the first instance.
Type: Grant
Filed: January 21, 2010
Date of Patent: December 4, 2012
Assignee: Microsoft Corporation
Inventors: Stephen H. Toub, Emad Omara, John Duffy
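The holder this abstract describes keys storage on both the thread and the instance, so every (thread, instance) pair gets its own slot. A rough Python analogue follows; Python has no generic type parameters, so the unique per-instance index is kept for illustration while a `threading.local` field provides the per-thread dimension. All names are hypothetical.

```python
import itertools
import threading

class ThreadLocalHolder:
    """Sketch: each holder instance draws a unique index, and its data field
    is thread-local, so each (thread, instance) pair sees its own slot. The
    original uses the index to select a distinct generic holder type; that
    has no direct Python analogue."""

    _next_index = itertools.count()

    def __init__(self, factory):
        self.index = next(self._next_index)   # unique index per instance
        self._factory = factory               # builds a slot's initial value
        self._local = threading.local()       # per-thread storage

    @property
    def value(self):
        # First access on a given thread lazily initializes that thread's slot.
        if not hasattr(self._local, "value"):
            self._local.value = self._factory()
        return self._local.value

    @value.setter
    def value(self, v):
        self._local.value = v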
-
Publication number: 20120066667
Abstract: A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates to a vertex spawner that an instance of each computational vertex is to be spawned in the local execution environment based on the organizational topology and indicated data storage locations, and indicates to the local execution environment that the spawned vertices are to be executed.
Type: Application
Filed: September 14, 2010
Publication date: March 15, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Massimo Mascaro, Igor Ostrovsky, Emad A. Omara
-
Publication number: 20110191775
Abstract: The forking of thread operations. At runtime, a task is identified as being divided into multiple subtasks to be accomplished by multiple threads (i.e., forked threads). In order to be able to verify when the forked threads have completed their task, multiple counter memory locations are set up and updated as forked threads complete. The multiple counter memory locations are evaluated in the aggregate to determine whether all of the forked threads are completed. Once the forked threads are determined to be completed, a join operation may be performed. Rather than a single memory location, multiple memory locations are used to account for thread completion. This reduces risk of thread contention.
Type: Application
Filed: January 29, 2010
Publication date: August 4, 2011
Applicant: Microsoft Corporation
Inventors: Emad A. Omara, John J. Duffy
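The idea in this abstract, spreading completion counts across several memory locations and summing them at join time instead of hammering one shared counter, can be sketched as a striped counter. The stripe count and the thread-id hashing below are illustrative, and per-stripe locks stand in for the per-slot atomic increments a native implementation would use.

```python
import threading

class StripedJoinCounter:
    """Sketch: forked threads tick one of several counter slots (chosen by
    thread id) rather than a single shared counter, and the join condition
    is evaluated over the aggregate sum of all slots. Relies on CPython's
    GIL for visibility of the list entries when summing."""

    def __init__(self, expected, stripes=8):
        self._expected = expected
        self._counts = [0] * stripes
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._done = threading.Event()

    def signal_done(self):
        slot = threading.get_ident() % len(self._counts)
        with self._locks[slot]:
            self._counts[slot] += 1
        # Aggregate check: sum every slot instead of reading one location.
        # The chronologically last increment's sum sees all earlier ones.
        if sum(self._counts) >= self._expected:
            self._done.set()

    def join(self, timeout=None):
        return self._done.wait(timeout)
```

Because different threads usually land on different slots, the cache line holding the counter is no longer a single contention hotspot; the cost moves to the join side, which must read several locations instead of one.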
-
Publication number: 20110179038
Abstract: A method of storing per-thread, per-instance data includes identifying a unique index value corresponding to a first instance, identifying type parameters based on the identified index value, and instantiating a generic holder object based on the identified type parameters. The generic holder object includes a thread local field configured to store per-thread data that is local to the first instance.
Type: Application
Filed: January 21, 2010
Publication date: July 21, 2011
Applicant: Microsoft Corporation
Inventors: Stephen H. Toub, Emad Omara, John Duffy
-
Publication number: 20110126204
Abstract: Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system.
Type: Application
Filed: November 24, 2009
Publication date: May 26, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Emad A. Omara, John J. Duffy
-
Publication number: 20100333107
Abstract: A method of executing an algorithm in a parallel manner using a plurality of concurrent threads includes generating a lock-free barrier that includes a variable that stores both a total participants count and a current participants count. The total participants count indicates a total number of threads in the plurality of concurrent threads that are participating in a current phase of the algorithm, and the current participants count indicates a total number of threads in the plurality of concurrent threads that have completed the current phase. The barrier blocks the threads that have completed the current phase. The total participants count is dynamically updated during execution of the current phase of the algorithm. The generating, blocking, and dynamically updating are performed by at least one processor.
Type: Application
Filed: June 26, 2009
Publication date: December 30, 2010
Applicant: Microsoft Corporation
Inventors: Emad Omara, John Duffy
-
Publication number: 20100250507
Abstract: An enumerable concurrent data structure referred to as a concurrent bag is provided. The concurrent bag is accessible by concurrent threads and includes a set of local lists configured as a linked list and a dictionary. The dictionary includes an entry for each local list that identifies the thread that created the local list and the location of the local list. Each local list includes a set of data elements configured as a linked list. A global lock on the concurrent bag and local locks on each local list allow operations that involve enumeration to be performed on the concurrent bag.
Type: Application
Filed: March 31, 2009
Publication date: September 30, 2010
Applicant: Microsoft Corporation
Inventors: Emad Omara, John J. Duffy