Patents by Inventor Hoi Huu Vo
Hoi Huu Vo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10924525
Abstract: A server computing device for inducing latency on target input streams is provided. The server computing device includes a processor configured to receive a plurality of input streams from a respective plurality of client computing devices. Each input stream includes a plurality of inputs controlling actions of respective characters in a multiplayer online software program. The processor is further configured to determine a latency of each of the input streams, identify a higher latency input stream and a lower latency input stream among the plurality of input streams, and induce a higher latency in the lower latency input stream to narrow a difference in latency between the higher latency input stream and the lower latency input stream.
Type: Grant
Filed: October 1, 2018
Date of Patent: February 16, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathan David Morrison, Eduardo A. Cuervo Laffaye, Hoi Huu Vo
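The core idea of this abstract can be sketched in a few lines: measure each stream's latency, then add artificial delay to the faster streams so every player's inputs arrive on a roughly even footing. This is an illustrative sketch only, not the patented implementation; the function name and millisecond units are invented for the example.

```python
def induced_delays(latencies_ms):
    """Return the extra delay (ms) to induce on each input stream so that
    every stream's effective latency approaches that of the slowest one,
    narrowing the latency gap between players."""
    target = max(latencies_ms)  # the highest-latency stream sets the pace
    return [target - lat for lat in latencies_ms]

# Three client streams: the 35 ms stream is delayed by 45 ms, the
# 50 ms stream by 30 ms, and the 80 ms stream is left untouched.
delays = induced_delays([80, 35, 50])
print(delays)  # [0, 45, 30]
```

In practice the server would apply these delays continuously as measured latencies change, but the equalization arithmetic is the part shown here.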
-
Publication number: 20200106819
Abstract: A server computing device for inducing latency on target input streams is provided. The server computing device includes a processor configured to receive a plurality of input streams from a respective plurality of client computing devices. Each input stream includes a plurality of inputs controlling actions of respective characters in a multiplayer online software program. The processor is further configured to determine a latency of each of the input streams, identify a higher latency input stream and a lower latency input stream among the plurality of input streams, and induce a higher latency in the lower latency input stream to narrow a difference in latency between the higher latency input stream and the lower latency input stream.
Type: Application
Filed: October 1, 2018
Publication date: April 2, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jonathan David Morrison, Eduardo A. Cuervo Laffaye, Hoi Huu Vo
-
Patent number: 10108528
Abstract: High-performance tracing can be achieved for an input program having a plurality of instructions. Techniques such as executable instruction transcription can enable execution of a plurality of instructions at a time via a run buffer. Execution information can be extracted via run buffer execution. Fidelity of execution can be preserved by executing instructions on the target processor. Other features, such as an executable extraction instruction ensemble, branch interpretation, and relative address compensation can be implemented. High quality instruction tracing can thus be achieved without the usual performance penalties.
Type: Grant
Filed: August 26, 2016
Date of Patent: October 23, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jay Krell, HoYuen Chau, Allan James Murphy, Danny Chen, Steven Pratschner, Hoi Huu Vo
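The run-buffer concept above can be illustrated with a toy interpreter: instructions are transcribed into a buffer in batches, the batch is executed, and execution information is recorded as a trace along the way. This is only an analogy in Python (the patent concerns native execution on the target processor); the instruction set and function names are invented for the example.

```python
def trace_program(program, batch_size=4):
    """Execute a toy instruction list in fixed-size batches via a 'run
    buffer', collecting (address, opcode, operand) trace records."""
    trace = []
    acc = 0   # single accumulator register for the toy machine
    pc = 0
    while pc < len(program):
        run_buffer = program[pc:pc + batch_size]      # transcribe a batch
        for offset, (op, arg) in enumerate(run_buffer):
            trace.append((pc + offset, op, arg))      # extracted execution info
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
        pc += len(run_buffer)
    return acc, trace

result, trace = trace_program([("add", 2), ("mul", 3), ("add", 4)])
print(result)  # 10, with a three-entry trace
```

The real technique gains its speed by running many instructions natively per buffer instead of interpreting one at a time, which is what the batching stands in for here.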
-
Publication number: 20180060212
Abstract: High-performance tracing can be achieved for an input program having a plurality of instructions. Techniques such as executable instruction transcription can enable execution of a plurality of instructions at a time via a run buffer. Execution information can be extracted via run buffer execution. Fidelity of execution can be preserved by executing instructions on the target processor. Other features, such as an executable extraction instruction ensemble, branch interpretation, and relative address compensation can be implemented. High quality instruction tracing can thus be achieved without the usual performance penalties.
Type: Application
Filed: August 26, 2016
Publication date: March 1, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jay Krell, HoYuen Chau, Allan James Murphy, Danny Chen, Steven Pratschner, Hoi Huu Vo
-
Patent number: 9471348
Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Once the tokens are depleted, packets are held until sufficient tokens are reloaded to the token-bucket queues.
Type: Grant
Filed: July 1, 2013
Date of Patent: October 18, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
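The token-bucket queue described in this abstract is a standard rate-limiting structure, and its behavior can be sketched directly: tokens are reloaded over time at the policy's rate, each packet consumes tokens equal to its size, and packets that find the bucket empty are held. A minimal sketch, not the patented scheduler; the class and parameter names are invented for illustration.

```python
import time

class TokenBucket:
    """Minimal token-bucket queue: tokens reload at a fixed rate and are
    consumed as packets pass through; a packet that cannot be covered by
    the available tokens is held (try_send returns False)."""

    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes          # bucket starts full
        self.last = time.monotonic()

    def _reload(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, packet_bytes):
        """Consume tokens for one packet; False means 'hold the packet'."""
        self._reload()
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=100, capacity_bytes=1500)
print(bucket.try_send(1500))  # True: the bucket starts full
print(bucket.try_send(1500))  # False: held until tokens reload
```

In the patented scheme one such bucket would be assigned per VM, with each bucket's reload rate set from that tenant's QoS policy.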
-
Publication number: 20130298123
Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Once the tokens are depleted, packets are held until sufficient tokens are reloaded to the token-bucket queues.
Type: Application
Filed: July 1, 2013
Publication date: November 7, 2013
Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
-
Patent number: 8477610
Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Once the tokens are depleted, packets are held until sufficient tokens are reloaded to the token-bucket queues.
Type: Grant
Filed: May 31, 2010
Date of Patent: July 2, 2013
Assignee: Microsoft Corporation
Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
-
Publication number: 20110292792
Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Once the tokens are depleted, packets are held until sufficient tokens are reloaded to the token-bucket queues.
Type: Application
Filed: May 31, 2010
Publication date: December 1, 2011
Applicant: Microsoft Corporation
Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
-
Patent number: 7930705
Abstract: An application compatibility module is disclosed that provides compatibility between legacy binary system modules ("legacy binaries") and a native operating system. The application compatibility module therefore allows legacy applications to execute within the native operating system, while still using their corresponding legacy binaries. The application compatibility module may provide compatibility between legacy binaries and the native operating system by translating communications between the legacy binaries and the native operating system.
Type: Grant
Filed: April 6, 2007
Date of Patent: April 19, 2011
Assignee: Microsoft Corporation
Inventors: Hoi Huu Vo, Samer N. Arafeh
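The translation idea in this abstract amounts to an adapter layer: calls made in the form a legacy binary expects are rewritten into the form the native system expects. The sketch below illustrates that shape only; every name in it (the "native" API, the legacy flag values) is invented for the example and is not from the patent.

```python
def native_open_file(path, mode="r", encoding="utf-8"):
    """Stand-in for a native OS API that takes keyword-style options."""
    return {"path": path, "mode": mode, "encoding": encoding}

# Hypothetical legacy convention: file mode passed as a numeric flag.
LEGACY_MODE_FLAGS = {0: "r", 1: "w"}

def legacy_open_file(path, flags):
    """Compatibility shim: translate a legacy-style call into the call
    the native operating system actually understands."""
    return native_open_file(path, mode=LEGACY_MODE_FLAGS[flags])

# A legacy caller keeps using its old calling convention unchanged.
handle = legacy_open_file("data.txt", 1)
```

The module described in the patent plays this role for whole binary interfaces rather than a single function, translating communications in both directions.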
-
Patent number: 7398276
Abstract: Compression and decompression of data such as a sequential list of executable instructions (e.g., program binaries) by uniformly applying a predictive model generated from one segment of the executable list as a common predictive starting point for the other segments of the executable list. This permits random access and decompression of any segment of the executable list once a first segment (or another reference segment) of the executable list has been decompressed. This means that when executing an executable list (e.g., an executable file), a particular segment(s) of the executable list may not need to be accessed and decompressed at all if there are no instructions in that particular segment(s) that are executed.
Type: Grant
Filed: May 30, 2002
Date of Patent: July 8, 2008
Assignee: Microsoft Corporation
Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
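The shared-model idea can be demonstrated with an analogy: zlib's preset-dictionary feature plays the role of the common predictive starting point. Each segment is compressed against data taken from a reference segment, so any single segment can later be decompressed on its own, without first decompressing the segments before it. This is an illustrative analogy only; the patent uses a PPM-style predictive model, not zlib, and the instruction-like data here is invented.

```python
import zlib

# Four "segments" of repetitive instruction-like bytes.
segments = [b"push rax; mov rax, rbx; call f; pop rax; " * 20
            for _ in range(4)]
model = segments[0]  # the reference segment acts as the shared model

# Compress every other segment against the shared model.
compressed = []
for seg in segments[1:]:
    c = zlib.compressobj(zdict=model)
    compressed.append(c.compress(seg) + c.flush())

# Random access: decompress segment 2 alone, needing only the model.
d = zlib.decompressobj(zdict=model)
restored = d.decompress(compressed[1]) + d.flush()
assert restored == segments[2]
```

As the abstract notes, this is what lets a loader skip decompressing segments whose instructions are never executed.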
-
Publication number: 20080034377
Abstract: An application compatibility module is disclosed that provides compatibility between legacy binary system modules ("legacy binaries") and a native operating system. The application compatibility module therefore allows legacy applications to execute within the native operating system, while still using their corresponding legacy binaries. The application compatibility module may provide compatibility between legacy binaries and the native operating system by translating communications between the legacy binaries and the native operating system.
Type: Application
Filed: April 6, 2007
Publication date: February 7, 2008
Applicant: Microsoft Corporation
Inventors: Hoi Huu Vo, Samer N. Arafeh
-
Patent number: 7305541
Abstract: Compressing program binaries with reduced compression ratios. One or several pre-processing acts are performed before performing compression using a local sequential correlation oriented compression technology such as PPM, or one of its variants or improvements. One pre-processing act splits the binaries into several substreams that have high local sequential correlation. Such splitting takes into consideration the correlation between common fields in different instructions as well as the correlation between different fields in the same instruction. Another pre-processing act reschedules binary instructions to improve the degree of local sequential correlation without affecting dependencies between instructions. Yet another pre-processing act replaces common operation codes in the instruction with symbols from a second alphabet, thereby distinguishing between operation codes that have a particular value, and other portions of the instruction that just happen to have the same value.
Type: Grant
Filed: March 21, 2005
Date of Patent: December 4, 2007
Assignee: Microsoft Corporation
Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
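The split-stream pre-processing act can be demonstrated with a toy instruction encoding: grouping the same field from every instruction together gives each substream much higher local sequential correlation than the interleaved original, so a sequential compressor does better on the substreams. This sketch uses zlib rather than PPM and an invented two-byte instruction format, purely for illustration.

```python
import random
import zlib

random.seed(0)

# Toy encoding: each instruction is one opcode byte plus one operand byte.
# Opcodes repeat heavily; operands here are effectively incompressible.
opcodes = bytes([0x8B] * 500)
operands = bytes(random.randrange(256) for _ in range(500))

# Original binary: fields interleaved instruction by instruction.
raw = bytes(b for pair in zip(opcodes, operands) for b in pair)

# Pre-processing: split into per-field substreams, then compress each.
split_size = len(zlib.compress(opcodes)) + len(zlib.compress(operands))
raw_size = len(zlib.compress(raw))
print(split_size < raw_size)  # True: the split substreams compress better
```

The patent's splitting is more careful (it weighs correlation across fields as well as within them), but this captures why separating fields helps at all.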
-
Patent number: 6907516
Abstract: Compressing program binaries with reduced compression ratios. One or several pre-processing acts are performed before performing compression using a local sequential correlation oriented compression technology such as PPM, or one of its variants or improvements. One pre-processing act splits the binaries into several substreams that have high local sequential correlation. Such splitting takes into consideration the correlation between common fields in different instructions as well as the correlation between different fields in the same instruction. Another pre-processing act reschedules binary instructions to improve the degree of local sequential correlation without affecting dependencies between instructions. Yet another pre-processing act replaces common operation codes in the instruction with symbols from a second alphabet, thereby distinguishing between operation codes that have a particular value, and other portions of the instruction that just happen to have the same value.
Type: Grant
Filed: May 30, 2002
Date of Patent: June 14, 2005
Assignee: Microsoft Corporation
Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
-
Publication number: 20030225997
Abstract: Compressing program binaries with reduced compression ratios. One or several pre-processing acts are performed before performing compression using a local sequential correlation oriented compression technology such as PPM, or one of its variants or improvements. One pre-processing act splits the binaries into several substreams that have high local sequential correlation. Such splitting takes into consideration the correlation between common fields in different instructions as well as the correlation between different fields in the same instruction. Another pre-processing act reschedules binary instructions to improve the degree of local sequential correlation without affecting dependencies between instructions. Yet another pre-processing act replaces common operation codes in the instruction with symbols from a second alphabet, thereby distinguishing between operation codes that have a particular value, and other portions of the instruction that just happen to have the same value.
Type: Application
Filed: May 30, 2002
Publication date: December 4, 2003
Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
-
Publication number: 20030225775
Abstract: Compression and decompression of data such as a sequential list of executable instructions (e.g., program binaries) by uniformly applying a predictive model generated from one segment of the executable list as a common predictive starting point for the other segments of the executable list. This permits random access and decompression of any segment of the executable list once a first segment (or another reference segment) of the executable list has been decompressed. This means that when executing an executable list (e.g., an executable file), a particular segment(s) of the executable list may not need to be accessed and decompressed at all if there are no instructions in that particular segment(s) that are executed.
Type: Application
Filed: May 30, 2002
Publication date: December 4, 2003
Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo