Patents by Inventor Hoi Huu Vo

Hoi Huu Vo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10924525
    Abstract: A server computing device for inducing latency on target input streams is provided. The server computing device includes a processor configured to receive a plurality of input streams from a respective plurality of client computing devices. Each input stream includes a plurality of inputs controlling actions of respective characters in a multiplayer online software program. The processor is further configured to determine a latency of each of the input streams, identify a higher latency input stream and a lower latency input stream among the plurality of input streams, and induce a higher latency in the lower latency input stream to narrow a difference in latency between the higher latency input stream and the lower latency input stream.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: February 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathan David Morrison, Eduardo A. Cuervo Laffaye, Hoi Huu Vo
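As an informal illustration only (not the patented implementation), the latency-equalization idea in the abstract can be sketched in Python: given measured latencies per input stream, compute how much artificial delay to induce on each faster stream so its effective latency narrows the gap to the slowest stream.

```python
def induced_delays(latencies):
    """Given measured latencies (ms) per input stream, return the extra
    delay to induce on each stream so every stream's effective latency
    matches the highest-latency stream."""
    target = max(latencies.values())          # the higher-latency stream sets the bar
    return {stream: target - ms for stream, ms in latencies.items()}
```

For example, with streams at 80 ms, 30 ms, and 55 ms, the 30 ms stream would be delayed by an extra 50 ms and the 55 ms stream by 25 ms, while the slowest stream gets no induced delay.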
  • Publication number: 20200106819
    Abstract: A server computing device for inducing latency on target input streams is provided. The server computing device includes a processor configured to receive a plurality of input streams from a respective plurality of client computing devices. Each input stream includes a plurality of inputs controlling actions of respective characters in a multiplayer online software program. The processor is further configured to determine a latency of each of the input streams, identify a higher latency input stream and a lower latency input stream among the plurality of input streams, and induce a higher latency in the lower latency input stream to narrow a difference in latency between the higher latency input stream and the lower latency input stream.
    Type: Application
    Filed: October 1, 2018
    Publication date: April 2, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jonathan David Morrison, Eduardo A. Cuervo Laffaye, Hoi Huu Vo
  • Patent number: 10108528
    Abstract: High-performance tracing can be achieved for an input program having a plurality of instructions. Techniques such as executable instruction transcription can enable execution of a plurality of instructions at a time via a run buffer. Execution information can be extracted via run buffer execution. Fidelity of execution can be preserved by executing instructions on the target processor. Other features, such as an executable extraction instruction ensemble, branch interpretation, and relative address compensation can be implemented. High quality instruction tracing can thus be achieved without the usual performance penalties.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: October 23, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jay Krell, HoYuen Chau, Allan James Murphy, Danny Chen, Steven Pratschner, Hoi Huu Vo
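A toy sketch of the run-buffer idea, using a hypothetical two-opcode instruction set (not the patented mechanism): instructions are transcribed into a fixed-size run buffer, executed a batch at a time, and per-step execution information is recorded as the trace.

```python
def trace_program(program, buffer_size=4):
    """Execute (opcode, operand) pairs in fixed-size run buffers,
    recording execution information for each instruction."""
    trace = []
    acc = 0
    for start in range(0, len(program), buffer_size):
        run_buffer = program[start:start + buffer_size]   # transcribe a batch
        for op, arg in run_buffer:                        # execute the batch
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
            trace.append((op, arg, acc))                  # extract execution info
    return acc, trace
```

Batching amortizes the per-dispatch overhead that makes conventional single-step tracing slow, which is the performance point the abstract makes.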
  • Publication number: 20180060212
    Abstract: High-performance tracing can be achieved for an input program having a plurality of instructions. Techniques such as executable instruction transcription can enable execution of a plurality of instructions at a time via a run buffer. Execution information can be extracted via run buffer execution. Fidelity of execution can be preserved by executing instructions on the target processor. Other features, such as an executable extraction instruction ensemble, branch interpretation, and relative address compensation can be implemented. High quality instruction tracing can thus be achieved without the usual performance penalties.
    Type: Application
    Filed: August 26, 2016
    Publication date: March 1, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jay Krell, HoYuen Chau, Allan James Murphy, Danny Chen, Steven Pratschner, Hoi Huu Vo
  • Patent number: 9471348
    Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Upon consumption, packets are held until sufficient tokens are reloaded to the token-bucket queues.
    Type: Grant
    Filed: July 1, 2013
    Date of Patent: October 18, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
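The token-bucket queue described in the abstract is a standard traffic-shaping structure; a minimal sketch (illustrative only, not the patented scheduler) looks like this:

```python
class TokenBucket:
    """Minimal token bucket: tokens are deposited at a fixed rate up to a
    capacity; a packet passes only if enough tokens are available, and is
    otherwise held until tokens are reloaded."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens deposited per second
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity
        self.last = 0.0             # timestamp of the last check

    def allow(self, now, packet_size):
        # Reload tokens accrued since the last check, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size   # tokens consumed as the packet passes
            return True
        return False                     # packet held until tokens reload
```

A per-VM scheduler would then assign one bucket per VM and set `rate`/`capacity` from that tenant's QoS policy.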
  • Publication number: 20130298123
    Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Upon consumption, packets are held until sufficient tokens are reloaded to the token-bucket queues.
    Type: Application
    Filed: July 1, 2013
    Publication date: November 7, 2013
    Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
  • Patent number: 8477610
    Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Upon consumption, packets are held until sufficient tokens are reloaded to the token-bucket queues.
    Type: Grant
    Filed: May 31, 2010
    Date of Patent: July 2, 2013
    Assignee: Microsoft Corporation
    Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
  • Publication number: 20110292792
    Abstract: Computerized methods, systems, and computer-storage media for allowing virtual machines (VMs) residing on a common physical node to fairly share network bandwidth are provided. Restrictions on resource consumption are implemented to ameliorate stressing the network bandwidth or adversely affecting the quality of service (QoS) guaranteed to tenants of the physical node. The restrictions involve providing a scheduler that dynamically controls networking bandwidth allocated to each of the VMs as a function of QoS policies. These QoS policies are enforced by controlling a volume of traffic being sent from the VMs. Controlling traffic includes depositing tokens into token-bucket queues assigned to the VMs, respectively. The tokens are consumed as packets pass through the token-bucket queues. Upon consumption, packets are held until sufficient tokens are reloaded to the token-bucket queues.
    Type: Application
    Filed: May 31, 2010
    Publication date: December 1, 2011
    Applicant: Microsoft Corporation
    Inventors: Yue Zuo, HoYuen Chau, Hoi Huu Vo, Samer N. Arafeh, Vivek P. Divakara, Yimin Deng, Forrest Curtis Foltz, Vivek Bhanu
  • Patent number: 7930705
    Abstract: An application compatibility module is disclosed that provides compatibility between legacy binary system modules (“legacy binaries”) and a native operating system. The application compatibility module therefore allows legacy applications to execute within the native operating system, while still using their corresponding legacy binaries. The application compatibility module may provide compatibility between legacy binaries and the native operating system by translating communications between the legacy binaries and the native operating system.
    Type: Grant
    Filed: April 6, 2007
    Date of Patent: April 19, 2011
    Assignee: Microsoft Corporation
    Inventors: Hoi Huu Vo, Samer N. Arafeh
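The translation idea in the abstract can be sketched as a small shim table (purely hypothetical names, not the patented module): each legacy entry point is mapped to a native function plus an argument translator, so a legacy binary's calls are rewritten into native-OS calls.

```python
# Hypothetical native API surface (stands in for the native operating system).
NATIVE_API = {
    "open_file": lambda path, mode: f"native_open({path!r}, mode={mode!r})",
}

# Compatibility table: legacy entry point -> (native name, argument translator).
LEGACY_TO_NATIVE = {
    "OpenFileA": ("open_file", lambda args: (args[0], "r")),
}

def legacy_call(name, *args):
    """Translate a legacy binary's call into the equivalent native call."""
    native_name, translate = LEGACY_TO_NATIVE[name]
    return NATIVE_API[native_name](*translate(args))
```

The point of the table-driven design is that legacy binaries run unmodified; only the shim knows both calling conventions.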
  • Patent number: 7398276
    Abstract: Compression and decompression of data such as a sequential list of executable instructions (e.g., program binaries) by uniformly applying a predictive model generated from one segment of the executable list as a common predictive starting point for the other segments of the executable list. This permits random access and decompression of any segment of the executable list once a first segment (or another reference segment) of the executable list has been decompressed. This means that when executing an executable list (e.g., an executable file), a particular segment(s) of the executable list may not need to be accessed and decompressed at all if there are no instructions in that particular segment(s) that are executed.
    Type: Grant
    Filed: May 30, 2002
    Date of Patent: July 8, 2008
    Assignee: Microsoft Corporation
    Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
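The random-access property described above can be approximated with `zlib` preset dictionaries (an analogy, not the patented method): the first segment serves as the shared predictive context, and any other segment can then be decompressed independently of the rest once that reference segment is available.

```python
import zlib

def compress_segments(segments):
    """Compress each segment against a shared context built from the first
    segment, so segments can later be decompressed in any order."""
    shared = segments[0]
    blobs = [zlib.compress(shared)]
    for seg in segments[1:]:
        c = zlib.compressobj(zdict=shared)   # common predictive starting point
        blobs.append(c.compress(seg) + c.flush())
    return blobs

def decompress_segment(blobs, i):
    """Random-access decompression: only the reference segment is needed."""
    shared = zlib.decompress(blobs[0])
    if i == 0:
        return shared
    d = zlib.decompressobj(zdict=shared)
    return d.decompress(blobs[i]) + d.flush()
```

This mirrors the abstract's benefit: segments whose instructions are never executed never need to be decompressed at all.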
  • Publication number: 20080034377
    Abstract: An application compatibility module is disclosed that provides compatibility between legacy binary system modules (“legacy binaries”) and a native operating system. The application compatibility module therefore allows legacy applications to execute within the native operating system, while still using their corresponding legacy binaries. The application compatibility module may provide compatibility between legacy binaries and the native operating system by translating communications between the legacy binaries and the native operating system.
    Type: Application
    Filed: April 6, 2007
    Publication date: February 7, 2008
    Applicant: Microsoft Corporation
    Inventors: Hoi Huu Vo, Samer N. Arafeh
  • Patent number: 7305541
    Abstract: Compressing program binaries with reduced compression ratios. One or several pre-processing acts are performed before performing compression using a local sequential correlation oriented compression technology such as PPM, or one of its variants or improvements. One pre-processing act splits the binaries into several substreams that have high local sequential correlation. Such splitting takes into consideration the correlation between common fields in different instructions as well as the correlation between different fields in the same instruction. Another pre-processing act reschedules binary instructions to improve the degree of local sequential correlation without affecting dependencies between instructions. Yet another pre-processing act replaces common operation codes in the instruction with symbols from a second alphabet, thereby distinguishing between operation codes that have a particular value, and other portions of the instruction that just happen to have the same value.
    Type: Grant
    Filed: March 21, 2005
    Date of Patent: December 4, 2007
    Assignee: Microsoft Corporation
    Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
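The stream-splitting pre-processing act can be sketched for a hypothetical fixed-format instruction set (illustrative only): grouping like fields across instructions yields substreams with higher local sequential correlation than the interleaved original.

```python
def split_streams(instructions):
    """Split fixed-format (opcode, register, immediate) instructions into
    three substreams, one per field, so each substream groups correlated
    values for the downstream PPM-style compressor."""
    opcodes, regs, imms = [], [], []
    for op, reg, imm in instructions:
        opcodes.append(op)
        regs.append(reg)
        imms.append(imm)
    return opcodes, regs, imms
```

Re-interleaving the substreams with `zip` recovers the original instruction sequence, so the transform is lossless.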
  • Patent number: 6907516
    Abstract: Compressing program binaries with reduced compression ratios. One or several pre-processing acts are performed before performing compression using a local sequential correlation oriented compression technology such as PPM, or one of its variants or improvements. One pre-processing act splits the binaries into several substreams that have high local sequential correlation. Such splitting takes into consideration the correlation between common fields in different instructions as well as the correlation between different fields in the same instruction. Another pre-processing act reschedules binary instructions to improve the degree of local sequential correlation without affecting dependencies between instructions. Yet another pre-processing act replaces common operation codes in the instruction with symbols from a second alphabet, thereby distinguishing between operation codes that have a particular value, and other portions of the instruction that just happen to have the same value.
    Type: Grant
    Filed: May 30, 2002
    Date of Patent: June 14, 2005
    Assignee: Microsoft Corporation
    Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
  • Publication number: 20030225997
    Abstract: Compressing program binaries with reduced compression ratios. One or several pre-processing acts are performed before performing compression using a local sequential correlation oriented compression technology such as PPM, or one of its variants or improvements. One pre-processing act splits the binaries into several substreams that have high local sequential correlation. Such splitting takes into consideration the correlation between common fields in different instructions as well as the correlation between different fields in the same instruction. Another pre-processing act reschedules binary instructions to improve the degree of local sequential correlation without affecting dependencies between instructions. Yet another pre-processing act replaces common operation codes in the instruction with symbols from a second alphabet, thereby distinguishing between operation codes that have a particular value, and other portions of the instruction that just happen to have the same value.
    Type: Application
    Filed: May 30, 2002
    Publication date: December 4, 2003
    Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo
  • Publication number: 20030225775
    Abstract: Compression and decompression of data such as a sequential list of executable instructions (e.g., program binaries) by uniformly applying a predictive model generated from one segment of the executable list as a common predictive starting point for the other segments of the executable list. This permits random access and decompression of any segment of the executable list once a first segment (or another reference segment) of the executable list has been decompressed. This means that when executing an executable list (e.g., an executable file), a particular segment(s) of the executable list may not need to be accessed and decompressed at all if there are no instructions in that particular segment(s) that are executed.
    Type: Application
    Filed: May 30, 2002
    Publication date: December 4, 2003
    Inventors: Darko Kirovski, Milenko Drinic, Hoi Huu Vo