Patents by Inventor David Alexander Butler
David Alexander Butler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10409381
Abstract: Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty or engagement of users through signaling and/or information gathering can occur.
Type: Grant
Filed: August 10, 2015
Date of Patent: September 10, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Meredith June Morris, Eric J. Horvitz, Andrew David Wilson, F. David Jones, Stephen E. Hodges, Kenneth P. Hinckley, David Alexander Butler, Ian M. Sands, V. Kevin Russ, Hrvoje Benko, Shawn R. LeProwse, Shahram Izadi, William Ben Kunz
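The abstract above describes a behavior, not an implementation. The minimal Python sketch below illustrates only the uncertainty-handling idea, assuming hypothetical names (Candidate, interpret, margin) and a simple score margin as the uncertainty test; it is not the patented method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    action: str    # action the gesture might map to
    score: float   # inferred likelihood given context, history and affordances

def interpret(candidates, margin=0.2):
    """Act on the best interpretation only if it clearly beats the runner-up;
    otherwise report uncertainty so the system can engage the user."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    if runner_up is None or best.score - runner_up.score >= margin:
        return ("act", best.action)
    return ("clarify", [c.action for c in ranked[:2]])   # prompt the user

print(interpret([Candidate("rotate", 0.55), Candidate("scale", 0.50)]))
# -> ('clarify', ['rotate', 'scale'])
```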
-
Patent number: 10234941
Abstract: A wearable sensor for tracking articulated body parts is described such as a wrist-worn device which enables 3D tracking of fingers and optionally also the arm and hand without the need to wear a glove or markers on the hand. In an embodiment a camera captures images of an articulated part of a body of a wearer of the device and an articulated model of the body part is tracked in real time to enable gesture-based control of a separate computing device such as a smart phone, laptop computer or other computing device. In examples the device has a structured illumination source and a diffuse illumination source for illuminating the articulated body part.
Type: Grant
Filed: October 4, 2012
Date of Patent: March 19, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Kim, Shahram Izadi, Otmar Hilliges, David Alexander Butler, Stephen Hodges, Patrick Luke Olivier, Jiawen Chen, Iason Oikonomidis
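As an illustration of model-based tracking of an articulated body part, here is a minimal Python sketch that fits two joint angles of a toy planar finger to an observed fingertip position by brute-force search. The two-segment model, lengths and function names are assumptions made for the example; the patent does not disclose this code.

```python
import math

def fingertip(theta1, theta2, l1=0.04, l2=0.03):
    """Forward kinematics of a two-segment planar finger (lengths in metres)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def fit_angles(observed, steps=90):
    """Brute-force search for joint angles whose predicted fingertip best
    matches a position extracted from the wrist-camera image."""
    best, best_err = (0.0, 0.0), float("inf")
    for i in range(steps):
        for j in range(steps):
            t1, t2 = i * math.pi / (2 * steps), j * math.pi / (2 * steps)
            x, y = fingertip(t1, t2)
            err = (x - observed[0]) ** 2 + (y - observed[1]) ** 2
            if err < best_err:
                best, best_err = (t1, t2), err
    return best

print(fit_angles(fingertip(0.3, 0.6)))  # recovers approximately (0.3, 0.6)
```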
-
Patent number: 10049458
Abstract: Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
Type: Grant
Filed: January 20, 2016
Date of Patent: August 14, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
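One of the interference-reduction methods listed is cycling between the different sources. A hedged Python sketch of such a round-robin controller follows; FakeSource and the enable/disable hooks are hypothetical stand-ins for real hardware drivers.

```python
import itertools
import time

class SourceCycler:
    """Round-robin controller: only one structured-light source emits per time
    slot, so overlapping projections never coincide in time."""
    def __init__(self, sources, slot_seconds=1 / 30):
        self.sources = sources
        self.slot_seconds = slot_seconds

    def run(self, n_slots):
        for source in itertools.islice(itertools.cycle(self.sources), n_slots):
            for other in self.sources:
                other.disable()
            source.enable()                # only the active source projects
            time.sleep(self.slot_seconds)  # the paired camera captures now

class FakeSource:
    def __init__(self, name): self.name = name
    def enable(self): print(f"{self.name} on")
    def disable(self): pass

SourceCycler([FakeSource("A"), FakeSource("B")]).run(4)
```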
-
Patent number: 9891704
Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
Type: Grant
Filed: December 26, 2016
Date of Patent: February 13, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
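To illustrate the tracking-to-virtual-movement step described above, here is a small Python sketch that maps a tracked camera-space point into a virtual environment using an assumed rigid transform and reports the corresponding movement. The Scene class, the proxy object and the transform values are illustrative assumptions, not the patented system.

```python
import numpy as np

def to_virtual(p_camera, R, t, scale=1.0):
    """Map a tracked 3-D point from camera coordinates into the virtual
    environment using a rigid transform plus an optional scale."""
    return scale * (R @ np.asarray(p_camera, dtype=float)) + t

class Scene:
    def __init__(self):
        self.proxy = np.zeros(3)   # virtual representation of the real object

    def update(self, p_camera, R, t):
        new_pos = to_virtual(p_camera, R, t)
        delta = new_pos - self.proxy   # corresponding movement in the scene
        self.proxy = new_pos
        return delta                   # feed this to collision/physics handling

scene = Scene()
R, t = np.eye(3), np.array([0.0, 0.0, -0.5])
print(scene.update([0.1, 0.0, 0.4], R, t))
```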
-
Publication number: 20180003990
Abstract: An image orientation system is provided wherein images (rays of light) are projected to a user based on the user's field of view or viewing angle. As the rays of light are projected, streams of air can be produced that bend or focus the rays of light toward the user's field of view. The streams of air can be cold air, hot air, or combinations thereof. Further, an image receiver can be utilized to receive the produced image/rays of light directly in line with the user's field of view. The image receiver can be a wearable device, such as a head mounted display.
Type: Application
Filed: May 6, 2013
Publication date: January 4, 2018
Inventors: Steven N. Bathiche, Hrvoje Benko, Stephen E. Hodges, Shahram Izadi, David Alexander Butler, William Ben Kunz, Shawn R. LeProwse
-
Patent number: 9851809
Abstract: User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates.
Type: Grant
Filed: March 14, 2016
Date of Patent: December 26, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Harper LaFave, Stephen Hodges, James Scott, Shahram Izadi, David Molyneaux, Nicolas Villar, David Alexander Butler, Mike Hazas
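The abstract spells out a pipeline: key-presses to physical locations, locations to a movement path, path to user-interface coordinates. A small Python sketch of that pipeline is shown below; the key layout, screen size and scaling are assumed values chosen for illustration, not the patented mapping.

```python
# Hypothetical key layout: physical (x, y) position of each key, in key units.
KEY_POS = {k: (x, y)
           for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
           for x, k in enumerate(row)}

def movement_path(key_presses):
    """Turn a sequence of key-presses into a path of physical keyboard locations."""
    return [KEY_POS[k] for k in key_presses if k in KEY_POS]

def to_ui_coords(path, ui_width=1920, ui_height=1080, kb_cols=10, kb_rows=3):
    """Scale the keyboard path to user-interface coordinates."""
    return [(x * ui_width / (kb_cols - 1), y * ui_height / (kb_rows - 1))
            for x, y in path]

# Dragging a finger along the home row moves the object left to right.
print(to_ui_coords(movement_path("asdf")))
```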
-
Patent number: 9703398
Abstract: A pointing device using proximity sensing is described. In an embodiment, a pointing device comprises a movement sensor and a proximity sensor. The movement sensor generates a first data sequence relating to sensed movement of the pointing device relative to a surface. The proximity sensor generates a second data sequence relating to sensed movement relative to the pointing device of one or more objects in proximity to the pointing device. In embodiments, data from the movement sensor of the pointing device is read and the movement of the pointing device relative to the surface is determined. Data from the proximity sensor is also read, and a sequence of sensor images of one or more objects in proximity to the pointing device is generated. The sensor images are analyzed to determine the movement of the one or more objects relative to the pointing device.
Type: Grant
Filed: June 16, 2009
Date of Patent: July 11, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Alexander Butler, Nicolas Villar, John Helmes, Shahram Izadi, Stephen E. Hodges, Daniel Rosenfeld, Hrvoje Benko
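One simple way to turn successive proximity-sensor images into relative object movement is centroid tracking. The Python sketch below, with a made-up image size and threshold, shows that idea only; the patent does not specify this particular analysis.

```python
import numpy as np

def centroid(image, threshold=0.5):
    """Centroid of the bright pixels in a proximity-sensor image."""
    ys, xs = np.nonzero(image > threshold)
    if xs.size == 0:
        return None
    return np.array([xs.mean(), ys.mean()])

def object_motion(prev_image, curr_image):
    """Movement of a nearby object relative to the device between two frames."""
    a, b = centroid(prev_image), centroid(curr_image)
    return None if a is None or b is None else b - a

prev = np.zeros((8, 8)); prev[3, 2] = 1.0
curr = np.zeros((8, 8)); curr[3, 5] = 1.0
print(object_motion(prev, curr))   # object moved ~3 pixels in x
```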
-
Patent number: 9636871
Abstract: 3D printing may be optimized by segmenting input jobs and/or combining parts of input jobs together. In an embodiment, a user-defined metric is received associated with each input job and this is used in scheduling input jobs to optimize latency and/or throughput of the 3D printing process, along with the printing envelope and other characteristics of the 3D printers used. In various embodiments, the scheduling may comprise dividing a 3D object into a number of parts and then scheduling these parts separately and/or combining 3D objects, or parts of 3D objects, from various input jobs to be printed at the same time on the same 3D printer. In various embodiments, the scheduling is repeated when a new input job is received and changes made during printing. In various embodiments, a user may submit an updated version of an input job which is already in the process of being printed.
Type: Grant
Filed: August 21, 2013
Date of Patent: May 2, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Alexander Butler, Nicolas Villar, James W. Scott, Stephen E. Hodges
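The scheduling described above combines parts from different jobs so they fit a printer's build envelope. As a rough illustration only, the Python sketch below packs parts by a single volume number using first-fit decreasing; a real scheduler would consider part geometry, the user-defined metric and printer characteristics.

```python
from dataclasses import dataclass

@dataclass
class Part:
    job_id: str
    volume: float   # stand-in for footprint; a real packer would use geometry

def batch_parts(parts, envelope_volume):
    """First-fit-decreasing grouping of parts from different jobs into print
    batches that respect a single printer's build envelope."""
    batches = []   # each batch is [remaining_volume, [parts]]
    for part in sorted(parts, key=lambda p: p.volume, reverse=True):
        for batch in batches:
            if batch[0] >= part.volume:
                batch[1].append(part)
                batch[0] -= part.volume
                break
        else:
            batches.append([envelope_volume - part.volume, [part]])
    return [b[1] for b in batches]

parts = [Part("job1", 60), Part("job1", 30), Part("job2", 50), Part("job3", 20)]
for i, b in enumerate(batch_parts(parts, envelope_volume=100)):
    print(i, [(p.job_id, p.volume) for p in b])
```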
-
Publication number: 20170109938
Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
Type: Application
Filed: December 26, 2016
Publication date: April 20, 2017
Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
-
Patent number: 9529424
Abstract: Augmented reality with direct user interaction is described. In one example, an augmented reality system comprises a user-interaction region, a camera that captures images of an object in the user-interaction region, and a partially transparent display device which combines a virtual environment with a view of the user-interaction region, so that both are visible at the same time to a user. A processor receives the images, tracks the object's movement, calculates a corresponding movement within the virtual environment, and updates the virtual environment based on the corresponding movement. In another example, a method of direct interaction in an augmented reality system comprises generating a virtual representation of the object having the corresponding movement, and updating the virtual environment so that the virtual representation interacts with virtual objects in the virtual environment. From the user's perspective, the object directly interacts with the virtual objects.
Type: Grant
Filed: November 5, 2010
Date of Patent: December 27, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Otmar Hilliges, David Kim, Shahram Izadi, David Molyneaux, Stephen Edward Hodges, David Alexander Butler
-
Patent number: 9513798
Abstract: Indirect multi-touch interaction is described. In an embodiment, a user interface is controlled using a cursor and a touch region comprising a representation of one or more digits of a user. The cursor and the touch region are moved together in the user interface in accordance with data received from a cursor control device, such that the relative location of the touch region and the cursor is maintained. The representations of the digits of the user are moved in the touch region in accordance with data describing movement of the user's digits. In another embodiment, a user interface is controlled in a first mode of operation using an aggregate cursor, and switched to a second mode of operation in which the aggregate cursor is divided into separate portions, each of which can be independently controlled by the user.
Type: Grant
Filed: October 1, 2009
Date of Patent: December 6, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Helmes, Nicolas Villar, Hrvoje Benko, Shahram Izadi, Daniel Rosenfeld, Stephen Hodges, David Alexander Butler, Xiang Cao, Richard Banks
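A minimal Python sketch of the indirect multi-touch model follows: the touch region keeps a fixed offset from the cursor, and digit representations move inside it. The class and field names are invented for the example and are not taken from the patent.

```python
class IndirectTouch:
    """Cursor and touch region move together from cursor-device deltas; digit
    representations move inside the region from touch-sensor deltas."""
    def __init__(self, cursor=(0.0, 0.0), region_offset=(40.0, 40.0)):
        self.cursor = list(cursor)
        self.region_offset = region_offset   # region stays fixed relative to cursor
        self.digits = {}                     # digit id -> position inside region

    def move_cursor(self, dx, dy):
        self.cursor[0] += dx
        self.cursor[1] += dy

    def move_digit(self, digit_id, dx, dy):
        x, y = self.digits.get(digit_id, (0.0, 0.0))
        self.digits[digit_id] = (x + dx, y + dy)

    def digit_screen_positions(self):
        ox, oy = self.region_offset
        return {d: (self.cursor[0] + ox + x, self.cursor[1] + oy + y)
                for d, (x, y) in self.digits.items()}

it = IndirectTouch()
it.move_cursor(100, 50)
it.move_digit("index", 5, -3)
print(it.digit_screen_positions())   # {'index': (145.0, 87.0)}
```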
-
Publication number: 20160195936
Abstract: User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates.
Type: Application
Filed: March 14, 2016
Publication date: July 7, 2016
Inventors: Harper LaFave, Stephen Hodges, James Scott, Shahram Izadi, David Molyneaux, Nicolas Villar, David Alexander Butler, Mike Hazas
-
Publication number: 20160163054
Abstract: Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
Type: Application
Filed: January 20, 2016
Publication date: June 9, 2016
Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
-
Patent number: 9292194
Abstract: User interface control using a keyboard is described. In an embodiment, a user interface displayed on a display device is controlled using a computer connected to a keyboard. The keyboard has a plurality of alphanumeric keys that can be used for text entry. The computer receives data comprising a sequence of key-presses from the keyboard, and generates for each key-press a physical location on the keyboard. The relative physical locations of the key-presses are compared to calculate a movement path over the keyboard. The movement path describes the path of a user's digit over the keyboard. The movement path is mapped to a sequence of coordinates in the user interface, and the movement of an object displayed in the user interface is controlled in accordance with the sequence of coordinates.
Type: Grant
Filed: June 24, 2013
Date of Patent: March 22, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Harper LaFave, Stephen Hodges, James Scott, Shahram Izadi, David Molyneaux, Nicolas Villar, David Alexander Butler, Mike Hazas
-
Patent number: 9247238
Abstract: Systems and methods for reducing interference between multiple infra-red depth cameras are described. In an embodiment, the system comprises multiple infra-red sources, each of which projects a structured light pattern into the environment. A controller is used to control the sources in order to reduce the interference caused by overlapping light patterns. Various methods are described including: cycling between the different sources, where the cycle used may be fixed or may change dynamically based on the scene detected using the cameras; setting the wavelength of each source so that overlapping patterns are at different wavelengths; moving source-camera pairs in independent motion patterns; and adjusting the shape of the projected light patterns to minimize overlap. These methods may also be combined in any way. In another embodiment, the system comprises a single source and a mirror system is used to cast the projected structured light pattern around the environment.
Type: Grant
Filed: January 31, 2011
Date of Patent: January 26, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Stephen Edward Hodges, David Alexander Butler, Andrew Fitzgibbon, Pushmeet Kohli
-
Patent number: 9242171
Abstract: Real-time camera tracking using depth maps is described. In an embodiment depth map frames are captured by a mobile depth camera at over 20 frames per second and used to dynamically update in real-time a set of registration parameters which specify how the mobile depth camera has moved. In examples the real-time camera tracking output is used for computer game applications and robotics. In an example, an iterative closest point process is used with projective data association and a point-to-plane error metric in order to compute the updated registration parameters. In an example, a graphics processing unit (GPU) implementation is used to optimize the error metric in real-time. In some embodiments, a dense 3D model of the mobile camera environment is used.
Type: Grant
Filed: February 23, 2013
Date of Patent: January 26, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Richard Newcombe, Shahram Izadi, David Molyneaux, Otmar Hilliges, David Kim, Jamie Daniel Joseph Shotton, Pushmeet Kohli, Andrew Fitzgibbon, Stephen Edward Hodges, David Alexander Butler
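The abstract names an iterative closest point process with a point-to-plane error metric. Below is a hedged Python/NumPy sketch of one Gauss-Newton step of the standard point-to-plane formulation with small-angle linearisation, assuming correspondences have already been found (for example by projective data association); it is a textbook version for illustration, not the patented GPU implementation.

```python
import numpy as np

def point_to_plane_step(src_pts, dst_pts, dst_normals):
    """One Gauss-Newton step of point-to-plane ICP (small-angle linearisation).
    src_pts: Nx3 points from the new depth frame, already paired with
    dst_pts/dst_normals from the model."""
    A = np.zeros((len(src_pts), 6))
    b = np.zeros(len(src_pts))
    for i, (p, q, n) in enumerate(zip(src_pts, dst_pts, dst_normals)):
        A[i, :3] = np.cross(p, n)   # rotation part (alpha, beta, gamma)
        A[i, 3:] = n                # translation part
        b[i] = np.dot(n, q - p)     # signed point-to-plane distance
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    alpha, beta, gamma, tx, ty, tz = x
    R = np.array([[1, -gamma, beta],
                  [gamma, 1, -alpha],
                  [-beta, alpha, 1]])   # linearised rotation
    return R, np.array([tx, ty, tz])

# Toy check: the source cloud is the destination shifted by -0.1 in z.
dst = np.random.rand(100, 3)
normals = np.tile([0.0, 0.0, 1.0], (100, 1))
src = dst - np.array([0.0, 0.0, 0.1])
R, t = point_to_plane_step(src, dst, normals)
print(np.round(t, 3))   # ~[0, 0, 0.1]
```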
-
Publication number: 20150346837
Abstract: Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty or engagement of users through signaling and/or information gathering can occur.
Type: Application
Filed: August 10, 2015
Publication date: December 3, 2015
Inventors: Meredith June Morris, Eric J. Horvitz, Andrew David Wilson, F. David Jones, Stephen E. Hodges, Kenneth P. Hinckley, David Alexander Butler, Ian M. Sands, V. Kevin Russ, Hrvoje Benko, Shawn R. LeProwse, Shahram Izadi, William Ben Kunz
-
Patent number: 9134798
Abstract: Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty or engagement of users through signaling and/or information gathering can occur.
Type: Grant
Filed: December 15, 2008
Date of Patent: September 15, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Meredith June Morris, Eric J. Horvitz, Andrew David Wilson, F. David Jones, Stephen E. Hodges, Kenneth P. Hinckley, David Alexander Butler, Ian M. Sands, V. Kevin Russ, Hrvoje Benko, Shawn R. LeProwse, Shahram Izadi, William Ben Kunz
-
Publication number: 20150057784
Abstract: 3D printing may be optimized by segmenting input jobs and/or combining parts of input jobs together. In an embodiment, a user-defined metric is received associated with each input job and this is used in scheduling input jobs to optimize latency and/or throughput of the 3D printing process, along with the printing envelope and other characteristics of the 3D printers used. In various embodiments, the scheduling may comprise dividing a 3D object into a number of parts and then scheduling these parts separately and/or combining 3D objects, or parts of 3D objects, from various input jobs to be printed at the same time on the same 3D printer. In various embodiments, the scheduling is repeated when a new input job is received and changes made during printing. In various embodiments, a user may submit an updated version of an input job which is already in the process of being printed.
Type: Application
Filed: August 21, 2013
Publication date: February 26, 2015
Applicant: Microsoft Corporation
Inventors: David Alexander Butler, Nicolas Villar, James W. Scott, Stephen E. Hodges
-
Patent number: 8948184
Abstract: A modular development platform is described which enables creation of reliable, compact, physically robust and power efficient embedded device prototypes. The platform consists of a base module which holds a processor and one or more peripheral modules each having an interface element. The base module and the peripheral modules may be electrically and/or physically connected together. The base module communicates with peripheral modules using packets of data with an addressing portion which identifies the peripheral module that is the intended recipient of the data packet.
Type: Grant
Filed: April 4, 2012
Date of Patent: February 3, 2015
Assignee: Microsoft Corporation
Inventors: Stephen E. Hodges, David Alexander Butler, Shahram Izadi, Chih-Chieh Han
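To illustrate addressed packet dispatch from a base module to peripheral modules, here is a small Python sketch that assumes a one-byte address plus one-byte length header; the actual framing used by the platform is not given in the abstract, so this is illustrative only.

```python
import struct

HEADER = struct.Struct(">BB")   # (address, payload length) -- hypothetical framing

def encode(address: int, payload: bytes) -> bytes:
    """Frame a data packet with an addressing portion naming the target module."""
    return HEADER.pack(address, len(payload)) + payload

class Bus:
    """Base-module side: dispatch packets to the peripheral the address names."""
    def __init__(self):
        self.peripherals = {}   # address -> handler callable

    def register(self, address, handler):
        self.peripherals[address] = handler

    def deliver(self, packet: bytes):
        address, length = HEADER.unpack_from(packet)
        payload = packet[HEADER.size:HEADER.size + length]
        handler = self.peripherals.get(address)
        if handler:                          # ignore packets for absent modules
            handler(payload)

bus = Bus()
bus.register(0x02, lambda data: print("LED module got", data))
bus.deliver(encode(0x02, b"\x01"))   # only the module at address 0x02 responds
```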