Patents by Inventor Anthony James Ambrus
Anthony James Ambrus has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11960790
Abstract: A computer implemented method includes detecting user interaction with mixed reality displayed content in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine objects in the displayed content at which the user is looking and determining a context of a user dialog during the voice engagement.
Type: Grant
Filed: May 27, 2021
Date of Patent: April 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin S. Lee, Jonathan Kyle Palmer, Anthony James Ambrus, Mathew J. Lamb, Sheng Kai Tang, Sophie Stellmach
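The core idea in this abstract, lengthening a voice-engagement window when gaze suggests the user is still attending to the system, can be sketched as follows. This is a hypothetical Python illustration; the function name, thresholds, and timeout values are invented and not taken from the patent.

```python
BASE_TIMEOUT_S = 5.0      # default listening window after the last utterance
FOCUSED_TIMEOUT_S = 15.0  # extended window while the user appears engaged

def engagement_timeout(focus_score: float, threshold: float = 0.6) -> float:
    """Return how long (in seconds) to keep voice engagement open.

    focus_score: a 0..1 estimate that the user is attending to the
    system's displayed content, e.g. produced by a spatial intent model
    over recent eye-tracking samples.
    """
    return FOCUSED_TIMEOUT_S if focus_score >= threshold else BASE_TIMEOUT_S
```

A dialog manager would recompute this timeout after each utterance, so a user who keeps looking at the relevant content is not forced to repeat a wake word.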
-
Patent number: 11824821
Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions and other status information and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s) and/or corresponding messaging device(s).
Type: Grant
Filed: November 1, 2022
Date of Patent: November 21, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin Seungmin Lee, Amy Mun Hong, Keiichi Matsuda, Anthony James Ambrus, Mathew Julian Lamb, Kenneth Mitchell Jakubzak
-
Patent number: 11656679
Abstract: Examples are disclosed that relate to image reprojection. One example provides a method, comprising receiving a first rendered image comprising content associated with a viewer reference frame, receiving a second rendered image comprising content associated with a manipulator reference frame, and reprojecting the first rendered image based on a head pose of a user to thereby produce a first reprojected image. The method further comprises reprojecting the second rendered image based on the head pose of the user and a pose of the manipulator to thereby produce a second reprojected image, and outputting the first reprojected image and the second reprojected image for display as a composited image.
Type: Grant
Filed: August 27, 2020
Date of Patent: May 23, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Benjamin Markus Thaut, Anthony James Ambrus
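The two-layer reprojection this abstract describes can be illustrated with pose matrices: the viewer-locked layer is corrected for head motion that happened after rendering, while the manipulator-locked layer is corrected for both head motion and manipulator motion. This is a minimal sketch under simplified assumptions (4x4 translation-only poses, numpy); it is not the patent's implementation.

```python
import numpy as np

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous transform for a pure translation."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def reprojection_correction(render_pose, display_pose):
    """Transform that maps content rendered at render_pose onto display_pose."""
    return display_pose @ np.linalg.inv(render_pose)

# Viewer-locked layer: corrected by the late head pose only.
head_render = translation(0.0, 0.0, 0.0)
head_display = translation(0.01, 0.0, 0.0)   # head moved 1 cm after rendering
viewer_fix = reprojection_correction(head_render, head_display)

# Manipulator-locked layer: corrected by head pose AND manipulator pose.
manip_render = translation(0.50, 0.0, 0.0)
manip_display = translation(0.52, 0.0, 0.0)  # controller also moved 2 cm
manip_fix = viewer_fix @ reprojection_correction(manip_render, manip_display)
```

Each layer is warped by its own correction, then the two reprojected images are composited for display.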
-
Patent number: 11630509
Abstract: This disclosure relates to displaying a user interface for a computing device based upon a user intent determined via a spatial intent model. One example provides a computing device comprising a see-through display, a logic subsystem, and a storage subsystem. The storage subsystem comprises instructions executable by the logic subsystem to receive, via an eye-tracking sensor, eye tracking samples each corresponding to a gaze direction of a user; based at least on the eye tracking samples, determine a time-dependent attention value for a location in a field of view of the see-through display; based at least on the time-dependent attention value for the location, determine an intent of the user to interact with a user interface associated with the location that is at least partially hidden from a current view; and in response to determining the intent, display the user interface via the see-through display.
Type: Grant
Filed: December 11, 2020
Date of Patent: April 18, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin S. Lee, Anthony James Ambrus, Sheng Kai Tang, Keiichi Matsuda, Aleksandar Josic
-
Publication number: 20230111597
Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions and other status information and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s) and/or corresponding messaging device(s).
Type: Application
Filed: November 1, 2022
Publication date: April 13, 2023
Inventors: Austin Seungmin LEE, Amy Mun HONG, Keiichi MATSUDA, Anthony James AMBRUS, Mathew Julian LAMB, Kenneth Mitchell JAKUBZAK
-
Publication number: 20220382510
Abstract: A computer implemented method includes detecting user interaction with mixed reality displayed content in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine objects in the displayed content at which the user is looking and determining a context of a user dialog during the voice engagement.
Type: Application
Filed: May 27, 2021
Publication date: December 1, 2022
Inventors: Austin S. LEE, Jonathan Kyle PALMER, Anthony James AMBRUS, Mathew J. LAMB, Sheng Kai TANG, Sophie STELLMACH
-
Patent number: 11509612
Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions and other status information and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s) and/or corresponding messaging device(s).
Type: Grant
Filed: December 15, 2020
Date of Patent: November 22, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin Seungmin Lee, Amy Mun Hong, Keiichi Matsuda, Anthony James Ambrus, Mathew Julian Lamb, Kenneth Mitchell Jakubzak
-
Patent number: 11487467
Abstract: Techniques are disclosed for rapid writing to and reading from even very large amounts of data, especially where the data evolves slowly over time. For each of a sequence of commits of the data, the data is represented by identifying pages that have changed since the prior commit in the sequence. A sparse file is formulated for the commit; it contains each of the identified pages and, for each identified page, a mapping of that page to its page address in the address range. The sparse file is then stored as associated with the corresponding commit. Thus, an ordered sequence of sparse files can be created and layered on top of a base file that represents the entire page address range. The sparse files may be quite small, as relatively few pages (or perhaps even no pages) may have changed since the prior commit. Reads occur by creating a sparse in-memory object and checking for each page at each sparse file in turn.
Type: Grant
Filed: May 28, 2021
Date of Patent: November 1, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anthony James Ambrus, Logan James Buesching
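The layered sparse-commit scheme in this abstract can be sketched with plain dictionaries standing in for sparse files: each commit stores only changed pages, and a read falls through from the newest layer to the base. All names here are illustrative, not from the patent.

```python
def make_commit(prev_pages: dict, new_pages: dict) -> dict:
    """Return a sparse layer holding only the pages that changed,
    each keyed by its page address."""
    return {addr: data for addr, data in new_pages.items()
            if prev_pages.get(addr) != data}

def read_page(addr, layers, base):
    """Read a page by checking the newest sparse layer first, then
    older layers, finally falling back to the base file."""
    for layer in reversed(layers):
        if addr in layer:
            return layer[addr]
    return base.get(addr)

# Two pages in the base; only page 1 changes in the first commit,
# so its sparse layer holds a single page.
base = {0: b"aaaa", 1: b"bbbb"}
layer1 = make_commit(base, {0: b"aaaa", 1: b"BBBB"})
```

Because each layer records only the delta since the prior commit, slowly evolving data yields very small commit files even when the full address range is large.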
-
Patent number: 11429186
Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in a scene, to receive a plurality of eye tracking samples, each eye tracking sample corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, determine a time-dependent attention value for each entity of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
Type: Grant
Filed: November 18, 2020
Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin S. Lee, Mathew J. Lamb, Anthony James Ambrus, Amy Mun Hong, Jonathan Palmer, Sophie Stellmach
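A leaky integrator, the mechanism this abstract names, accumulates gaze evidence for an entity while continuously decaying it, so attention favors recently and repeatedly gazed-at entities. The sketch below is a generic illustration of that technique, with invented rates and entity names, not the patent's implementation.

```python
import math

def update_attention(prev: float, gazed: bool, dt: float,
                     tau: float = 0.5, gain: float = 1.0) -> float:
    """One leaky-integrator step: the attention value decays
    exponentially with time constant tau, and each gaze hit adds
    gain * dt of new evidence."""
    decayed = prev * math.exp(-dt / tau)
    return decayed + (gain * dt if gazed else 0.0)

# One second of 30 Hz gaze samples, all landing on the lamp.
attention = {"lamp": 0.0, "door": 0.0}
for _ in range(30):
    attention["lamp"] = update_attention(attention["lamp"], True, 1 / 30)
    attention["door"] = update_attention(attention["door"], False, 1 / 30)

# A location-dependent command ("turn that on") resolves to the
# entity with the highest attention value.
target = max(attention, key=attention.get)
```

The decay keeps stale fixations from dominating: if the user looks away, an entity's attention value fades on the tau timescale rather than persisting indefinitely.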
-
Publication number: 20220191157
Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions and other status information and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s) and/or corresponding messaging device(s).
Type: Application
Filed: December 15, 2020
Publication date: June 16, 2022
Inventors: Austin Seungmin LEE, Amy Mun HONG, Keiichi MATSUDA, Anthony James AMBRUS, Mathew Julian LAMB, Kenneth Mitchell JAKUBZAK
-
Publication number: 20220187907
Abstract: This disclosure relates to displaying a user interface for a computing device based upon a user intent determined via a spatial intent model. One example provides a computing device comprising a see-through display, a logic subsystem, and a storage subsystem. The storage subsystem comprises instructions executable by the logic subsystem to receive, via an eye-tracking sensor, eye tracking samples each corresponding to a gaze direction of a user; based at least on the eye tracking samples, determine a time-dependent attention value for a location in a field of view of the see-through display; based at least on the time-dependent attention value for the location, determine an intent of the user to interact with a user interface associated with the location that is at least partially hidden from a current view; and in response to determining the intent, display the user interface via the see-through display.
Type: Application
Filed: December 11, 2020
Publication date: June 16, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Austin S. LEE, Anthony James AMBRUS, Sheng Kai TANG, Keiichi MATSUDA, Aleksandar JOSIC
-
Publication number: 20220155857
Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in a scene, to receive a plurality of eye tracking samples, each eye tracking sample corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, determine a time-dependent attention value for each entity of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
Type: Application
Filed: November 18, 2020
Publication date: May 19, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Austin S. LEE, Mathew J. LAMB, Anthony James AMBRUS, Amy Mun HONG, Jonathan PALMER, Sophie STELLMACH
-
Patent number: 11270672
Abstract: Examples are disclosed herein relating to displaying a virtual assistant. One example provides an augmented reality display device comprising a see-through display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to display via the see-through display a virtual assistant associated with a location in a real-world environment, detect a change in a field of view of the see-through display, and when the virtual assistant is out of the field of view of the see-through display after the change in the field of view, display the virtual assistant in a virtual window on the see-through display.
Type: Grant
Filed: November 2, 2020
Date of Patent: March 8, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Austin S. Lee, Anthony James Ambrus, Mathew Julian Lamb, Sophie Stellmach, Keiichi Matsuda
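The fallback behavior in this abstract, keep the assistant world-anchored while visible and move it into a virtual window once the user looks away, reduces to a field-of-view test. The sketch below is a hypothetical illustration with an invented half-FOV value; it is not the device's actual logic.

```python
def in_field_of_view(yaw_to_assistant_deg: float,
                     half_fov_deg: float = 26.0) -> bool:
    """True if the assistant's world anchor lies within the display's
    horizontal field of view (angle measured from the view axis)."""
    return abs(yaw_to_assistant_deg) <= half_fov_deg

def assistant_placement(yaw_to_assistant_deg: float) -> str:
    """Decide where to draw the assistant after a field-of-view change."""
    if in_field_of_view(yaw_to_assistant_deg):
        return "world anchor"   # keep it at its real-world location
    return "virtual window"     # re-home it to an on-display window
```

A real system would re-evaluate this each frame as the head pose (and hence the yaw to the anchor) changes.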
-
Publication number: 20220066546
Abstract: Examples are disclosed that relate to image reprojection. One example provides a method, comprising receiving a first rendered image comprising content associated with a viewer reference frame, receiving a second rendered image comprising content associated with a manipulator reference frame, and reprojecting the first rendered image based on a head pose of a user to thereby produce a first reprojected image. The method further comprises reprojecting the second rendered image based on the head pose of the user and a pose of the manipulator to thereby produce a second reprojected image, and outputting the first reprojected image and the second reprojected image for display as a composited image.
Type: Application
Filed: August 27, 2020
Publication date: March 3, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Benjamin Markus THAUT, Anthony James AMBRUS
-
Patent number: 10769856
Abstract: In various embodiments, methods and systems for rendering augmented reality objects based on user heights are provided. Height data of a user of an augmented reality device can be determined. The height data relates to a viewing perspective from an eye level of the user. Placement data for a first augmented reality object is generated based on the user height data. The first augmented reality object is rendered based on the user height data, and a second augmented reality object is excluded from rendering based on the user height data.
Type: Grant
Filed: November 19, 2018
Date of Patent: September 8, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: James Gerard Dack, Jeffrey Alan Kohler, Shawn Crispin Wright, Anthony James Ambrus
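The render-or-exclude decision this abstract describes can be sketched as a filter over per-object eye-height constraints. The field names and thresholds below are invented for illustration; the patent does not specify this data layout.

```python
def objects_to_render(objects: list, user_eye_height_m: float) -> list:
    """Keep only objects whose eye-height constraints the user
    satisfies; the rest are excluded from rendering."""
    return [o for o in objects
            if o["min_eye_height_m"] <= user_eye_height_m
            <= o["max_eye_height_m"]]

# Hypothetical scene: a wall poster meant for standing adults and a
# floor hint meant for shorter (e.g. seated or child) eye levels.
scene = [
    {"name": "poster", "min_eye_height_m": 1.4, "max_eye_height_m": 2.0},
    {"name": "floor_hint", "min_eye_height_m": 0.8, "max_eye_height_m": 1.3},
]
visible = objects_to_render(scene, user_eye_height_m=1.6)
```

Each user thus sees a placement personalized to their own eye level, which matters in shared experiences where several users view the same scene.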
-
Patent number: 10620717
Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visual light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit, and receive the images of the environment from the at least two visual light cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track a motion of the input device in the environment based on the determined positions of the input device.
Type: Grant
Filed: June 30, 2016
Date of Patent: April 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
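Combining high-rate IMU motion data with lower-rate camera/map position fixes is a classic sensor-fusion problem. As a simple stand-in for whatever estimator the patent's positioning application uses, here is a complementary-filter blend; the weighting and names are illustrative assumptions, not the patented method.

```python
def fuse_position(imu_pos: tuple, camera_pos: tuple,
                  alpha: float = 0.98) -> tuple:
    """Blend a dead-reckoned IMU position with a camera/map position.

    The IMU estimate is trusted for short-term motion (high alpha),
    while the camera estimate slowly corrects the IMU's drift.
    """
    return tuple(alpha * i + (1 - alpha) * c
                 for i, c in zip(imu_pos, camera_pos))
```

Running this every frame keeps tracking responsive between camera updates while bounding the accumulated IMU drift.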
-
Publication number: 20190088029
Abstract: In various embodiments, methods and systems for rendering augmented reality objects based on user heights are provided. Height data of a user of an augmented reality device can be determined. The height data relates to a viewing perspective from an eye level of the user. Placement data for a first augmented reality object is generated based on the user height data. The first augmented reality object is rendered based on the user height data, and a second augmented reality object is excluded from rendering based on the user height data.
Type: Application
Filed: November 19, 2018
Publication date: March 21, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: James Gerard Dack, Jeffrey Alan Kohler, Shawn Crispin Wright, Anthony James Ambrus
-
Patent number: 10203781
Abstract: In various embodiments, methods and systems for implementing integrated free space and surface inputs are provided. An integrated free space and surface input system includes a mixed-input pointing device for interacting with and controlling interface objects using free space inputs and surface inputs, trigger buttons, pressure sensors, and haptic feedback associated with the mixed-input pointing device. Free space movement data and surface movement data are tracked and determined for the mixed-input pointing device. An interface input is detected for the mixed-input pointing device transitioning from a first input to a second input, such as from a free space input to a surface input or from the surface input to the free space input. The interface input is processed based on accessing the free space movement data and the surface movement data. An output for the interface input is communicated from the mixed-input pointing device to interact with and control an interface.
Type: Grant
Filed: June 24, 2016
Date of Patent: February 12, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anatolie Gavriliuc, Shawn Crispin Wright, Jeffrey Alan Kohler, Quentin Simon Charles Miller, Scott Francis Fullam, Sergio Paolantonio, Michael Edward Samples, Anthony James Ambrus
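The transition detection this abstract describes, noticing when the pointing device switches between free-space and surface input, could be driven by the device's pressure sensor. A minimal sketch, assuming a simple pressure threshold (the threshold and labels are invented):

```python
def input_mode(pressure: float, threshold: float = 0.05) -> str:
    """Classify the device's current mode from its pressure sensor:
    contact pressure above the threshold implies surface input."""
    return "surface" if pressure >= threshold else "free_space"

def detect_transition(prev_pressure: float, pressure: float):
    """Return a 'from->to' label when the mode changes between two
    samples, or None when the mode is unchanged."""
    prev, cur = input_mode(prev_pressure), input_mode(pressure)
    return None if prev == cur else f"{prev}->{cur}"
```

On a detected transition the system would switch which movement stream (free-space or surface) drives the interface, and could trigger haptic feedback to confirm the handoff.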
-
Patent number: 10134190
Abstract: In various embodiments, methods and systems for rendering augmented reality objects based on user heights are provided. Height data of a user of an augmented reality device can be determined. The height data relates to a viewing perspective from an eye level of the user. Placement data for an augmented reality object is generated based on a constraint configuration that is associated with the augmented reality object for user-height-based rendering. The constraint configuration includes rules that support generating placement data for rendering augmented reality objects based on the user height data. The augmented reality object is rendered based on the placement data. Augmented reality objects are rendered in a real world scene, such that the augmented reality object is personalized for each user during an augmented reality experience. In shared experiences, with multiple users viewing a single augmented reality object, the object can be rendered based on a particular user's height.
Type: Grant
Filed: June 14, 2016
Date of Patent: November 20, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: James Gerard Dack, Jeffrey Alan Kohler, Shawn Crispin Wright, Anthony James Ambrus
-
Patent number: 9928648
Abstract: In various embodiments, computerized methods and systems for identifying object paths to navigate objects in scene-aware device environments are provided. An object path identification mechanism supports identifying object paths. In operation, a guide path for navigating an object from the start point to the end point in a scene-aware device environment is identified. A guide path can be predefined or recorded in real time. A visibility check, such as a look-ahead operation, is performed based on the guide path. Based on performing the visibility check, a path segment to advance the object from the start point towards the end point is determined. The path segment can be optionally modified or refined based on several factors. The object is caused to advance along the path segment. Iteratively performing visibility checks and traversal actions moves the object from the start point to the end point. The path segments define the object path.
Type: Grant
Filed: November 9, 2015
Date of Patent: March 27, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anthony James Ambrus, Jeffrey Kohler
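The look-ahead step in this abstract, finding how far along the guide path the object can advance in one segment, can be sketched as scanning forward until a visibility check fails. The representation (points as numbers, visibility as a caller-supplied predicate) is an illustrative assumption, not the patent's data model.

```python
def advance_along_path(start, guide_path, visible):
    """Look-ahead visibility check: return the farthest guide-path
    point reachable from `start` in one path segment.

    visible(a, b) is a predicate reporting whether the object can
    travel directly from a to b (e.g. an unobstructed line of sight
    in the scanned scene).
    """
    segment_end = start
    for point in guide_path:
        if visible(start, point):
            segment_end = point   # keep extending the segment
        else:
            break                 # first blocked point ends the look-ahead
    return segment_end
```

Repeating this from each segment's endpoint until the end point is reached yields the sequence of path segments that defines the object path; each segment could also be refined (smoothed, offset from obstacles) before the object traverses it.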