Patents by Inventor Oscar E. Murillo

Oscar E. Murillo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11543242
    Abstract: A method of and system for visualizing a sound source is disclosed. The method may include analyzing an audio signal received by a sound transducer to determine a positional direction of the sound source, determining whether the positional direction of the sound source falls outside a field of view of a user, and in response to determining that the positional direction of the sound source falls outside the field of view of the user, rendering on a display unit a visual representation of the sound source. The visual representation of the source is rendered on a virtual surface at a location within the field of view of the user, the location corresponding to at least one of a distance of the source from the user and a positional direction of the source with respect to the user.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: January 3, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Scott David Petill, Oscar E. Murillo, Rebecca Sundling Haruyama, Melissa Hellmund Vega
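As an illustrative sketch only (the function and parameter names below are hypothetical, not from the patent), the core field-of-view test in this abstract — determine the source direction, check whether it falls outside the user's field of view, and if so place an indicator at a location within view — might look like:

```python
import math

def render_out_of_view_indicator(source_dir_deg, fov_deg=90.0):
    """Decide whether a sound source needs a visual indicator.

    source_dir_deg: direction of the sound source relative to the
    user's gaze, in degrees (0 = straight ahead).
    Returns the on-screen direction at which to render the indicator,
    or None when the source is already inside the field of view.
    """
    # Normalize the angle to the range (-180, 180]
    d = (source_dir_deg + 180.0) % 360.0 - 180.0
    half_fov = fov_deg / 2.0
    if abs(d) <= half_fov:
        return None  # visible; no indicator needed
    # Clamp the indicator to the nearest edge of the field of view,
    # preserving which side the sound came from
    return math.copysign(half_fov, d)
```

A real device would derive the source direction from analysis of the transducer's audio signal; here it is simply passed in as an angle.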
  • Patent number: 11494953
    Abstract: An augmented reality device comprising a camera, an augmented reality display, and a controller. The augmented reality display is configured to display the real-world environment and one or more virtual augmentations. The controller is configured to measure, via determination of hue, a color profile for a displayed portion of the real-world environment visible via the augmented reality display and imaged via the camera. A complementary palette of user interface colors is selected, each of such user interface colors having at least a predefined difference in hue relative to one or more colors in the color profile. An augmented reality feature is visually presented via the augmented reality display at a render location and with a render color from the complementary palette of user interface colors, the render color having at least the predefined difference in hue relative to a real-world environment color corresponding to the render location.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: November 8, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, William H. Robbins, Dylan Edmund Pierpont
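The hue-distance selection described above can be sketched roughly as follows. The threshold and candidate count are illustrative assumptions, and Python's standard colorsys module stands in for the device's color pipeline; hues live on a 0–1 circle.

```python
import colorsys

def complementary_palette(env_colors_rgb, min_hue_diff=0.25, candidates=12):
    """Pick UI colors whose hue differs from every measured
    environment hue by at least min_hue_diff."""
    env_hues = [colorsys.rgb_to_hsv(*c)[0] for c in env_colors_rgb]

    def hue_dist(a, b):
        d = abs(a - b) % 1.0
        return min(d, 1.0 - d)  # circular distance on the hue wheel

    palette = []
    for i in range(candidates):
        h = i / candidates
        if all(hue_dist(h, eh) >= min_hue_diff for eh in env_hues):
            # Full saturation and value for a clearly visible UI color
            palette.append(colorsys.hsv_to_rgb(h, 1.0, 1.0))
    return palette
```

Against a predominantly red scene, for example, the surviving candidates cluster around greens, cyans, and blues.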
  • Publication number: 20210364281
    Abstract: A method of and system for visualizing a sound source is disclosed. The method may include analyzing an audio signal received by a sound transducer to determine a positional direction of the sound source, determining whether the positional direction of the sound source falls outside a field of view of a user, and in response to determining that the positional direction of the sound source falls outside the field of view of the user, rendering on a display unit a visual representation of the sound source. The visual representation of the source is rendered on a virtual surface at a location within the field of view of the user, the location corresponding to at least one of a distance of the source from the user and a positional direction of the source with respect to the user.
    Type: Application
    Filed: May 20, 2020
    Publication date: November 25, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Scott David Petill, Oscar E. Murillo, Rebecca Sundling Haruyama, Melissa Hellmund Vega
  • Patent number: 10963293
    Abstract: Concepts and technologies are described herein for interacting with contextual and task-focused computing environments. Tasks associated with applications are described by task data. Tasks and/or batches of tasks relevant to activities occurring at a client are identified, and a UI for presenting the tasks is generated. The UIs can include tasks and workflows corresponding to batches of tasks. Workflows can be executed, interrupted, and resumed on demand. Interrupted workflows are stored with data indicating progress, contextual information, UI information, and other information. The workflow is stored and/or shared. When execution of the workflow is resumed, the same or a different UI can be provided, based upon the device used to resume execution of the workflow. Thus, multiple devices and users can access workflows in parallel to provide collaborative task execution.
    Type: Grant
    Filed: November 18, 2015
    Date of Patent: March 30, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, Benjamin William Vanik
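The interrupt-and-resume behavior described in the abstract might be sketched like this; the storage layout and function names are assumptions for illustration, with a plain dict standing in for the shared store that lets another device or user pick the workflow up.

```python
import json

def interrupt_workflow(workflow_id, step_index, context, store):
    """Persist an in-progress workflow so any device can resume it."""
    store[workflow_id] = json.dumps({
        "step": step_index,   # how far execution got
        "context": context,   # contextual information at interruption
    })

def resume_workflow(workflow_id, steps, store):
    """Resume from the saved step, returning the remaining results."""
    state = json.loads(store[workflow_id])
    return [steps[i]() for i in range(state["step"], len(steps))]
```

Because the saved state is plain data, the resuming device is free to present a different UI around the same remaining steps.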
  • Publication number: 20210004996
    Abstract: An augmented reality device comprising a camera, an augmented reality display, and a controller. The augmented reality display is configured to display the real-world environment and one or more virtual augmentations. The controller is configured to measure, via determination of hue, a color profile for a displayed portion of the real-world environment visible via the augmented reality display and imaged via the camera. A complementary palette of user interface colors is selected, each of such user interface colors having at least a predefined difference in hue relative to one or more colors in the color profile. An augmented reality feature is visually presented via the augmented reality display at a render location and with a render color from the complementary palette of user interface colors, the render color having at least the predefined difference in hue relative to a real-world environment color corresponding to the render location.
    Type: Application
    Filed: July 1, 2019
    Publication date: January 7, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, William H. Robbins, Dylan Edmund Pierpont
  • Patent number: 10599393
    Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech when combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech when combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: March 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
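A minimal sketch of combining two related input modalities into one intended meaning, as the abstract describes; the gesture names and intent fields below are hypothetical, not from the patent.

```python
def interpret(speech_text, gesture):
    """Combine related speech and gesture inputs into one intent."""
    if gesture == "point_at_document":
        # Dictation: the words become verbatim text in the program
        return {"action": "insert_text", "text": speech_text}
    if gesture == "point_at_app":
        # Command: the same words instead control the program
        return {"action": "command", "command": speech_text.lower()}
    # Unrelated or absent gesture: interpret the speech on its own
    return {"action": "speech_only", "text": speech_text}
```

The same utterance thus yields different intents depending on the accompanying gesture, which is the crux of the combination.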
  • Patent number: 10572803
    Abstract: The subject disclosure is directed towards a web service that maintains a set of models used to generate plans, such as vacation plans, in which the set of models includes models that are authored by crowd contributors via the service. The models include rules, constraints and/or equations, and may be text based and declarative such that any author can edit an existing model or combination of existing models into a new model. Users can access the models to generate a plan according to user parameters, view a presentation of that plan, and interact to provide new parameters to the model and/or with objects in the plan to modify the plan and view a presentation of the modified plan.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vijay Mital, Darryl E. Rubin, Oscar E. Murillo, Colleen G. Estrada
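A declarative model of rules, constraints, and equations could be evaluated roughly as follows; the model layout is an illustrative assumption, not the patent's format, with callables standing in for text-based declarations.

```python
def generate_plan(model, params):
    """Evaluate a declarative model against user parameters.

    Equations compute derived values; constraints flag violations
    so the user can adjust parameters and regenerate the plan.
    """
    plan = dict(params)
    for name, equation in model["equations"].items():
        plan[name] = equation(plan)
    violations = [desc for desc, rule in model["constraints"].items()
                  if not rule(plan)]
    return plan, violations
```

A crowd-authored vacation model, for instance, might declare an equation for total cost and a constraint keeping it within the user's budget; editing either yields a new model.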
  • Patent number: 10573085
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
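The parameter-to-realism correlation described above amounts to mapping an input value from a predetermined range onto a blend between real and virtual content; a minimal sketch, with the range bounds and linear blend as assumptions.

```python
def augmentation_level(value, lo=0, hi=100):
    """Map an input value in [lo, hi] to an augmentation
    strength in [0, 1], clamping out-of-range input."""
    clamped = max(lo, min(hi, value))
    return (clamped - lo) / (hi - lo)

def blend(real_pixel, virtual_pixel, level):
    """Linear blend per channel: a higher level means stronger
    augmentation and lower perceived realism."""
    return tuple(r * (1 - level) + v * level
                 for r, v in zip(real_pixel, virtual_pixel))
```

Turning the input control thus varies the augmentation continuously rather than toggling it on and off.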
  • Patent number: 10408623
    Abstract: Techniques for creating breadcrumbs for a trail of activity are described. The trail of activity may be created by recording movement information based on inferred actions of walking, not walking, or changing floor levels. The movement information may be recorded with an accelerometer and a pressure sensor. A representation of a list of breadcrumbs may be visually displayed on a user interface of a mobile device, in a reverse order to retrace steps. In some implementations, a compass may additionally or alternatively be used to collect directional information relative to the earth's magnetic poles.
    Type: Grant
    Filed: June 12, 2009
    Date of Patent: September 10, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alice Jane Brush, James W. Scott, Galen C. Hunt, Raman K. Sarin, Andrew W. Jacobs, Barry C. Bond, Oscar E. Murillo, Amy Karlson
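The inferred actions (walking, not walking, changing floor levels) could be classified roughly as below. The thresholds are illustrative assumptions, though the underlying physics is standard: atmospheric pressure drops about 0.12 hPa per meter of ascent near sea level, so a pressure change of a few tenths of a hectopascal suggests a floor change.

```python
def infer_action(accel_variance, pressure_delta_hpa,
                 walk_threshold=0.5, floor_threshold=0.4):
    """Classify movement from accelerometer variance and the change
    in barometric pressure over a short window."""
    if abs(pressure_delta_hpa) >= floor_threshold:
        # Pressure falls when ascending, rises when descending
        return "up" if pressure_delta_hpa < 0 else "down"
    return "walking" if accel_variance >= walk_threshold else "stationary"

def retrace(breadcrumbs):
    """Present the breadcrumb list in reverse order to retrace steps."""
    return list(reversed(breadcrumbs))
```

Each classified segment becomes a breadcrumb, and the reversed list is what the mobile device displays for retracing.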
  • Patent number: 10398972
    Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: September 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, Andy D. Wilson, Alex A. Kipman, Janet E. Galore
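Assigning a dictionary from the calibration data might reduce to a nearest-match selection, sketched here with hypothetical per-dictionary reference poses and a mean-absolute-error comparison; the real system would work on richer captured gesture data.

```python
def assign_dictionary(calibration_samples, dictionaries):
    """Pick the gesture dictionary whose reference values best match
    the user's calibration gestures (smallest total absolute error)."""
    def error(dic):
        return sum(abs(s - dic["reference"][i])
                   for i, s in enumerate(calibration_samples))
    return min(dictionaries, key=error)
```

A user whose calibration gestures are small and restrained would thus be assigned a dictionary tuned for subtle motion, and an expansive gesturer one tuned for broad motion.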
  • Publication number: 20190197784
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Application
    Filed: November 19, 2018
    Publication date: June 27, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
  • Publication number: 20190138271
    Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech when combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech when combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
    Type: Application
    Filed: August 1, 2018
    Publication date: May 9, 2019
    Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
  • Patent number: 10169922
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: January 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
  • Patent number: 10116748
    Abstract: Various embodiments enable mobile devices, such as phones and the like, to integrate with an in-vehicle information/entertainment system so that the user can control the in-vehicle information/entertainment system by way of their mobile phone. Users can leverage the functionality of their mobile phone to promote an in-vehicle experience that can be contextually tailored to the user's or the vehicle's context. Yet other embodiments can provide an in-vehicle experience through a cloud-based service.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: October 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Ryan Farmer, Mike Glass, Vikram Padmakar Bapat, Kristoffer S. Schultz, Oscar E. Murillo
  • Patent number: 10067740
    Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech when combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech when combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: September 4, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
  • Publication number: 20180131643
    Abstract: A computing device is provided, which may include a display, an input device, and a processor. The processor is configured to execute an application program including an application user interface presented via the display, the application user interface including a session state of a current session of a user, and to execute a bot client program configured to conduct a dialog with the user, the bot client program including a conversation canvas presented via the display. The bot client program is configured to receive a query in the dialog from the user via the input device and conversation canvas, determine that the query is directed to content related to the state of the application program, send a context request to the application program, receive context data from the application program, the context data being derived from the state of the application program, and determine a response to the query.
    Type: Application
    Filed: June 23, 2017
    Publication date: May 10, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Adina Magdalena Trufinescu, Fatima Kardar, Matthew Hidinger, Khuram Shahid, Vishwac Sena Kannan, Oscar E. Murillo, Elan Levy
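The context-request round trip between bot client and application might be sketched as follows; the relatedness check, the context fields, and all names here are illustrative assumptions rather than the patent's design.

```python
def answer_query(query, app_context_provider, general_answers):
    """Bot client: when a query appears to concern the current session,
    send a context request to the application before answering."""
    if "current" in query.lower():  # crude relatedness check
        context = app_context_provider()  # context request / response
        return f"You are viewing {context['document']}."
    # Otherwise answer from the bot's own general knowledge
    return general_answers.get(query, "I don't know.")

def make_context_provider(session_state):
    """Application side: derive context data from the session state."""
    def provider():
        return {"document": session_state["open_document"]}
    return provider
```

The key point is the direction of the dependency: the bot pulls context data derived from the application's session state only when the query calls for it.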
  • Publication number: 20170236332
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Application
    Filed: October 21, 2016
    Publication date: August 17, 2017
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
  • Publication number: 20170144067
    Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
    Type: Application
    Filed: September 16, 2016
    Publication date: May 25, 2017
    Inventors: Oscar E. Murillo, Andy D. Wilson, Alex A. Kipman, Janet E. Galore
  • Publication number: 20160350071
    Abstract: The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech when combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the same speech when combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program.
    Type: Application
    Filed: May 10, 2016
    Publication date: December 1, 2016
    Inventors: Oscar E. Murillo, Janet E. Galore, Jonathan C. Cluts, Colleen G. Estrada, Michael Koenig, Jack Creasey, Subha Bhattacharyay
  • Patent number: D779513
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: February 21, 2017
    Assignee: Microsoft Corporation
    Inventors: Oscar E. Murillo, Kristoffer S. Schultz, Cheryl Nicole Platz, Addison K. Linville, Kelly Rose McArthur, David A. Walker, Tyrone Samson, Vikram Bapat, John Henson, Jason Ryan Farmer, Craig Fox, Stefanie Lyn Tomko, Lisa Stifelman, Shane Jeremy Landry