Patents by Inventor Harold Anthony Martinez MOLINA

Harold Anthony Martinez MOLINA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10740982
    Abstract: Computing devices for automatic placement and arrangement of objects in computer-based 3D environments are disclosed herein. In one embodiment, a computing device is configured to provide, on a display, a user interface containing a work area having a template of a 3D environment and a gallery containing models of two-dimensional (2D) or 3D content items. The computing device can then detect, via the user interface, a user input selecting one of the models from the gallery to be inserted as an object into the template of the 3D environment. In response to detecting the user input, the computing device can render and surface, on the display, a graphical representation of the 2D or 3D content item corresponding to the selected model at a location along a circular arc spaced apart from a default position of a viewer of the 3D environment by a preset radial distance.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Colton Brett Marshall, Amy Scarfone, Harold Anthony Martinez Molina, Vidya Srinivasan, Andrew John Howe
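The arc placement described in patent 10740982 above lends itself to a small geometric illustration. The sketch below is only an interpretation of the abstract: the function name `placeOnArc`, the alternating left/right ordering, and the 15-degree spacing are assumptions, not the patented implementation.

```typescript
// Illustrative sketch only: computes where the k-th inserted object might land
// on a circular arc centered on the viewer's default position. All names and
// the angular-spacing scheme are assumptions, not the patented implementation.

interface Vec3 { x: number; y: number; z: number; }

function placeOnArc(
  viewerPosition: Vec3,      // default viewer position in the 3D template
  radialDistance: number,    // preset radial distance from the viewer to the arc
  index: number,             // insertion order of the object (0, 1, 2, ...)
  arcSpacingDeg = 15         // assumed angular spacing between neighboring objects
): Vec3 {
  // Alternate objects left and right of the viewer's forward direction (-z here).
  const side = index % 2 === 0 ? 1 : -1;
  const step = Math.ceil(index / 2);
  const angleRad = (side * step * arcSpacingDeg * Math.PI) / 180;

  return {
    x: viewerPosition.x + radialDistance * Math.sin(angleRad),
    y: viewerPosition.y,                                    // keep objects at eye height
    z: viewerPosition.z - radialDistance * Math.cos(angleRad),
  };
}

// Example: third object added to the scene, 2 m from the viewer.
console.log(placeOnArc({ x: 0, y: 1.6, z: 0 }, 2.0, 2));
```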
  • Patent number: 10726634
    Abstract: The techniques described herein convert platform-specific scene files produced by multiple different design platforms into platform-agnostic scene files configured in an intermediate format. The intermediate format comprises a human-readable format that provides written descriptions of content in a three-dimensional scene template. The platform-agnostic scene files can be provided to any one of multiple different consumption platforms so the data in the intermediate format can be interpreted and a three-dimensional scene template can be rebuilt. Once rebuilt, the three-dimensional scene template provides a starting point for a user to create a three-dimensional scene for an experience (e.g., the user can continue to add content to create and customize a scene for a particular purpose).
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: July 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harold Anthony Martinez Molina, Michael Lee Smith, Andrew John Howe, Vidya Srinivasan, Justin Chung-Ting Lam
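The intermediate format described in patent 10726634 above can be pictured as a human-readable scene description plus a loader on the consumption side. The JSON-like shape and every field name below are assumptions made for illustration; the patent does not publish this schema.

```typescript
// Hypothetical intermediate scene description; field names are assumptions
// meant only to illustrate a human-readable, platform-agnostic format.

interface IntermediateScene {
  template: string;                       // which 3D scene template to rebuild
  objects: Array<{
    id: string;
    kind: "model" | "image" | "text";     // written description of the content type
    source: string;                       // reference to the underlying asset
    position: [number, number, number];
    rotation: [number, number, number];
    scale: [number, number, number];
  }>;
}

// A consumption platform could parse the file and rebuild the scene template,
// which then serves as a starting point for further customization.
function rebuildScene(json: string): IntermediateScene {
  const scene = JSON.parse(json) as IntermediateScene;
  // ...platform-specific instantiation of each object would happen here...
  return scene;
}
```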
  • Patent number: 10650118
    Abstract: The disclosed techniques enable virtual content displayed in an experience to be restricted and/or tailored based on a user identification. User information (e.g., login name, authentication credentials such as a password or biometric data, etc.) can be used to determine and/or authenticate an identification of a user that enters and/or consumes an experience via a head-mounted display device or another computing device connected to a head-mounted display device. The user identification can be used to determine which virtual content is displayed to the user as part of an experience. Consequently, different users that enter the same experience can be presented with different virtual content. This enables a creator of the experience to restrict the viewing of confidential and/or sensitive information. This also enables the creator of the experience to tailor or customize the virtual content that is displayed to each user that enters and/or consumes the experience.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vidya Srinivasan, Andrew John Howe, Harold Anthony Martinez Molina, Justin Chung-Ting Lam, Edward Boyle Averett
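Patent 10650118 above amounts to filtering an experience's virtual content by the authenticated user's identity. A minimal sketch of that idea, assuming a simple allow-list permission model that the abstract does not actually specify:

```typescript
// Illustrative sketch: filter the virtual content of an experience by the
// authenticated user's identity. The permission model below is an assumption.

interface ContentItem {
  id: string;
  // Empty list = visible to everyone; otherwise only the listed users see it.
  allowedUsers: string[];
}

function contentForUser(allContent: ContentItem[], userId: string): ContentItem[] {
  return allContent.filter(
    (item) => item.allowedUsers.length === 0 || item.allowedUsers.includes(userId)
  );
}

// Two users entering the same experience can be presented with different content.
const scene: ContentItem[] = [
  { id: "lobby-banner", allowedUsers: [] },
  { id: "q3-financials", allowedUsers: ["alice@contoso.com"] },
];
console.log(contentForUser(scene, "bob@contoso.com")); // only "lobby-banner"
```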
  • Patent number: 10650610
    Abstract: A platform configured to operate in different modes so that users can seamlessly switch between an authoring view and a consumption view while creating a three-dimensional scene is described herein. A first mode includes an authoring mode in which an authoring user can add and/or edit content displayed in a three-dimensional scene via a computing device. The second mode includes a consumption mode in which the authoring user can preview and/or share the content displayed in the three-dimensional scene via a head-mounted display device that is in some way connected to and/or in communication with the computing device. Consequently, the same platform (e.g., application) enables the authoring user to toggle between the two different modes while creating a three-dimensional scene that is part of an experience.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vidya Srinivasan, Andrew John Howe, Michael Lee Smith, Harold Anthony Martinez Molina
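The mode switching described in patent 10650610 above can be summarized as a single application state that routes rendering to either the authoring device or a connected headset. The state shape and device names below are assumptions, not the platform's real API.

```typescript
// Illustrative sketch of a single platform toggling between authoring and
// consumption modes; the state shape and device routing are assumptions.

type Mode = "authoring" | "consumption";

interface PlatformState {
  mode: Mode;
  renderTarget: "computing-device" | "head-mounted-display";
}

function toggleMode(state: PlatformState): PlatformState {
  // Authoring happens on the computing device; previewing/sharing routes the
  // same scene to a connected head-mounted display device.
  return state.mode === "authoring"
    ? { mode: "consumption", renderTarget: "head-mounted-display" }
    : { mode: "authoring", renderTarget: "computing-device" };
}
```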
  • Patent number: 10545627
    Abstract: The disclosed techniques immediately download, to a head-mounted display device or to a device connected to a head-mounted display device, data used to render each of multiple three-dimensional scenes that are part of an experience. An experience includes related and/or linked content that can be accessed and/or displayed for a particular purpose. In various examples, the experience can initially be accessed using a computing device (e.g., a laptop, a smartphone, etc.). The immediate download can be implemented in response to a user switching consumption of the experience from a display of the computing device to a display of the head-mounted display device so three-dimensional scenes can be consumed in a three-dimensional immersive environment (e.g., a three-dimensional coordinate space displayed via the head-mounted display device). Data for individual ones of the three-dimensional scenes is instantiated (e.g.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: January 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harold Anthony Martinez Molina, Michael Lee Smith, Andrew John Howe, Vidya Srinivasan, Aniket Handa
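Patent 10545627 above describes prefetching every scene of an experience as soon as consumption moves to the head-mounted display. A minimal sketch of that behavior, assuming per-scene data URLs and a parallel download strategy that the abstract does not spell out:

```typescript
// Illustrative sketch: when consumption switches to the head-mounted display,
// immediately download the data for every scene in the experience rather than
// fetching scenes one at a time. URLs and the fetch strategy are assumptions.

interface Experience {
  scenes: { id: string; dataUrl: string }[];
}

async function onSwitchToHeadset(experience: Experience): Promise<Map<string, ArrayBuffer>> {
  const downloads = experience.scenes.map(async (scene) => {
    const response = await fetch(scene.dataUrl);   // kick off all downloads at once
    return [scene.id, await response.arrayBuffer()] as const;
  });
  // Each scene can then be instantiated in the 3D immersive environment
  // without waiting on a network round trip at navigation time.
  return new Map(await Promise.all(downloads));
}
```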
  • Publication number: 20200013236
    Abstract: A method, system, and computer program for providing a virtual object in a virtual or semi-virtual environment based on a characteristic associated with the user. In one example embodiment, the system comprises at least one computer processor and a memory storing instructions that, when executed by the at least one computer processor, perform a set of operations comprising determining the characteristic associated with the user in the virtual or semi-virtual environment with respect to a predetermined reference location in the environment, and providing a virtual object based on the characteristic.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Carlos G. PEREZ, Vidya SRINIVASAN, Colton B. MARSHALL, Aniket HANDA, Harold Anthony MARTINEZ MOLINA
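Publication 20200013236 above ties the choice of virtual object to a characteristic measured against a predetermined reference location. A minimal sketch, assuming the characteristic is the user's distance from that location and using made-up thresholds and object names:

```typescript
// Illustrative sketch: choose a virtual object based on a characteristic of the
// user measured against a predetermined reference location (here, distance).
// The thresholds and object names are assumptions, not the claimed method.

interface Vec3 { x: number; y: number; z: number; }

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function provideVirtualObject(userPosition: Vec3, referenceLocation: Vec3): string {
  const d = distance(userPosition, referenceLocation);
  // Example characteristic: proximity to the reference location.
  return d < 2 ? "detailed-model" : "simplified-placeholder";
}
```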
  • Publication number: 20190355181
    Abstract: Techniques configured to enable multiple users to dynamically and concurrently edit a scene that is viewable in a three-dimensional immersive environment are described herein. The techniques use region locking so that content being edited by one user viewing and editing the scene in a three-dimensional immersive environment cannot be edited by another user concurrently viewing and editing the same scene in the three-dimensional immersive environment. Accordingly, a scene can be divided into multiple regions that can be locked to provide an element of protection against user interference that can result when two users are editing, or attempting to edit, the same content.
    Type: Application
    Filed: May 18, 2018
    Publication date: November 21, 2019
    Inventors: Vidya SRINIVASAN, Andrew John HOWE, Edward Boyle AVERETT, Harold Anthony MARTINEZ MOLINA
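The region locking in publication 20190355181 above behaves like a per-region lock keyed by user. The sketch below is an assumed API for illustration only; the publication does not define this class or these method names.

```typescript
// Illustrative sketch of region locking for concurrent scene editing; the lock
// API below is an assumption meant only to show the idea of the technique.

class RegionLocks {
  private locks = new Map<string, string>(); // regionId -> editing user

  tryAcquire(regionId: string, userId: string): boolean {
    const holder = this.locks.get(regionId);
    if (holder !== undefined && holder !== userId) {
      return false;               // another user is already editing this region
    }
    this.locks.set(regionId, userId);
    return true;
  }

  release(regionId: string, userId: string): void {
    if (this.locks.get(regionId) === userId) {
      this.locks.delete(regionId);
    }
  }
}

// Two users editing the same scene: only one can hold a given region at a time.
const locks = new RegionLocks();
locks.tryAcquire("region-east-wall", "alice");  // true
locks.tryAcquire("region-east-wall", "bob");    // false, region is locked
```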
  • Publication number: 20190340830
    Abstract: Computing devices for content library projection in computer-based 3D environments are disclosed herein. In one embodiment, a computing device is configured to provide, on a display, a user interface containing a work area having a template of a 3D environment and a gallery containing models of two-dimensional (2D) or 3D content items. The computing device can then detect, via the user interface, a user input selecting a content library to be inserted as an object into the template of the 3D environment. In response to detecting the user input, the computing device can render and surface, on the display, graphical representations of the 2D or 3D content items corresponding to the models in the selected content library along a circle having a center spaced apart from a default position of a viewer of the 3D environment by a preset distance.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Vidya Srinivasan, Colton Brett Marshall, Harold Anthony Martinez Molina, Aniket Handa, Amy Scarfone, Justin Chung-Ting Lam, Edward Boyle Averett
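Publication 20190340830 above places a content library's items along a circle whose center is offset from the viewer by a preset distance. A minimal geometric sketch, assuming even angular spacing (the abstract does not specify how items are distributed on the circle):

```typescript
// Illustrative sketch: distribute the items of a selected content library
// evenly along a circle whose center is offset from the viewer's default
// position by a preset distance. Names and spacing are assumptions.

interface Vec3 { x: number; y: number; z: number; }

function projectLibrary(
  viewerPosition: Vec3,
  centerOffset: number,   // preset distance from the viewer to the circle's center
  radius: number,         // radius of the circle the items are placed on
  itemCount: number
): Vec3[] {
  const center: Vec3 = { ...viewerPosition, z: viewerPosition.z - centerOffset };
  return Array.from({ length: itemCount }, (_, i) => {
    const angle = (2 * Math.PI * i) / itemCount;   // even angular spacing
    return {
      x: center.x + radius * Math.cos(angle),
      y: center.y,
      z: center.z + radius * Math.sin(angle),
    };
  });
}
```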
  • Publication number: 20190340829
    Abstract: Computing devices for automatic placement and arrangement of objects in computer-based 3D environments are disclosed herein. In one embodiment, a computing device is configured to provide, on a display, a user interface containing a work area having a template of a 3D environment and a gallery containing models of two-dimensional (2D) or 3D content items. The computing device can then detect, via the user interface, a user input selecting one of the models from the gallery to be inserted as an object into the template of the 3D environment. In response to detecting the user input, the computing device can render and surface, on the display, a graphical representation of the 2D or 3D content item corresponding to the selected model at a location along a circular arc spaced apart from a default position of a viewer of the 3D environment by a preset radial distance.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Colton Brett Marshall, Amy Scarfone, Harold Anthony Martinez Molina, Vidya Srinivasan, Andrew John Howe
  • Publication number: 20190340834
    Abstract: The techniques described herein convert platform-specific scene files produced by multiple different design platforms into platform-agnostic scene files configured in an intermediate format. The intermediate format comprises a human-readable format that provides written descriptions of content in a three-dimensional scene template. The platform-agnostic scene files can be provided to any one of multiple different consumption platforms so the data in the intermediate format can be interpreted and a three-dimensional scene template can be rebuilt. Once rebuilt, the three-dimensional scene template provides a starting point for a user to create a three-dimensional scene for an experience (e.g., the user can continue to add content to create and customize a scene for a particular purpose).
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Harold Anthony MARTINEZ MOLINA, Michael Lee SMITH, Andrew John HOWE, Vidya SRINIVASAN, Justin Chung-Ting LAM
  • Publication number: 20190340333
    Abstract: The disclosed techniques enable virtual content displayed in an experience to be restricted and/or tailored based on a user identification. User information (e.g., login name, authentication credentials such as a password or biometric data, etc.) can be used to determine and/or authenticate an identification of a user that enters and/or consumes an experience via a head-mounted display device or another computing device connected to a head-mounted display device. The user identification can be used to determine which virtual content is displayed to the user as part of an experience. Consequently, different users that enter the same experience can be presented with different virtual content. This enables a creator of the experience to restrict the viewing of confidential and/or sensitive information. This also enables the creator of the experience to tailor or customize the virtual content that is displayed to each user that enters and/or consumes the experience.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Vidya SRINIVASAN, Andrew John HOWE, Harold Anthony MARTINEZ MOLINA, Justin Chung-Ting LAM, Edward Boyle AVERETT
  • Publication number: 20190339838
    Abstract: The disclosed techniques immediately download, to a head-mounted display device or to a device connected to a head-mounted display device, data used to render each of multiple three-dimensional scenes that are part of an experience. An experience includes related and/or linked content that can be accessed and/or displayed for a particular purpose. In various examples, the experience can initially be accessed using a computing device (e.g., a laptop, a smartphone, etc.). The immediate download can be implemented in response to a user switching consumption of the experience from a display of the computing device to a display of the head-mounted display device so three-dimensional scenes can be consumed in a three-dimensional immersive environment (e.g., a three-dimensional coordinate space displayed via the head-mounted display device). Data for individual ones of the three-dimensional scenes is instantiated (e.g.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Harold Anthony MARTINEZ MOLINA, Michael Lee SMITH, Andrew John HOWE, Vidya SRINIVASAN, Aniket HANDA
  • Publication number: 20190340832
    Abstract: A platform configured to operate in different modes so that users can seamlessly switch between an authoring view and a consumption view while creating a three-dimensional scene is described herein. A first mode includes an authoring mode in which an authoring user can add and/or edit content displayed in a three-dimensional scene via a computing device. The second mode includes a consumption mode in which the authoring user can preview and/or share the content displayed in the three-dimensional scene via a head-mounted display device that is in some way connected to and/or in communication with the computing device. Consequently, the same platform (e.g., application) enables the authoring user to toggle between the two different modes while creating a three-dimensional scene that is part of an experience.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Inventors: Vidya SRINIVASAN, Andrew John HOWE, Michael Lee SMITH, Harold Anthony MARTINEZ MOLINA
  • Patent number: 10453273
    Abstract: A method, system, and computer program for providing a virtual object in a virtual or semi-virtual environment based on a characteristic associated with the user. In one example embodiment, the system comprises at least one computer processor and a memory storing instructions that, when executed by the at least one computer processor, perform a set of operations comprising determining the characteristic associated with the user in the virtual or semi-virtual environment with respect to a predetermined reference location in the environment, and providing a virtual object based on the characteristic.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: October 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Carlos G. Perez, Vidya Srinivasan, Colton B. Marshall, Aniket Handa, Harold Anthony Martinez Molina
  • Patent number: 10388077
    Abstract: Aspects of the present disclosure relate to three-dimensional (3D) environment authoring and generation. In an example, a 3D environment may be authored using one or more models, wherein two-dimensional (2D) representations of the models may be manipulated using an authoring application. Models may comprise anchor points, which may be used to stitch the models together when rendering the 3D environment. In another example, a model may comprise one or more content points, which may be used to position content within the 3D environment. An environment data file may be generated based on the one or more models and content associated with content points, thereby creating a file that may be distributed to other computing devices. A viewer application may be used to generate the 3D environment based on the environment data file. Accordingly, the viewer application may stitch the models and populate the 3D environment with content.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: August 20, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vidya Srinivasan, Carlos G. Perez, Colton Brett Marshall, Aniket Handa, Harold Anthony Martinez Molina
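Patent 10388077 above builds an environment data file from models carrying anchor points (for stitching models together) and content points (for placing content). The sketch below shows what such a file could contain; the field names and the example values are assumptions, not the patent's actual schema.

```typescript
// Illustrative sketch of an environment data file built from models with
// anchor points (for stitching models together) and content points (for
// positioning content). Field names are assumptions, not the patent's schema.

interface Point3 { x: number; y: number; z: number; }

interface Model {
  id: string;
  anchorPoints: Point3[];   // where this model can be stitched to a neighbor
  contentPoints: Point3[];  // where content can be placed inside the model
}

interface EnvironmentDataFile {
  models: { modelId: string; attachedToAnchor?: number }[];
  content: { modelId: string; contentPointIndex: number; assetUrl: string }[];
}

// A viewer application would read this file, stitch the models at their
// anchor points, and populate the content points with the referenced assets.
const example: EnvironmentDataFile = {
  models: [{ modelId: "lobby" }, { modelId: "gallery", attachedToAnchor: 0 }],
  content: [{ modelId: "gallery", contentPointIndex: 1, assetUrl: "https://example.com/chart.png" }],
};
```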
  • Publication number: 20180308290
    Abstract: A method, system, and computer program for providing a virtual object in a virtual or semi-virtual environment based on a characteristic associated with the user. In one example embodiment, the system comprises at least one computer processor and a memory storing instructions that, when executed by the at least one computer processor, perform a set of operations comprising determining the characteristic associated with the user in the virtual or semi-virtual environment with respect to a predetermined reference location in the environment, and providing a virtual object based on the characteristic.
    Type: Application
    Filed: June 28, 2017
    Publication date: October 25, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Carlos G. PEREZ, Vidya SRINIVASAN, Colton B. MARSHALL, Aniket HANDA, Harold Anthony MARTINEZ MOLINA
  • Publication number: 20180308274
    Abstract: Methods and systems for controlling a view of a virtual camera in a virtual world. A view of a user viewing a virtual world may be controlled or changed while accounting for the user's head position. For example, a virtual camera may be wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to a global coordinate system. Based on a position of a head-mounted display, an initial virtual camera rotation angle relative to a global coordinate system of the virtual world may be identified. An indication to change the view to a particular direction may be received. A desired rotation angle relative to the global coordinate system for a view to correspond to the particular direction is then determined. The container is then rotated by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle.
    Type: Application
    Filed: June 28, 2017
    Publication date: October 25, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Harold Anthony Martinez MOLINA, Vidya SRINIVASAN, Carlos G. PEREZ, Aniket HANDA, Colton Brett MARSHALL
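Publication 20180308274 above rotates a container around the virtual camera so the combined view faces a requested direction without discarding the head-mounted display's own rotation. A minimal yaw-only sketch, assuming angles in degrees; the formula is an illustration of the idea, not the claimed method.

```typescript
// Illustrative sketch: wrap the virtual camera in a container and rotate the
// container so the combined view faces a requested direction while accounting
// for the head-mounted display's current rotation. Yaw-only, in degrees; the
// math here is an assumption illustrating the idea, not the claimed method.

function containerRotationFor(
  desiredYawDeg: number,       // desired view direction in the global coordinate system
  initialCameraYawDeg: number  // camera yaw from the HMD, relative to the global system
): number {
  // Rotating the container by the difference leaves the user's head motion
  // intact while snapping the overall view to the requested direction.
  let delta = desiredYawDeg - initialCameraYawDeg;
  // Normalize to [-180, 180) so the container takes the shortest rotation.
  delta = ((delta + 180) % 360 + 360) % 360 - 180;
  return delta;
}

// Example: HMD currently faces 30 degrees, user asks to look toward 180 degrees.
console.log(containerRotationFor(180, 30)); // 150
```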
  • Publication number: 20180308289
    Abstract: Aspects of the present disclosure relate to three-dimensional (3D) environment authoring and generation. In an example, a 3D environment may be authored using one or more models, wherein two-dimensional (2D) representations of the models may be manipulated using an authoring application. Models may comprise anchor points, which may be used to stitch the models together when rendering the 3D environment. In another example, a model may comprise one or more content points, which may be used to position content within the 3D environment. An environment data file may be generated based on the one or more models and content associated with content points, thereby creating a file that may be distributed to other computing devices. A viewer application may be used to generate the 3D environment based on the environment data file. Accordingly, the viewer application may stitch the models and populate the 3D environment with content.
    Type: Application
    Filed: June 28, 2017
    Publication date: October 25, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vidya SRINIVASAN, Carlos G. PEREZ, Colton Brett MARSHALL, Aniket HANDA, Harold Anthony Martinez MOLINA