Patents by Inventor Xavier Benavides Palos

Xavier Benavides Palos has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135931
    Abstract: A method can include receiving audio input of speech, receiving visual input while receiving the audio input, generating a semantic description based on the visual input, and presenting a transcription of the speech based on the audio input and the semantic description.
    Type: Application
    Filed: October 19, 2022
    Publication date: April 25, 2024
    Inventor: Xavier Benavides Palos
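
A minimal sketch of the idea in publication 20240135931 above, assuming the speech recognizer emits several scored candidate transcriptions and a separate vision model supplies the semantic description; the rescoring rule, weight, and function name are illustrative, not taken from the patent.

```python
# Minimal sketch: bias a speech recognizer's candidate transcriptions using a
# semantic description generated from concurrent visual input. Only the
# rescoring idea from the abstract is shown; the models are stand-ins.

def rescore_with_visual_context(candidates, semantic_description):
    """Pick the candidate transcription that best overlaps the visual context.

    candidates: list of (text, acoustic_score) pairs from the recognizer.
    semantic_description: free-text description generated from the visual input.
    """
    context_words = set(semantic_description.lower().split())

    def combined_score(candidate):
        text, acoustic_score = candidate
        overlap = len(set(text.lower().split()) & context_words)
        return acoustic_score + 0.5 * overlap  # weight is an arbitrary choice

    best_text, _ = max(candidates, key=combined_score)
    return best_text


candidates = [("write it down", 0.62), ("right it down", 0.60)]
description = "a person holding a pen over a notebook, about to write"
print(rescore_with_visual_context(candidates, description))  # "write it down"
```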
  • Publication number: 20230305788
    Abstract: A system and method provide for transferring the execution of content, or casting content, from a first computing device to a second computing device. The casting may be performed in response to both a detected lift event of the second computing device, in which the second computing device is detected moving from a stored state toward a position within the user's field of view, and an identification event, in which the second computing device is detected or identified within the field of view of an image sensor of the first computing device, corresponding to the user's field of view. Detection of both the lift event and the identification event may provide a relatively high level of assurance that the user intends to cast the content from the first computing device to the second computing device.
    Type: Application
    Filed: March 8, 2022
    Publication date: September 28, 2023
    Inventors: Achin Kulshrestha, Robert Bowering, Maxwell Spear, Xavier Benavides Palos, Aveek Purohit
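
A minimal sketch of the intent check described in publication 20230305788 above: casting is triggered only when a lift event and an identification event land close together in time. The event callbacks, the 2-second window, and the class name are assumptions for illustration.

```python
# Minimal sketch: cast only when BOTH a lift event (second device moving from a
# stored state toward the user's view) and an identification event (second device
# seen by the first device's image sensor) occur within a short time window.
from dataclasses import dataclass
from typing import Optional

CAST_WINDOW_SECONDS = 2.0  # assumed window; not specified in the abstract

@dataclass
class CastIntentDetector:
    lift_time: Optional[float] = None
    identify_time: Optional[float] = None

    def on_lift_event(self, timestamp: float) -> bool:
        self.lift_time = timestamp
        return self._should_cast()

    def on_identification_event(self, timestamp: float) -> bool:
        self.identify_time = timestamp
        return self._should_cast()

    def _should_cast(self) -> bool:
        # Require both events, close together, before casting content.
        if self.lift_time is None or self.identify_time is None:
            return False
        return abs(self.lift_time - self.identify_time) <= CAST_WINDOW_SECONDS


detector = CastIntentDetector()
print(detector.on_lift_event(timestamp=10.0))            # False: lift alone is not enough
print(detector.on_identification_event(timestamp=11.2))  # True: both events within 2 s
```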
  • Publication number: 20230259199
    Abstract: A method including receiving an image from a sensor of a wearable device, rendering the image on a display of the wearable device, identifying a set of targets in the image, tracking a gaze direction associated with a user of the wearable device, rendering, on the displayed image, a gaze line based on the tracked gaze direction, identifying a subset of targets based on the set of targets in a region of the image based on the gaze line, triggering an action, and in response to the trigger, estimating a candidate target based on the subset of targets.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Mark Chang, Xavier Benavides Palos, Alexandr Virodov, Adarsh Prakash Murthy Kowdle, Kan Huang
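
A minimal sketch of the gaze-based selection in publication 20230259199 above, assuming 2D image coordinates for the targets and the gaze line; the pixel radius and function names are illustrative.

```python
# Minimal sketch: keep only the targets near the rendered gaze line, then
# estimate the candidate target when an action is triggered.
import math

def distance_to_gaze_line(point, gaze_origin, gaze_direction):
    """Perpendicular distance from an image point to the gaze line."""
    px, py = point[0] - gaze_origin[0], point[1] - gaze_origin[1]
    dx, dy = gaze_direction
    # Cross-product magnitude over the direction norm gives the distance.
    return abs(px * dy - py * dx) / math.hypot(dx, dy)

def estimate_candidate_target(targets, gaze_origin, gaze_direction, radius=40.0):
    """Return the target closest to the gaze line within `radius` pixels, if any."""
    subset = [t for t in targets
              if distance_to_gaze_line(t, gaze_origin, gaze_direction) <= radius]
    if not subset:
        return None
    return min(subset,
               key=lambda t: distance_to_gaze_line(t, gaze_origin, gaze_direction))


targets = [(120, 80), (300, 210), (310, 400)]
print(estimate_candidate_target(targets, gaze_origin=(0, 0), gaze_direction=(3, 2)))
```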
  • Patent number: 11558711
    Abstract: Systems and methods are described for triggering, by a wearable computing device communicably coupled to an electronic device, a location-based request, receiving location data corresponding to a location of the electronic device, receiving altitude data corresponding to an altitude of the electronic device, detecting an initial orientation for the wearable computing device based on information from at least one sensor of the wearable computing device, generating, based on the initial orientation of the wearable computing device and location data of the electronic device, a real-world orientation for the wearable computing device, initiating tracking of the wearable computing device in response to receiving updated location data for the electronic device, and performing the location-based request based on the tracking.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Xavier Benavides Palos, Steven Soon Leong Toh
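
A minimal sketch of one way the anchoring step in patent 11558711 above could work: the true bearing from the wearable to the phone's reported location is compared against the direction in which the wearable senses the phone, and the offset anchors the wearable's arbitrary initial heading to north. The bearing formula is standard; the anchoring scheme and numbers are assumptions.

```python
# Minimal sketch: anchor the wearable's arbitrary sensor-frame heading to a
# real-world (north-referenced) orientation using the phone's reported location.
import math

def bearing_degrees(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def real_world_heading(initial_heading, sensed_heading_to_phone,
                       wearable_lat, wearable_lon, phone_lat, phone_lon):
    """Convert the wearable's sensor-frame heading into a north-referenced heading."""
    true_bearing = bearing_degrees(wearable_lat, wearable_lon, phone_lat, phone_lon)
    offset = (true_bearing - sensed_heading_to_phone) % 360.0
    return (initial_heading + offset) % 360.0


# The phone is due east of the wearable, and the wearable senses it 90 deg to its
# right in its own frame, so the wearable must be facing (approximately) north.
print(real_world_heading(0.0, 90.0, 37.422, -122.084, 37.422, -122.074))
```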
  • Publication number: 20220295223
    Abstract: Systems and methods are described for triggering, by a wearable computing device communicably coupled to an electronic device, a location-based request, receiving location data corresponding to a location of the electronic device, receiving altitude data corresponding to an altitude of the electronic device, detecting an initial orientation for the wearable computing device based on information from at least one sensor of the wearable computing device, generating, based on the initial orientation of the wearable computing device and location data of the electronic device, a real-world orientation for the wearable computing device, initiating tracking of the wearable computing device in response to receiving updated location data for the electronic device, and performing the location-based request based on the tracking.
    Type: Application
    Filed: March 2, 2021
    Publication date: September 15, 2022
    Inventors: Xavier Benavides Palos, Steven Soon Leong Toh
  • Patent number: 11354011
    Abstract: A position of a cursor may be detected within an augmented reality (AR) display of a physical space captured by a camera. A snapping range of a snap-select function of the cursor may be dynamically changed in response to the position of the cursor. Accordingly, a user may place or hold a camera in a location to view an associated AR display, and may easily and precisely execute a snap-select function to select a desired AR object, even when the AR display includes AR objects that are of different sizes, or are different distances from the camera.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: June 7, 2022
    Assignee: GOOGLE LLC
    Inventor: Xavier Benavides Palos
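
A minimal sketch of one reading of patent 11354011 above, in which the snap radius grows with the scene depth under the cursor so that distant, visually small AR objects remain easy to select; the scaling rule and data layout are assumptions.

```python
# Minimal sketch: the snap-select radius changes with the cursor's position
# (here, with the scene depth under the cursor) rather than being fixed.
import math

def snap_radius_at(cursor_depth_m, base_radius_px=30.0):
    """Snap radius in pixels, widened as the scene gets deeper under the cursor."""
    return base_radius_px * max(1.0, cursor_depth_m)

def snap_select(cursor_xy, cursor_depth_m, objects):
    """Return the nearest AR object within the cursor's current snap radius."""
    radius = snap_radius_at(cursor_depth_m)

    def gap(obj):
        return math.hypot(cursor_xy[0] - obj["screen_xy"][0],
                          cursor_xy[1] - obj["screen_xy"][1])

    in_range = [o for o in objects if gap(o) <= radius]
    return min(in_range, key=gap) if in_range else None


objects = [{"name": "near_cube", "screen_xy": (210, 160)},
           {"name": "far_sign",  "screen_xy": (400, 300)}]
# Cursor over a deeper part of the scene: the widened radius still catches near_cube.
print(snap_select((200, 150), cursor_depth_m=3.0, objects=objects)["name"])
```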
  • Patent number: 11263819
    Abstract: A method includes: triggering rendering of an augmented reality (AR) environment having a viewer configured for generating views of the AR environment; triggering rendering, in the AR environment, of an object with an outside surface visualized using a mesh having a direction oriented away from the object; performing a first determination that the viewer is inside the object as a result of relative movement between the viewer and the object; and in response to the first determination, increasing a transparency of the outside surface, reversing the direction of at least part of the mesh, and triggering rendering of an inside surface of the object using the part of the mesh having the reversed direction, wherein the inside surface is illuminated by light from outside the object due to the increased transparency.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: March 1, 2022
    Assignee: Google LLC
    Inventors: Xavier Benavides Palos, Brett Barros
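
A minimal sketch of the inside-the-object behavior in patent 11263819 above: once the viewer is determined to be inside the object, the outer surface's transparency is increased and face winding is reversed so the interior can be rendered. The bounding-box containment test and mesh dictionary are stand-ins for the patent's renderer.

```python
# Minimal sketch: when the viewer crosses into the object's volume, raise the
# outer surface's transparency and flip the affected faces' winding so their
# normals point inward and the interior becomes visible.

def viewer_inside(viewer_pos, bbox_min, bbox_max):
    """Axis-aligned containment test standing in for the 'first determination'."""
    return all(lo <= p <= hi for p, lo, hi in zip(viewer_pos, bbox_min, bbox_max))

def update_mesh_for_viewer(mesh, viewer_pos):
    if viewer_inside(viewer_pos, mesh["bbox_min"], mesh["bbox_max"]):
        mesh["outside_alpha"] = 0.2                                   # increase transparency
        mesh["faces"] = [tuple(reversed(f)) for f in mesh["faces"]]   # reverse mesh direction
        mesh["render_inside"] = True                                  # render inside surface
    return mesh


cube = {"bbox_min": (0, 0, 0), "bbox_max": (1, 1, 1),
        "faces": [(0, 1, 2), (0, 2, 3)], "outside_alpha": 1.0, "render_inside": False}
print(update_mesh_for_viewer(cube, viewer_pos=(0.5, 0.5, 0.5))["faces"])  # flipped winding
```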
  • Publication number: 20210318790
    Abstract: A position of a cursor may be detected within an augmented reality (AR) display of a physical space captured by a camera. A snapping range of a snap-select function of the cursor may be dynamically changed in response to the position of the cursor. Accordingly, a user may place or hold a camera in a location to view an associated AR display, and may easily and precisely execute a snap-select function to select a desired AR object, even when the AR display includes AR objects that are of different sizes, or are different distances from the camera.
    Type: Application
    Filed: December 4, 2019
    Publication date: October 14, 2021
    Inventor: Xavier Benavides Palos
  • Patent number: 11100712
    Abstract: A method includes: receiving, in a first device, a relative description file for physical markers that are positioned at locations, the relative description file defining relative positions for each of the physical markers with regard to at least another one of the physical markers; initially localizing a position of the first device among the physical markers by visually capturing any first physical marker of the physical markers using an image sensor of the first device; and recognizing a second physical marker of the physical markers and a location of the second physical marker without a line of sight, the second physical marker recognized using the relative description file.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: August 24, 2021
    Assignee: Google LLC
    Inventors: Brett Barros, Xavier Benavides Palos
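
A minimal sketch of the relative description file in patent 11100712 above, ignoring marker orientation for brevity: visually capturing any one marker fixes the positions of all the others, including markers with no line of sight. The flat offset table is an assumed format, not the file layout from the patent.

```python
# Minimal sketch: a relative description file stores each marker's offset from a
# reference marker; seeing one marker localizes the device and every other marker.

RELATIVE_DESCRIPTION = {
    # marker_id: (dx, dy, dz) metres relative to marker "A"
    "A": (0.0, 0.0, 0.0),
    "B": (2.5, 0.0, 0.0),
    "C": (2.5, 0.0, 4.0),   # e.g. around a corner, never directly visible
}

def localize_all(seen_marker_id, seen_marker_world_pos):
    """Infer world positions of every marker from one visually captured marker."""
    seen_offset = RELATIVE_DESCRIPTION[seen_marker_id]
    origin = tuple(w - o for w, o in zip(seen_marker_world_pos, seen_offset))
    return {marker_id: tuple(a + b for a, b in zip(origin, offset))
            for marker_id, offset in RELATIVE_DESCRIPTION.items()}


# The camera captures marker "B" at (10, 0, 3); marker "C" is recognized at (10, 0, 7)
# even though it was never within the image sensor's line of sight.
print(localize_all("B", (10.0, 0.0, 3.0)))
```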
  • Patent number: 11043031
    Abstract: Systems and methods for inserting and transforming content are provided. For example, the inserted content may include augmented reality content that is inserted into a physical space or a representation of the physical space such as an image. An example system and method may include receiving an image and identifying a physical location associated with a display management entity within the image. The example system and method may also include retrieving content display parameters associated with the display management entity. Additionally, the example system and method may also include identifying content to display and displaying the content using the display parameters associated with the display management entity.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: June 22, 2021
    Assignee: GOOGLE LLC
    Inventors: Brett Barros, Xavier Benavides Palos
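
A minimal sketch of the lookup described in patent 11043031 above: a recognized display management entity keys a set of content display parameters that control where and how the AR content is shown. The parameter names are illustrative assumptions.

```python
# Minimal sketch: content display parameters are retrieved per display management
# entity and applied when placing AR content at the entity's physical location.

DISPLAY_PARAMETERS = {
    "lobby_poster": {"max_width_m": 1.2, "offset_m": (0.0, 0.3, 0.0), "billboard": True},
    "desk_tag":     {"max_width_m": 0.3, "offset_m": (0.0, 0.1, 0.0), "billboard": False},
}

def place_content(entity_id, entity_position, content):
    """Return a render instruction for `content`, using the entity's parameters."""
    params = DISPLAY_PARAMETERS[entity_id]
    position = tuple(p + o for p, o in zip(entity_position, params["offset_m"]))
    return {"content": content, "position": position,
            "max_width_m": params["max_width_m"], "billboard": params["billboard"]}


print(place_content("lobby_poster", (2.0, 1.5, -4.0), content="welcome_banner.glb"))
```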
  • Patent number: 10922889
    Abstract: Systems and methods for drawing attention to points of interest within inserted content are provided. For example, the inserted content may include augmented reality content that is inserted into a physical space or a representation of the physical space such as an image. An example system and method may include receiving an image and identifying content to display over the image. The system and method may also include identifying a location within the image to display the content and identifying a point of interest of the content. Additionally, the example system and method may also include triggering display of the content overlaid on the image by identifying a portion of the content based on the point of interest, rendering the portion of the content using first shading parameters; and rendering the content other than the portion using second shading parameters.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: February 16, 2021
    Assignee: Google LLC
    Inventors: Xavier Benavides Palos, Brett Barros, Paul Bechard
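
A minimal sketch of the two-parameter shading in patent 10922889 above: vertices near the point of interest get the first (attention-drawing) shading parameters and the rest get the second. The distance rule and parameter values are assumptions.

```python
# Minimal sketch: partition the content by proximity to the point of interest and
# shade the two portions with different parameter sets.
import math

FIRST_SHADING = {"brightness": 1.0, "saturation": 1.0}    # draws the eye to the POI
SECOND_SHADING = {"brightness": 0.4, "saturation": 0.5}   # de-emphasizes the rest

def shade_vertices(vertices, point_of_interest, poi_radius=0.5):
    """Assign shading parameters to each vertex based on proximity to the POI."""
    shaded = []
    for vertex in vertices:
        near = math.dist(vertex, point_of_interest) <= poi_radius
        shaded.append((vertex, FIRST_SHADING if near else SECOND_SHADING))
    return shaded


vertices = [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (2.0, 0.0, 0.0)]
for vertex, params in shade_vertices(vertices, point_of_interest=(0.0, 0.0, 0.0)):
    print(vertex, params)
```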
  • Publication number: 20200320798
    Abstract: A method includes: triggering rendering of an augmented reality (AR) environment having a viewer configured for generating views of the AR environment; triggering rendering, in the AR environment, of an object with an outside surface visualized using a mesh having a direction oriented away from the object; performing a first determination that the viewer is inside the object as a result of relative movement between the viewer and the object; and in response to the first determination, increasing a transparency of the outside surface, reversing the direction of at least part of the mesh, and triggering rendering of an inside surface of the object using the part of the mesh having the reversed direction, wherein the inside surface is illuminated by light from outside the object due to the increased transparency.
    Type: Application
    Filed: June 23, 2020
    Publication date: October 8, 2020
    Inventors: Xavier Benavides Palos, Brett Barros
  • Publication number: 20200273250
    Abstract: A method includes: receiving, in a first device, a relative description file for physical markers that are positioned at locations, the relative description file defining relative positions for each of the physical markers with regard to at least another one of the physical markers; initially localizing a position of the first device among the physical markers by visually capturing any first physical marker of the physical markers using an image sensor of the first device; and recognizing a second physical marker of the physical markers and a location of the second physical marker without a line of sight, the second physical marker recognized using the relative description file.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 27, 2020
    Inventors: Brett Barros, Xavier Benavides Palos
  • Publication number: 20200273251
    Abstract: Systems and methods for drawing attention to points of interest within inserted content are provided. For example, the inserted content may include augmented reality content that is inserted into a physical space or a representation of the physical space such as an image. An example system and method may include receiving an image and identifying content to display over the image. The system and method may also include identifying a location within the image to display the content and identifying a point of interest of the content. Additionally, the example system and method may also include triggering display of the content overlaid on the image by identifying a portion of the content based on the point of interest, rendering the portion of the content using first shading parameters; and rendering the content other than the portion using second shading parameters.
    Type: Application
    Filed: November 19, 2018
    Publication date: August 27, 2020
    Inventors: Xavier Benavides Palos, Brett Barros, Paul Bechard
  • Patent number: 10750162
    Abstract: Systems, devices, and apparatuses for a switchable augmented reality and virtual reality device are provided. An example device includes a handle, a device chamber, a viewport assembly, and a hinge assembly. The handle may be formed from folded sheets of material. The example device chamber is coupled to the handle and formed from folded sheets of material. The viewport assembly may be formed at least in part from folded sheets of material. The example hinge assembly pivotably couples the device chamber to the viewport assembly. For example, the hinge assembly may be operable to adjust the device between a virtual reality configuration and an augmented reality configuration. A display of a computing device held in the device chamber may be viewable through the viewport assembly in the virtual reality configuration and the display may be viewable without the viewport assembly in the augmented reality configuration.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: August 18, 2020
    Assignee: GOOGLE LLC
    Inventors: Erik Hubert Dolly Goossens, Joost Korngold, Xavier Benavides Palos
  • Patent number: 10726626
    Abstract: A method includes: triggering rendering of an augmented reality (AR) environment having a viewer configured for generating views of the AR environment; triggering rendering, in the AR environment, of an object with an outside surface visualized using a mesh having a direction oriented away from the object; performing a first determination that the viewer is inside the object as a result of relative movement between the viewer and the object; and in response to the first determination, increasing a transparency of the outside surface, reversing the direction of at least part of the mesh, and triggering rendering of an inside surface of the object using the part of the mesh having the reversed direction, wherein the inside surface is illuminated by light from outside the object due to the increased transparency.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: July 28, 2020
    Assignee: GOOGLE LLC
    Inventors: Xavier Benavides Palos, Brett Barros
  • Patent number: 10692289
    Abstract: A method includes: receiving, in a first device, a relative description file for physical markers that are positioned at locations, the relative description file defining relative positions for each of the physical markers with regard to at least another one of the physical markers; initially localizing a position of the first device among the physical markers by visually capturing any first physical marker of the physical markers using an image sensor of the first device; and recognizing a second physical marker of the physical markers and a location of the second physical marker without a line of sight, the second physical marker recognized using the relative description file.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: June 23, 2020
    Assignee: GOOGLE LLC
    Inventors: Brett Barros, Xavier Benavides Palos
  • Patent number: 10692293
    Abstract: A method includes: presenting, on a device, a view of at least part of an augmented reality (AR) environment, the AR environment including a first AR location corresponding to a first physical location in a physical environment; initiating a download to the device of first data representing a first AR object associated with the first AR location; assigning a size parameter to a first loading indicator for the first AR object based on a size of the first AR object; determining a spatial relationship between the view and the first physical location; and presenting the first loading indicator in the view, the first loading indicator having a size based on the assigned size parameter and being presented at a second AR location based on the determined spatial relationship.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: June 23, 2020
    Assignee: GOOGLE LLC
    Inventors: Xavier Benavides Palos, Rebecca Ackermann
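
A minimal sketch of the loading indicator in patent 10692293 above, assuming a simple pinhole projection: the indicator's size tracks the size of the AR object being downloaded, and its screen position follows the spatial relationship between the view and the object's physical location.

```python
# Minimal sketch: size and place a loading indicator for an AR object that is
# still downloading, based on the object's size and its position relative to the view.

def loading_indicator(object_size_m, object_pos_cam, focal_px=800.0):
    """Compute the indicator's screen position and on-screen size.

    object_size_m: real-world diameter of the AR object.
    object_pos_cam: object position (x, y, z) in camera coordinates, z > 0 ahead.
    """
    x, y, z = object_pos_cam
    screen_xy = (focal_px * x / z, focal_px * y / z)   # follows the spatial relationship
    size_px = focal_px * object_size_m / z             # size parameter from object size
    return {"screen_xy": screen_xy, "size_px": size_px}


# A 0.5 m object 2 m ahead gets a 200 px indicator; a 2 m object at the same
# distance would get one four times as large.
print(loading_indicator(0.5, (0.2, 0.0, 2.0)))
```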
  • Patent number: 10685485
    Abstract: A method includes: receiving, in a device, data defining an augmented reality (AR) environment, the data specifying a location of a first AR object in the AR environment; determining a physical inclination of the device; assigning a perception inclination to a map of the AR environment, the perception inclination based on the determined physical inclination; and triggering rendering of the map and an aspect of the AR environment on a display of the device, wherein the location of the first AR object is marked on the map and the map appears to have the perception inclination with regard to the display.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: June 16, 2020
    Assignee: GOOGLE LLC
    Inventors: Xavier Benavides Palos, Brett Barros
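
A minimal sketch of the perception inclination in patent 10685485 above: the device's physical pitch is mapped (here by a clamped linear rule, which is an assumption) onto the inclination with which the mini-map is rendered, alongside the marked AR object locations.

```python
# Minimal sketch: derive the map's perception inclination from the device's
# physical inclination, then render the map markers with that tilt.

def perception_inclination(device_pitch_deg, min_deg=15.0, max_deg=75.0, blend=0.8):
    """Map the device's physical inclination to the inclination applied to the map."""
    inclination = blend * device_pitch_deg
    return max(min_deg, min(max_deg, inclination))

def render_map(ar_objects, device_pitch_deg):
    """Return a simple render description: the map's tilt plus marked object locations."""
    return {"tilt_deg": perception_inclination(device_pitch_deg),
            "markers": [obj["map_xy"] for obj in ar_objects]}


objects = [{"map_xy": (0.2, 0.7)}, {"map_xy": (0.6, 0.1)}]
print(render_map(objects, device_pitch_deg=50.0))   # tilt_deg = 40.0
```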
  • Publication number: 20200082627
    Abstract: A method includes: presenting, on a device, a view of at least part of an augmented reality (AR) environment, the AR environment including a first AR location corresponding to a first physical location in a physical environment; initiating a download to the device of first data representing a first AR object associated with the first AR location; assigning a size parameter to a first loading indicator for the first AR object based on a size of the first AR object; determining a spatial relationship between the view and the first physical location; and presenting the first loading indicator in the view, the first loading indicator having a size based on the assigned size parameter and being presented at a second AR location based on the determined spatial relationship.
    Type: Application
    Filed: September 12, 2018
    Publication date: March 12, 2020
    Inventors: Xavier Benavides Palos, Rebecca Ackermann