Patents by Inventor Robert Tartz

Robert Tartz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220130125
    Abstract: Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
    Type: Application
    Filed: January 5, 2022
    Publication date: April 28, 2022
    Inventors: Scott BEITH, Jonathan KIES, Robert TARTZ, Ananthapadmanabhan Arasanipalai KANDHADAI
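
A minimal sketch of the hand-anchored keyboard placement described in the abstract above, not the patented method: given a detected palm center and hand width in image coordinates, the keyboard is placed just below the hand and scaled to the hand size. All names, offsets, and scale factors here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class KeyboardLayout:
    x: float       # top-left x of the keyboard, in display pixels
    y: float       # top-left y of the keyboard, in display pixels
    width: float   # keyboard width, in display pixels

def place_virtual_keyboard(palm_x: float, palm_y: float,
                           hand_width: float,
                           scale: float = 2.5,
                           vertical_offset: float = 0.6) -> KeyboardLayout:
    """Anchor the virtual keyboard relative to the detected hand."""
    kb_width = scale * hand_width
    return KeyboardLayout(
        x=palm_x - kb_width / 2.0,                 # centered under the palm
        y=palm_y + vertical_offset * hand_width,   # offset below the palm
        width=kb_width,
    )

# Example: a hand detected at (640, 360) that is 180 px wide.
layout = place_virtual_keyboard(640.0, 360.0, 180.0)
```
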
  • Patent number: 11270515
    Abstract: Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: March 8, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Scott Beith, Jonathan Kies, Robert Tartz, Ananthapadmanabhan Arasanipalai Kandhadai
  • Patent number: 11238664
    Abstract: Techniques and systems are provided for generating recommendations for extended reality systems. In some examples, a system determines one or more environmental features associated with a real-world environment of an extended reality system. The system determines one or more user features associated with a user of the extended reality system. The system also outputs, based on the one or more environmental features and the one or more user features, a notification associated with at least one application supported by the extended reality system.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: February 1, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Mehrad Tavakoli, Robert Tartz, Scott Beith, Gerhard Reitmayr
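
An illustrative sketch only of combining environmental and user features to choose an application to surface in a notification; the feature names, app profiles, and threshold below are invented for this example and are not taken from the patent.

```python
def recommend_app(env_features: dict, user_features: dict,
                  app_profiles: dict, threshold: float = 0.5):
    """Return the best-matching app name, or None if no app clears the threshold."""
    context = {**env_features, **user_features}
    best_app, best_score = None, 0.0
    for app, required in app_profiles.items():
        if not required:
            continue
        # Fraction of the app's required features present in the current context.
        score = sum(1 for f in required if context.get(f)) / len(required)
        if score > best_score:
            best_app, best_score = app, score
    return best_app if best_score >= threshold else None

profiles = {"cooking_assistant": ["in_kitchen", "hands_free"],
            "workout_coach": ["outdoors", "elevated_heart_rate"]}
print(recommend_app({"in_kitchen": True}, {"hands_free": True}, profiles))
```
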
  • Patent number: 11231827
    Abstract: Techniques are provided for integrating mobile device and extended reality experiences. Extended reality technologies can include virtual reality (VR), augmented reality (AR), mixed reality (MR), etc. In some examples, a synthetic (or virtual) representation of a device (e.g., a mobile device, such as a mobile phone or other type of device) can be generated and displayed along with VR content being displayed by a VR device (e.g., a head-mounted display (HMD)). In another example, content from the device (e.g., visual content being displayed and/or audio content being played by the device) can be output along with VR content being displayed by the VR device. In another example, one or more images captured by a camera of the device and/or audio obtained by a microphone of the device can be obtained from the device by a virtual reality device and can be output by the virtual reality device.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: January 25, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Douglas Brems, Robert Tartz, Robyn Teresa Oliver
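
A hedged sketch of the integration idea, not the patented method: each VR frame, the headset may overlay a virtual panel that either mirrors the phone's screen or shows its camera feed, depending on what the phone shares. The data model and field names below are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedDeviceContent:
    screen_frame: Optional[bytes] = None   # mirrored display content, if shared
    camera_frame: Optional[bytes] = None   # phone camera image, if shared

@dataclass
class VirtualPhonePanel:
    pose: tuple      # where the synthetic phone sits in the VR scene
    texture: bytes   # what to paint on the panel this frame

def build_phone_panel(pose, content: SharedDeviceContent) -> Optional[VirtualPhonePanel]:
    """Prefer mirrored screen content; fall back to the phone's camera feed."""
    texture = content.screen_frame or content.camera_frame
    if texture is None:
        return None                        # nothing shared; render the VR scene only
    return VirtualPhonePanel(pose=pose, texture=texture)
```
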
  • Publication number: 20220012920
    Abstract: Systems, methods, and non-transitory media are provided for generating virtual private spaces for extended reality (XR) experiences. An example method can include initiating a virtual session for presenting virtual content and identifying, for the virtual session, a portion of a physical space for use as a virtual private space for presenting at least a portion of the virtual content. The method can include outputting boundary information defining a boundary of the virtual private space, and generating at least the portion of the virtual content for the virtual private space. At least the portion of the virtual content is viewable in the virtual private space by one or more authorized users of the virtual session and is not viewable by one or more unauthorized users.
    Type: Application
    Filed: July 7, 2020
    Publication date: January 13, 2022
    Inventors: Scott BEITH, Robert TARTZ, Ananthapadmanabhan Arasanipalai KANDHADAI, Gerhard REITMAYR, Mehrad TAVAKOLI
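
A minimal sketch of the visibility rule in the abstract above, assuming a simple axis-aligned boundary: private content inside the boundary is returned only for authorized users of the session. The data model and function names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class VirtualItem:
    position: tuple  # (x, y, z) in the shared coordinate frame
    private: bool

def inside(boundary_min, boundary_max, point):
    """True when the point lies within the private-space boundary."""
    return all(lo <= v <= hi for lo, v, hi in zip(boundary_min, point, boundary_max))

def items_for_user(items, boundary_min, boundary_max, user_id, authorized_users):
    """Return the items a given user is allowed to see."""
    visible = []
    for item in items:
        if item.private and inside(boundary_min, boundary_max, item.position):
            if user_id in authorized_users:
                visible.append(item)
        else:
            visible.append(item)
    return visible
```
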
  • Publication number: 20220014839
    Abstract: Methods, systems, computer-readable media, and apparatuses for audio signal processing are presented. Some configurations include determining that first audio activity in at least one microphone signal is voice activity; determining whether the voice activity is voice activity of a participant in an application session active on a device; based at least on a result of the determining whether the voice activity is voice activity of a participant in the application session, generating an antinoise signal to cancel the first audio activity; and by a loudspeaker, producing an acoustic signal that is based on the antinoise signal. Applications relating to shared virtual spaces are described.
    Type: Application
    Filed: July 9, 2020
    Publication date: January 13, 2022
    Inventors: Robert TARTZ, Scott BEITH, Mehrad TAVAKOLI, Gerhard REITMAYR
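
A sketch of the gating decision only, not of noise cancellation itself: real active noise cancellation needs an adaptive filter and careful latency handling, and the phase inversion below is just a placeholder. The two predicate callbacks are hypothetical stand-ins for voice-activity and participant detection.

```python
import numpy as np

def antinoise_for_block(mic_block: np.ndarray,
                        is_voice_activity,
                        is_session_participant) -> np.ndarray:
    """Return an antinoise block for non-participant speech, otherwise silence."""
    if is_voice_activity(mic_block) and not is_session_participant(mic_block):
        return -mic_block          # naive stand-in for a generated antinoise signal
    return np.zeros_like(mic_block)
```
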
  • Publication number: 20220006973
    Abstract: A device may be configured to determine display properties for virtual content in an environment with a plurality of physical participants by capturing an image of the environment, analyzing the captured image to identify at least one object in the environment, determining a parameter for the identified object, and determining a display property of a digital representation of virtual content based on the determined parameter. Embodiments may include negotiating display properties with other devices to generate coordinated display properties, and rendering the digital representation of the virtual content so that the remote participant appears to be in the same fixed position to all co-located participants and is sized consistently with the co-located participants.
    Type: Application
    Filed: September 10, 2021
    Publication date: January 6, 2022
    Inventors: Jonathan KIES, Robert TARTZ, Daniel James GUEST
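
A hedged sketch of one display-property rule implied by the abstract above: size the remote participant's representation to match the co-located participants and agree on a fixed anchor position across devices. The field names and the simple averaging rule are assumptions, not the patented negotiation.

```python
from dataclasses import dataclass

@dataclass
class DisplayProperties:
    anchor: tuple    # agreed world-space position for the remote participant
    height_m: float  # rendered height of the remote participant

def negotiate_display_properties(local_heights_m, proposed_anchors):
    """Average co-located participant heights and anchor proposals from devices."""
    height = sum(local_heights_m) / len(local_heights_m)
    anchor = tuple(sum(axis) / len(proposed_anchors) for axis in zip(*proposed_anchors))
    return DisplayProperties(anchor=anchor, height_m=height)

props = negotiate_display_properties([1.7, 1.8], [(0.0, 0.0, 2.0), (0.2, 0.0, 2.2)])
```
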
  • Publication number: 20220001893
    Abstract: Techniques described herein include detecting a degree of motion sickness experienced by a user within a vehicle. A suitable combination of physiological data (heart rate, heart rate variability parameters, blood volume pulse, oxygen values, respiration values, galvanic skin response, skin conductance values, and the like), eye gaze data (e.g., images of the user), and vehicle motion data (e.g., accelerometer and gyroscope data indicative of vehicle oscillations) may be utilized to identify the degree of motion sickness experienced by the user. One or more autonomous actions may be performed to prevent an escalation in the degree of motion sickness experienced by the user or to ameliorate the degree of motion sickness currently experienced by the user.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 6, 2022
    Inventor: Robert TARTZ
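
A minimal sketch of the sensor-fusion idea above, assuming inputs already normalized to [0, 1]: physiological, gaze, and vehicle-motion indicators are combined into a sickness score that is mapped to an action. The weights, thresholds, and action names are invented for illustration.

```python
def motion_sickness_score(hr_variability, skin_conductance,
                          gaze_instability, vehicle_oscillation):
    """Weighted combination of normalized indicators; returns a value in [0, 1]."""
    weights = (0.3, 0.2, 0.2, 0.3)
    signals = (hr_variability, skin_conductance, gaze_instability, vehicle_oscillation)
    return sum(w * s for w, s in zip(weights, signals))

def choose_action(score: float) -> str:
    if score > 0.7:
        return "smooth_driving_profile"   # e.g., reduce acceleration and lane changes
    if score > 0.4:
        return "adjust_cabin_airflow"
    return "no_action"

print(choose_action(motion_sickness_score(0.8, 0.6, 0.5, 0.9)))
```
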
  • Publication number: 20210382550
    Abstract: Various embodiments include processing devices and methods for managing multisensor inputs on a mobile computing device. Various embodiments may include receiving multiple inputs from multiple touch sensors, identifying types of user interactions with the touch sensors from the multiple inputs, identifying sensor input data in a multisensor input data structure corresponding with the types of user interactions, and determining whether the multiple inputs combine as a multisensor input in an entry in the multisensor input data structure having the sensor input data related to a multisensor input response. Various embodiments may include detecting a trigger for a multisensor input mode, entering the multisensor input mode in response to detecting the trigger, and enabling processing of an input from a touch sensor.
    Type: Application
    Filed: August 23, 2021
    Publication date: December 9, 2021
    Inventors: Robyn Teresa OLIVER, Robert TARTZ, Douglas BREMS, Suhail JALIL, Jonathan KIES
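
A sketch of the lookup described in the abstract above: interaction types identified on each touch sensor are combined into a key and matched against an entry in a multisensor input table. The table contents, sensor names, and response names are illustrative assumptions.

```python
from typing import Optional

MULTISENSOR_TABLE = {
    (("back_touch", "tap"), ("side", "squeeze")): "take_screenshot",
    (("back_touch", "hold"), ("front_touch", "swipe")): "scroll_background_app",
}

def resolve_multisensor_input(interactions: dict) -> Optional[str]:
    """interactions maps a sensor name to the interaction type identified on it."""
    key = tuple(sorted(interactions.items()))   # order-independent lookup key
    return MULTISENSOR_TABLE.get(key)

# A squeeze on the side sensor combined with a tap on the back touch sensor.
print(resolve_multisensor_input({"side": "squeeze", "back_touch": "tap"}))
```
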
  • Patent number: 11159766
    Abstract: A device may be configured to determine display properties for virtual content in an environment with a plurality of physical participants by capturing an image of the environment, analyzing the captured image to identify at least one object in the environment, determining a parameter for the identified object, and determining a display property of a digital representation of virtual content based on the determined parameter. Embodiments may include negotiating display properties with other devices to generate coordinated display properties, and rendering the digital representation of the virtual content so that the remote participant appears to be in the same fixed position to all co-located participants and is sized consistently with the co-located participants.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: October 26, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Jonathan Kies, Robert Tartz, Daniel James Guest
  • Patent number: 11127380
    Abstract: A head-mounted device may include a processor configured to receive information from a sensor that is indicative of a position of the head-mounted device relative to a reference point on a face of a user; and adjust a rendering of an item of virtual content based on the position or a change in the position of the device relative to the face. The sensor may be a distance sensor, and the processor may be configured to adjust the rendering of the item of virtual content based on a measured distance or change of distance between the head-mounted device and the point of reference on the user's face. The point of reference on the user's face may be one or both of the user's eyes.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: September 21, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Scott Beith, Ananthapadmanabhan Arasanipalai Kandhadai, Jonathan Kies, Robert Tartz
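
A minimal sketch of the adjustment idea above: rescale rendered content when the measured distance between the device and the user's eyes changes, so virtual items keep roughly the same apparent size. The simple ratio model is an assumption for illustration, not the patented algorithm.

```python
def adjusted_render_scale(nominal_scale: float,
                          calibrated_eye_distance_mm: float,
                          measured_eye_distance_mm: float) -> float:
    """Scale content proportionally to the change in distance from the eyes."""
    ratio = measured_eye_distance_mm / calibrated_eye_distance_mm
    return nominal_scale * ratio

# The headset has slipped 3 mm farther from the eyes than at calibration.
print(adjusted_render_scale(1.0, 15.0, 18.0))  # -> 1.2
```
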
  • Patent number: 11126258
    Abstract: Various embodiments include processing devices and methods for managing multisensor inputs on a mobile computing device. Various embodiments may include receiving multiple inputs from multiple touch sensors, identifying types of user interactions with the touch sensors from the multiple inputs, identifying sensor input data in a multisensor input data structure corresponding with the types of user interactions, and determining whether the multiple inputs combine as a multisensor input in an entry in the multisensor input data structure having the sensor input data related to a multisensor input response. Various embodiments may include detecting a trigger for a multisensor input mode, entering the multisensor input mode in response to detecting the trigger, and enabling processing of an input from a touch sensor.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: September 21, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Robyn Teresa Oliver, Robert Tartz, Douglas Brems, Suhail Jalil, Jonathan Kies
  • Publication number: 20210278896
    Abstract: Various embodiments include processing devices and methods for managing multisensor inputs on a mobile computing device. Various embodiments may include receiving multiple inputs from multiple touch sensors, identifying types of user interactions with the touch sensors from the multiple inputs, identifying sensor input data in a multisensor input data structure corresponding with the types of user interactions, and determining whether the multiple inputs combine as a multisensor input in an entry in the multisensor input data structure having the sensor input data related to a multisensor input response. Various embodiments may include detecting a trigger for a multisensor input mode, entering the multisensor input mode in response to detecting the trigger, and enabling processing of an input from a touch sensor.
    Type: Application
    Filed: May 21, 2021
    Publication date: September 9, 2021
    Inventors: Robyn Teresa OLIVER, Robert TARTZ, Douglas BREMS, Suhail JALIL, Jonathan KIES
  • Publication number: 20210278897
    Abstract: Various embodiments include processing devices and methods for managing multisensor inputs on a mobile computing device. Various embodiments may include receiving multiple inputs from multiple touch sensors, identifying types of user interactions with the touch sensors from the multiple inputs, identifying sensor input data in a multisensor input data structure corresponding with the types of user interactions, and determining whether the multiple inputs combine as a multisensor input in an entry in the multisensor input data structure having the sensor input data related to a multisensor input response. Various embodiments may include detecting a trigger for a multisensor input mode, entering the multisensor input mode in response to detecting the trigger, and enabling processing of an input from a touch sensor.
    Type: Application
    Filed: May 21, 2021
    Publication date: September 9, 2021
    Inventors: Robyn Teresa OLIVER, Robert TARTZ, Douglas BREMS, Suhail JALIL, Jonathan KIES
  • Publication number: 20210183343
    Abstract: A head-mounted device may include a processor configured to receive information from a sensor that is indicative of a position of the head-mounted device relative to a reference point on a face of a user; and adjust a rendering of an item of virtual content based on the position or a change in the position of the device relative to the face. The sensor may be a distance sensor, and the processor may be configured to adjust the rendering of the item of virtual content based on a measured distance or change of distance between the head-mounted device and the point of reference on the user's face. The point of reference on the user's face may be one or both of the user's eyes.
    Type: Application
    Filed: December 13, 2019
    Publication date: June 17, 2021
    Inventors: Scott BEITH, Ananthapadmanabhan Arasanipalai KANDHADAI, Jonathan KIES, Robert TARTZ
  • Publication number: 20210109611
    Abstract: In some embodiments, a processor of the mobile computing device may receive an input for performing a function with respect to content at the mobile computing device, in which the content is segmented into at least a first command layer having one or more objects and a second command layer having one or more objects. The processor may determine whether the received input is associated with a first object of the first command layer or a second object of the second command layer. The processor may determine a function to be performed on one of the first or second objects based on whether the first command layer or the second command layer is determined to be associated with the received input, and the processor may perform the determined function on the first object or the second object.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Jonathan KIES, Robyn Teresa OLIVER, Robert TARTZ, Douglas BREMS, Suhail JALIL
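
A hedged sketch of dispatching an input to one of two command layers: the input position is hit-tested against the objects in each layer, and the matching layer's function is applied to the object found. The layer and object model below is an assumption for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class LayerObject:
    name: str
    bounds: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

@dataclass
class CommandLayer:
    objects: List[LayerObject]
    apply: Callable[[LayerObject], None]       # the layer's function for its objects

def dispatch_input(x: float, y: float, layers) -> bool:
    """Apply the function of the first layer whose object contains the input point."""
    for layer in layers:
        for obj in layer.objects:
            x0, y0, x1, y1 = obj.bounds
            if x0 <= x <= x1 and y0 <= y <= y1:
                layer.apply(obj)
                return True
    return False
```
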
  • Publication number: 20210084259
    Abstract: A device may be configured to determine display properties for virtual content in an environment with a plurality of physical participants by capturing an image of the environment, analyzing the captured image to identify at least one object in the environment, determining a parameter for the identified object, and determining a display property of a digital representation of virtual content based on the determined parameter. Embodiments may include negotiating display properties with other devices to generate coordinated display properties, and rendering the digital representation of the virtual content so that the remote participant appears to be in the same fixed position to all co-located participants and is sized consistently with the co-located participants.
    Type: Application
    Filed: September 16, 2019
    Publication date: March 18, 2021
    Inventors: Jonathan KIES, Robert TARTZ, Daniel James GUEST
  • Publication number: 20210065455
    Abstract: Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
    Type: Application
    Filed: September 2, 2020
    Publication date: March 4, 2021
    Inventors: Scott BEITH, Jonathan KIES, Robert TARTZ, Ananthapadmanabhan Arasanipalai KANDHADAI
  • Publication number: 20210034222
    Abstract: Techniques are provided for integrating mobile device and extended reality experiences. Extended reality technologies can include virtual reality (VR), augmented reality (AR), mixed reality (MR), etc. In some examples, a synthetic (or virtual) representation of a device (e.g., a mobile device, such as a mobile phone or other type of device) can be generated and displayed along with VR content being displayed by a VR device (e.g., a head-mounted display (HMD)). In another example, content from the device (e.g., visual content being displayed and/or audio content being played by the device) can be output along with VR content being displayed by the VR device. In another example, one or more images captured by a camera of the device and/or audio obtained by a microphone of the device can be obtained from the device by a virtual reality device and can be output by the virtual reality device.
    Type: Application
    Filed: January 13, 2020
    Publication date: February 4, 2021
    Inventors: Douglas BREMS, Robert TARTZ, Robyn Teresa OLIVER
  • Patent number: 10901606
    Abstract: In some embodiments, a processor of the mobile computing device may receive an input for performing a function with respect to content at the mobile computing device, in which the content is segmented into at least a first command layer having one or more objects and a second command layer having one or more objects. The processor may determine whether the received input is associated with a first object of the first command layer or a second object of the second command layer. The processor may determine a function to be performed on one of the first or second objects based on whether the first command layer or the second command layer is determined to be associated with the received input, and the processor may perform the determined function on the first object or the second object.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: January 26, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Jonathan Kies, Robyn Teresa Oliver, Robert Tartz, Douglas Brems, Suhail Jalil