Patents by Inventor Sai Durga Venkat Reddy Pulikunta

Sai Durga Venkat Reddy Pulikunta has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028111
    Abstract: Systems and methods are described for extended reality environment interaction. An extended reality environment including an object is generated for display, and a first sensor is used to detect that a gaze has shifted from a first portion of the extended reality environment to a second portion of the extended reality environment, where the object is excluded from the first portion of the extended reality environment and included in the second portion of the extended reality environment. An indicator of the shift in the gaze is generated for display within the extended reality environment in response to detecting the gaze shift, and a voice command is detected by a second sensor while the indicator is in a vicinity of the object. In response to detecting the voice command, an action corresponding to the voice command may be executed.
    Type: Application
    Filed: July 24, 2023
    Publication date: January 25, 2024
    Inventors: R Balaji, Sai Durga Venkat Reddy Pulikunta, Jeffry Copps Robert Jose, Arun Kumar T V
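The interaction flow in this abstract — a gaze shift into the portion containing the object triggers an indicator, and a voice command is acted on only while the indicator is near the object — can be illustrated with a minimal sketch. All names here (`XRObject`, `GazeVoiceController`, the region labels) are hypothetical stand-ins, not from the patent itself:

```python
from dataclasses import dataclass

@dataclass
class XRObject:
    name: str
    region: str  # which portion of the environment contains the object

class GazeVoiceController:
    """Sketch of the two-sensor flow: a detected gaze shift displays an
    indicator at the newly gazed portion, and a voice command is executed
    only while that indicator is in the vicinity of the object."""

    def __init__(self, obj: XRObject):
        self.obj = obj
        self.gaze_region = None       # last portion the gaze rested on
        self.indicator_region = None  # where the gaze-shift indicator is shown

    def on_gaze(self, region: str) -> None:
        # First sensor: detect a shift between portions of the environment.
        if self.gaze_region is not None and region != self.gaze_region:
            self.indicator_region = region  # display indicator at the new portion
        self.gaze_region = region

    def on_voice(self, command: str):
        # Second sensor: act only if the indicator is near the object.
        if self.indicator_region == self.obj.region:
            return f"executed {command} on {self.obj.name}"
        return None

lamp = XRObject("lamp", region="second")
ctrl = GazeVoiceController(lamp)
ctrl.on_gaze("first")    # object excluded from this portion
ctrl.on_gaze("second")   # shift detected; indicator shown near the lamp
print(ctrl.on_voice("turn on"))  # -> executed turn on on lamp
```

The gating matters: the same utterance before the gaze shift would return `None`, since no indicator is yet in the object's vicinity.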
  • Patent number: 11747896
    Abstract: Systems and methods are described for extended reality environment interaction. An extended reality environment including an object is generated for display, and a first sensor is used to detect that a gaze has shifted from a first portion of the extended reality environment to a second portion of the extended reality environment, where the object is excluded from the first portion of the extended reality environment and included in the second portion of the extended reality environment. An indicator of the shift in the gaze is generated for display within the extended reality environment in response to detecting the gaze shift, and a voice command is detected by a second sensor while the indicator is in a vicinity of the object. In response to detecting the voice command, an action corresponding to the voice command may be executed.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: September 5, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: R Balaji, Sai Durga Venkat Reddy Pulikunta, Jeffry Copps Robert Jose, Arun Kumar T V
  • Publication number: 20220319510
    Abstract: Systems and methods are described herein for disambiguating a voice search query by determining whether the user made a gesture while speaking a quotation from a content item and whether the user mimicked or approximated a gesture made by a character in the content item when the character spoke the words quoted by the user. If so, a search result comprising an identifier of the content item is generated. A search result representing the content item from which the quotation comes may be ranked highest among other search results returned and therefore presented first in a list of search results. If the user did not mimic or approximate a gesture made by a character in the content item when the quotation is spoken in the content item, then a search result may not be generated for the content item or may be ranked lowest among other search results.
    Type: Application
    Filed: December 10, 2021
    Publication date: October 6, 2022
    Inventors: Ankur Anil Aher, Nishchit Mahajan, Narendra Purushothama, Sai Durga Venkat Reddy Pulikunta
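The ranking behavior described in this abstract — content items whose character gesture the user mimicked rank first, others fall to the bottom — can be sketched as follows. The feature-vector gesture representation, the cosine similarity measure, and the threshold are all assumptions for illustration; the patent does not prescribe them:

```python
def gesture_similarity(a, b):
    # Cosine similarity on simple gesture feature vectors (assumed encoding).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def rank_quotation_results(user_gesture, candidates, threshold=0.9):
    """candidates: list of (title, character_gesture) pairs, where
    character_gesture was made when the quoted line is spoken in the item.
    Items the user's gesture mimics or approximates (similarity above the
    threshold) are ranked first; the rest are ranked lowest."""
    scored = [(gesture_similarity(user_gesture, g), title)
              for title, g in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)
    matched = [title for sim, title in scored if sim >= threshold]
    unmatched = [title for sim, title in scored if sim < threshold]
    return matched + unmatched

# A raised-fist-like gesture matches the gesture stored for "Movie A".
ranking = rank_quotation_results(
    [1.0, 0.0],
    [("Movie A", [1.0, 0.0]), ("Movie B", [0.0, 1.0])],
)
print(ranking)  # -> ['Movie A', 'Movie B']
```

A real system would compare pose sequences rather than static vectors, but the ordering logic — matched items first, unmatched demoted or dropped — is the point of the abstract.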
  • Publication number: 20220121275
    Abstract: Systems and methods are described for extended reality environment interaction. An extended reality environment including an object is generated for display, and a first sensor is used to detect that a gaze has shifted from a first portion of the extended reality environment to a second portion of the extended reality environment, where the object is excluded from the first portion of the extended reality environment and included in the second portion of the extended reality environment. An indicator of the shift in the gaze is generated for display within the extended reality environment in response to detecting the gaze shift, and a voice command is detected by a second sensor while the indicator is in a vicinity of the object. In response to detecting the voice command, an action corresponding to the voice command may be executed.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 21, 2022
    Inventors: R Balaji, Sai Durga Venkat Reddy Pulikunta, Jeffry Copps Robert Jose, Arun Kumar T V
  • Patent number: 11281291
    Abstract: Systems and methods are described for extended reality environment interaction. An extended reality environment including an object is generated for display, and a sensor is used to detect a gaze directed to a first portion of the extended reality environment, where the object is included in the first portion of the extended reality environment. Opacity-based indicators are generated for display in the vicinity of the first portion of the extended reality environment, and a boundary of the object is identified. Based on the identified boundary of the object, an opacity of at least one of the opacity-based indicators is varied.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: March 22, 2022
    Assignee: Rovi Guides, Inc.
    Inventors: R Balaji, Sai Durga Venkat Reddy Pulikunta, Jeffry Copps Robert Jose, Arun Kumar T V
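One way to picture varying indicator opacity against an identified object boundary is the sketch below. The axis-aligned bounding box and the linear falloff are illustrative assumptions; the patent does not specify a boundary representation or opacity curve:

```python
def vary_opacity(indicators, boundary):
    """indicators: list of (x, y) indicator positions.
    boundary: (xmin, ymin, xmax, ymax) box standing in for the identified
    object boundary (assumption). Indicators on or inside the boundary are
    fully opaque; outside ones fade linearly with distance to the box."""
    xmin, ymin, xmax, ymax = boundary
    opacities = []
    for x, y in indicators:
        # Distance from the point to the box (0 if inside or on the edge).
        dx = max(xmin - x, 0.0, x - xmax)
        dy = max(ymin - y, 0.0, y - ymax)
        dist = (dx * dx + dy * dy) ** 0.5
        opacities.append(max(0.0, 1.0 - dist))
    return opacities

# One indicator inside the object's boundary, one half a unit outside it.
print(vary_opacity([(0.5, 0.5), (1.5, 0.5)], (0.0, 0.0, 1.0, 1.0)))
# -> [1.0, 0.5]
```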
  • Patent number: 11227593
    Abstract: Systems and methods are described herein for disambiguating a voice search query by determining whether the user made a gesture while speaking a quotation from a content item and whether the user mimicked or approximated a gesture made by a character in the content item when the character spoke the words quoted by the user. If so, a search result comprising an identifier of the content item is generated. A search result representing the content item from which the quotation comes may be ranked highest among other search results returned and therefore presented first in a list of search results. If the user did not mimic or approximate a gesture made by a character in the content item when the quotation is spoken in the content item, then a search result may not be generated for the content item or may be ranked lowest among other search results.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 18, 2022
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Aher, Nishchit Mahajan, Narendra Purushothama, Sai Durga Venkat Reddy Pulikunta
  • Publication number: 20200410995
    Abstract: Systems and methods are described herein for disambiguating a voice search query by determining whether the user made a gesture while speaking a quotation from a content item and whether the user mimicked or approximated a gesture made by a character in the content item when the character spoke the words quoted by the user. If so, a search result comprising an identifier of the content item is generated. A search result representing the content item from which the quotation comes may be ranked highest among other search results returned and therefore presented first in a list of search results. If the user did not mimic or approximate a gesture made by a character in the content item when the quotation is spoken in the content item, then a search result may not be generated for the content item or may be ranked lowest among other search results.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Ankur Aher, Nishchit Mahajan, Narendra Purushothama, Sai Durga Venkat Reddy Pulikunta