Patents by Inventor Chetan Mugur Nagaraj

Chetan Mugur Nagaraj has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10147237
    Abstract: Systems described herein apply visual computer-generated elements into real-world images with an appearance of depth by using information available via conventional mobile devices. The systems receive a reference image and reference image data collected contemporaneously with the reference image. The reference image data includes a geo-location, a direction heading, and a tilt. The systems identify one or more features within the reference image and receive a user's selection of a foreground feature from the one or more features. The systems receive a virtual object definition that includes an object type, a size, and an overlay position of the virtual object relative to the foreground feature. The virtual object is provided in a virtual layer that appears behind the foreground feature. The systems store, in a memory, the reference image data associated with the virtual object definition.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: December 4, 2018
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Manish Sharma, Anand Prakash, Devin Blong, Qing Zhang, Chetan Mugur Nagaraj, Srivatsan Rangarajan
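    The abstract above associates a virtual object definition (type, size, overlay position relative to a foreground feature) with reference image data (geo-location, heading, tilt). A minimal sketch of such a record store is below; all class and field names are hypothetical illustrations, not taken from the patent itself:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ReferenceImageData:
        """Pose data collected contemporaneously with the reference image."""
        latitude: float
        longitude: float
        heading_deg: float   # compass direction the camera faced
        tilt_deg: float      # device tilt relative to horizontal

    @dataclass(frozen=True)
    class VirtualObjectDefinition:
        """An object rendered in a virtual layer behind a chosen foreground feature."""
        object_type: str
        size: float                      # relative scale
        overlay_position: tuple          # (dx, dy) offset from the foreground feature
        behind_foreground: bool = True   # layered behind the selected feature

    class VirtualObjectStore:
        """Associates reference image data with virtual object definitions."""

        def __init__(self):
            self._records = []

        def save(self, ref: ReferenceImageData, obj: VirtualObjectDefinition):
            self._records.append((ref, obj))

        def objects_near(self, lat, lon, tol_deg=0.001):
            """Return stored objects whose reference geo-location is within a tolerance."""
            return [obj for ref, obj in self._records
                    if abs(ref.latitude - lat) <= tol_deg
                    and abs(ref.longitude - lon) <= tol_deg]
    ```

    The frozen dataclasses mirror the claim's "store, in a memory, the reference image data associated with the virtual object definition"; a production system would persist these records rather than hold them in a list.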
  • Patent number: 10115236
    Abstract: Systems described herein allow for placement and presentation of virtual objects using mobile devices with a single camera lens. A device receives, from a first mobile device, a target image captured from a camera and target image data collected contemporaneously with the target image. The target image data includes a geographic location, a direction heading, and a tilt. The device receives, from the first mobile device, a first virtual object definition that includes an object type, a size, and a mobile device orientation for presenting a first virtual object within a video feed. The device generates a simplified model of the target image, and stores the first virtual object definition associated with the target image data and the simplified model of the target image. The device uploads the first virtual object definition and the target image data, so the first virtual object is discoverable by a second mobile device.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: October 30, 2018
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Manish Sharma, Anand Prakash, Devin Blong, Qing Zhang, Chetan Mugur Nagaraj, Srivatsan Rangarajan, Eric Miller
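    This grant describes a server that receives a virtual object definition and target image data from a first device and uploads it so a second device can discover it. A rough sketch of that upload/discover flow, assuming a simple radius query by geographic location (the directory class and distance threshold are illustrative assumptions, not claim language):

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) points."""
        r = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    class ObjectDirectory:
        """Server-side directory: a first device uploads an object definition
        with its target image data; a second device discovers nearby objects."""

        def __init__(self):
            self._entries = []  # (lat, lon, object_definition)

        def upload(self, lat, lon, object_definition):
            self._entries.append((lat, lon, object_definition))

        def discover(self, lat, lon, radius_m=50.0):
            """Return object definitions uploaded within radius_m of the query point."""
            return [obj for elat, elon, obj in self._entries
                    if haversine_m(lat, lon, elat, elon) <= radius_m]
    ```

    In the patent's terms, `upload` would also store the simplified model of the target image so the second device can confirm a visual match before presenting the object; that matching step is omitted here.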
  • Patent number: 10109073
    Abstract: A mobile device stores a target image and target image data collected contemporaneously with the target image. The mobile device receives a reference position indication that corresponds to the target image data and receives a video feed from a camera while the mobile device is in the reference position. The mobile device detects a match between a first image from the video feed and the target image, unlocks an augmented reality space, and instructs presentation of a virtual object within the augmented reality space. The mobile device receives sensor data and a continuing video feed from the camera, compares a second image from the continuing video feed with the first image, and identifies common features in the first and second images. The mobile device detects a location change based on the sensor data and changes in the common features between the first and second images.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: October 23, 2018
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Manish Sharma, Anand Prakash, Devin Blong, Qing Zhang, Chetan Mugur Nagaraj, Srivatsan Rangarajan
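    The abstract above detects a location change from two signals: displacement of common features between consecutive frames, and motion sensor data. A simplified sketch of that fusion step, assuming features have already been matched by id (the function names, pixel representation, and thresholds are hypothetical):

    ```python
    def estimate_shift(features_a, features_b):
        """Estimate the apparent view shift as the mean displacement of common features.

        features_a / features_b: dicts mapping feature id -> (x, y) pixel position
        in the first and second frames. Only ids present in both frames are used.
        """
        common = features_a.keys() & features_b.keys()
        if not common:
            return None  # no common features: the shift cannot be estimated
        dx = sum(features_b[i][0] - features_a[i][0] for i in common) / len(common)
        dy = sum(features_b[i][1] - features_a[i][1] for i in common) / len(common)
        return (dx, dy)

    def detect_location_change(shift, accel_magnitude, pixel_tol=2.0, accel_tol=0.5):
        """Flag a location change only when image features and motion sensors agree."""
        if shift is None:
            return False
        moved_in_image = (shift[0] ** 2 + shift[1] ** 2) ** 0.5 > pixel_tol
        moved_per_sensors = accel_magnitude > accel_tol
        return moved_in_image and moved_per_sensors
    ```

    Requiring both cues to agree is one plausible reading of "detects a location change based on the sensor data and changes in the common features"; it guards against false positives from sensor noise alone or from moving objects in the scene alone.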
  • Publication number: 20180082117
    Abstract: Systems described herein apply visual computer-generated elements into real-world images with an appearance of depth by using information available via conventional mobile devices. The systems receive a reference image and reference image data collected contemporaneously with the reference image. The reference image data includes a geo-location, a direction heading, and a tilt. The systems identify one or more features within the reference image and receive a user's selection of a foreground feature from the one or more features. The systems receive a virtual object definition that includes an object type, a size, and an overlay position of the virtual object relative to the foreground feature. The virtual object is provided in a virtual layer that appears behind the foreground feature. The systems store, in a memory, the reference image data associated with the virtual object definition.
    Type: Application
    Filed: September 21, 2016
    Publication date: March 22, 2018
    Inventors: Manish Sharma, Anand Prakash, Devin Blong, Qing Zhang, Chetan Mugur Nagaraj, Srivatsan Rangarajan
  • Publication number: 20180082475
    Abstract: Systems described herein allow for placement and presentation of virtual objects using mobile devices with a single camera lens. A device receives, from a first mobile device, a target image captured from a camera and target image data collected contemporaneously with the target image. The target image data includes a geographic location, a direction heading, and a tilt. The device receives, from the first mobile device, a first virtual object definition that includes an object type, a size, and a mobile device orientation for presenting a first virtual object within a video feed. The device generates a simplified model of the target image, and stores the first virtual object definition associated with the target image data and the simplified model of the target image. The device uploads the first virtual object definition and the target image data, so the first virtual object is discoverable by a second mobile device.
    Type: Application
    Filed: September 21, 2016
    Publication date: March 22, 2018
    Inventors: Manish Sharma, Anand Prakash, Devin Blong, Qing Zhang, Chetan Mugur Nagaraj, Srivatsan Rangarajan, Eric Miller
  • Publication number: 20180082430
    Abstract: A mobile device stores a target image and target image data collected contemporaneously with the target image. The mobile device receives a reference position indication that corresponds to the target image data and receives a video feed from a camera while the mobile device is in the reference position. The mobile device detects a match between a first image from the video feed and the target image, unlocks an augmented reality space, and instructs presentation of a virtual object within the augmented reality space. The mobile device receives sensor data and a continuing video feed from the camera, compares a second image from the continuing video feed with the first image, and identifies common features in the first and second images. The mobile device detects a location change based on the sensor data and changes in the common features between the first and second images.
    Type: Application
    Filed: September 21, 2016
    Publication date: March 22, 2018
    Inventors: Manish Sharma, Anand Prakash, Devin Blong, Qing Zhang, Chetan Mugur Nagaraj, Srivatsan Rangarajan