Patents by Inventor Amitabh Varshney

Amitabh Varshney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11087549
    Abstract: Systems, methods, apparatuses, and computer program products are provided for creating freely explorable, dynamic, and photorealistic virtual environments, reconstructing view-dependent holograms in real time, and inserting 3D virtual objects into a 360-camera-based navigable environment. A method may include simultaneously capturing 360 video data and audio data from a plurality of viewpoints within a real-world environment. The method may also include preprocessing and compressing the 360 video data and the audio data into a three-dimensional representation suitable for display. The method may further include rendering a virtual environment of the real-world environment. In addition, the method may include creating a blended virtual environment by combining the captured 360 video data and the audio data with the rendered virtual environment. Further, the method may include displaying the blended virtual environment in a display apparatus of a user.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: August 10, 2021
    Assignee: University of Maryland, College Park
    Inventors: Amitabh Varshney, Eric C. Lee, Sida Li
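The abstract's blending step, combining captured 360 video with a rendered virtual environment, can be illustrated with a minimal per-pixel compositing sketch. This is not the patented method, only an assumed alpha-blend formulation; the function name and mask convention are hypothetical.

```python
import numpy as np

def blend_environment(captured, rendered, mask):
    """Blend a captured 360 video frame with a rendered virtual frame.

    captured, rendered: float arrays of shape (H, W, 3) in [0, 1].
    mask: float array of shape (H, W) in [0, 1]; 1.0 keeps the captured
    pixel, 0.0 keeps the rendered (virtual) pixel.
    """
    alpha = mask[..., np.newaxis]            # broadcast over color channels
    return alpha * captured + (1.0 - alpha) * rendered

# Tiny 1x2 frame: left pixel kept from capture, right pixel from render.
cap = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])   # red capture
ren = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue render
msk = np.array([[1.0, 0.0]])
out = blend_environment(cap, ren, msk)
```

In practice the mask would come from the preprocessing step (for example, a per-viewpoint visibility or depth test), not be hand-authored as here.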
  • Patent number: 11087553
    Abstract: An end-user system in accordance with the present disclosure includes a communication device configured to communicate with a server, a display screen, one or more processors, and at least one memory storing instructions which, when executed by the processor(s), cause the end-user system to access a physical world geographical location from a user, access two-dimensional physical world map data for a region surrounding the physical world geographical location, render for display on the display screen a three-dimensional mirrored world portion based on the two-dimensional physical world map data and render an avatar at a mirrored world location corresponding to the physical world geographical location, access geotagged social media posts which have geotags in the region and which the user is permitted to view, and render the geotagged social media posts as three-dimensional objects in the mirrored world portion.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: August 10, 2021
    Assignee: University of Maryland, College Park
    Inventors: Amitabh Varshney, Ruofei Du
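Rendering an avatar "at a mirrored world location corresponding to the physical world geographical location" requires mapping latitude/longitude to local scene coordinates. A minimal sketch, assuming a simple equirectangular approximation around a scene origin (the function name and axis convention are illustrative, not from the patent):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in meters

def geo_to_mirrored(lat, lon, origin_lat, origin_lon):
    """Map a geographic coordinate to a local (x, z) position in meters.

    Uses an equirectangular approximation around the origin, which is
    adequate for the city-scale region a mirrored-world portion covers.
    """
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat))
    z = EARTH_RADIUS_M * d_lat
    return x, z

# A point ~0.001 degrees north of the origin lands about 111 m "north".
x, z = geo_to_mirrored(38.990, -76.936, 38.989, -76.936)
```

Geotagged social media posts could be placed with the same mapping before being rendered as three-dimensional objects.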
  • Publication number: 20200219323
    Abstract: An end-user system in accordance with the present disclosure includes a communication device configured to communicate with a server, a display screen, one or more processors, and at least one memory storing instructions which, when executed by the processor(s), cause the end-user system to access a physical world geographical location from a user, access two-dimensional physical world map data for a region surrounding the physical world geographical location, render for display on the display screen a three-dimensional mirrored world portion based on the two-dimensional physical world map data and render an avatar at a mirrored world location corresponding to the physical world geographical location, access geotagged social media posts which have geotags in the region and which the user is permitted to view, and render the geotagged social media posts as three-dimensional objects in the mirrored world portion.
    Type: Application
    Filed: January 3, 2020
    Publication date: July 9, 2020
    Inventors: Amitabh Varshney, Ruofei Du
  • Publication number: 20200118342
    Abstract: Systems, methods, apparatuses, and computer program products are provided for creating freely explorable, dynamic, and photorealistic virtual environments, reconstructing view-dependent holograms in real time, and inserting 3D virtual objects into a 360-camera-based navigable environment. A method may include simultaneously capturing 360 video data and audio data from a plurality of viewpoints within a real-world environment. The method may also include preprocessing and compressing the 360 video data and the audio data into a three-dimensional representation suitable for display. The method may further include rendering a virtual environment of the real-world environment. In addition, the method may include creating a blended virtual environment by combining the captured 360 video data and the audio data with the rendered virtual environment. Further, the method may include displaying the blended virtual environment in a display apparatus of a user.
    Type: Application
    Filed: October 15, 2019
    Publication date: April 16, 2020
    Inventors: Amitabh Varshney, Eric C. Lee, Sida Li
  • Publication number: 20190350671
    Abstract: Systems, methods, apparatuses, and computer program products for visualizing a catheter inserted into an object are provided. One method may include detecting movement of the catheter as it is being inserted into the object, and calculating a location of an area of the catheter that is embedded in the object. The method may also include generating a virtual image of the embedded portion of the catheter based on the detected movement of the catheter and location of the area of the catheter that is embedded. The method may further include transmitting the virtual image of the embedded area of the catheter to a display unit. Further, the method may include overlaying the virtual image in a user's field of view in the display unit to mimic the position of the entire catheter including the area embedded in the object.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 21, 2019
    Inventors: Amitabh Varshney, Xuetong Sun, Sarah Murthi, Gary Schwartzbauer
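The "calculating a location of an area of the catheter that is embedded" step can be pictured as dead reckoning: accumulating detected per-frame advances past the entry point. A minimal sketch under that assumption (the function name, units, and clamping behavior are illustrative, not from the patent):

```python
def embedded_length(movements, total_length):
    """Estimate how much of a catheter is embedded in an object.

    movements: per-frame detected advances in cm (negative values mean
    withdrawal), measured once the tip has crossed the entry point.
    The embedded length is the accumulated advance, clamped to the
    physically possible range [0, total_length].
    """
    depth = 0.0
    for advance in movements:
        depth = min(max(depth + advance, 0.0), total_length)
    return depth

# Advance 3 cm, then 2 cm, then withdraw 1 cm: 4 cm of a 30 cm catheter
# is embedded, so a 4 cm virtual segment would be overlaid on the display.
depth = embedded_length([3.0, 2.0, -1.0], 30.0)
```

The resulting length would drive how much of the virtual catheter image is drawn in the user's field of view.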
  • Patent number: 10380726
    Abstract: Disclosed are systems, devices, and methods for creating a rendering of real-world locations with embedded multimedia elements. An exemplary method includes receiving image data of a real-world location, identifying geographic coordinates of the real-world location and/or a point of view from which the image data was acquired, acquiring multimedia elements relevant to the real-world location based on the geographic coordinates and/or the point of view, and creating a rendering of the image data with the multimedia elements embedded therein.
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: August 13, 2019
    Assignee: University of Maryland, College Park
    Inventors: Ruofei Du, Amitabh Varshney
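"Acquiring multimedia elements relevant to the real-world location based on the geographic coordinates" amounts to a geospatial proximity query. A minimal sketch using the standard haversine great-circle distance (the record layout and radius parameter are assumptions for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def relevant_elements(elements, lat, lon, radius_m):
    """Keep multimedia elements geotagged within radius_m of (lat, lon)."""
    return [e for e in elements
            if haversine_m(e["lat"], e["lon"], lat, lon) <= radius_m]

posts = [
    {"id": "near", "lat": 38.9860, "lon": -76.9426},
    {"id": "far",  "lat": 39.2904, "lon": -76.6122},  # tens of km away
]
hits = relevant_elements(posts, 38.9859, -76.9425, 500.0)
```

A real system would also use the point of view (viewing direction) to cull elements behind the camera, which this distance-only sketch omits.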
  • Publication number: 20180322689
    Abstract: Various communication systems may benefit from improved depth perception of images. For example, it may be helpful to enhance depth perception of a three-dimensional rendering of an image. A method, according to certain embodiments, may include acquiring at an apparatus a pair of images. The method may also include computing an energy map based on one or more kinetic parameters of the pair of images. In addition, the method may include generating a kinetic depth image based on the energy map.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 8, 2018
    Inventors: Sujal BISTA, Amitabh VARSHNEY
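The abstract's "energy map based on one or more kinetic parameters of the pair of images" can be sketched by standing in the simplest possible kinetic parameter: the per-pixel temporal difference between the two frames, combined with spatial gradient magnitude. This is an assumed formulation, not the patented one:

```python
import numpy as np

def energy_map(img_a, img_b):
    """Compute a simple energy map from a grayscale image pair.

    temporal: per-pixel change between the frames (motion energy).
    spatial: gradient magnitude of the first frame (structural energy).
    """
    temporal = np.abs(img_b - img_a)
    gy, gx = np.gradient(img_a)        # derivatives along rows, columns
    spatial = np.hypot(gx, gy)
    return temporal + spatial

a = np.zeros((4, 4))
b = np.zeros((4, 4))
b[1, 1] = 1.0                          # one pixel "moved" between frames
e = energy_map(a, b)
```

The resulting map could then guide where kinetic depth cues are strongest when generating the kinetic depth image.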
  • Publication number: 20180075591
    Abstract: Disclosed are systems, devices, and methods for creating a rendering of real-world locations with embedded multimedia elements. An exemplary method includes receiving image data of a real-world location, identifying geographic coordinates of the real-world location and/or a point of view from which the image data was acquired, acquiring multimedia elements relevant to the real-world location based on the geographic coordinates and/or the point of view, and creating a rendering of the image data with the multimedia elements embedded therein.
    Type: Application
    Filed: March 21, 2016
    Publication date: March 15, 2018
    Inventors: Ruofei Du, Amitabh Varshney
  • Patent number: 8243068
    Abstract: A method, system and apparatus for determining and modifying saliency of a visual medium are provided. The method, system and apparatus may obtain saliency values for a visual medium based on a plurality of visual channels. The saliency values may be obtained based on at least one of computer-generated modeling, user-specified input and eye-tracking. The method, system and apparatus may aggregate the obtained saliency values and classify regions of the visual medium based on the aggregated saliency values. The visual channels may include one or more of absolute mean curvature, a gradient of mean curvature, a gradient of color intensity, color luminance, color opponency, color saturation, lighting and focus. When calculating mean curvature, the method, system and apparatus may calculate a change in mean curvature for a plurality of vertices around a region and displace the vertices in accordance with the calculated change in mean curvature to change a saliency of the region.
    Type: Grant
    Filed: May 19, 2008
    Date of Patent: August 14, 2012
    Assignee: University of Maryland
    Inventors: Amitabh Varshney, Youngmin Kim, Cheuk Yiu Ip
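The aggregate-then-classify step described in the abstract can be sketched as a weighted combination of per-channel saliency maps followed by thresholding. The weights and threshold below are arbitrary illustrative values, not the patent's:

```python
import numpy as np

def aggregate_saliency(channels, weights):
    """Weighted average of per-channel saliency maps (all the same shape)."""
    total = sum(w * c for c, w in zip(channels, weights))
    return total / sum(weights)

def classify_regions(saliency, threshold=0.5):
    """Boolean mask marking high-saliency regions."""
    return saliency >= threshold

# Two hypothetical channels: mean curvature and color luminance.
curvature = np.array([[0.9, 0.1], [0.1, 0.1]])
luminance = np.array([[0.7, 0.2], [0.3, 0.1]])
agg = aggregate_saliency([curvature, luminance], [2.0, 1.0])
mask = classify_regions(agg)
```

Channels such as color opponency, saturation, lighting, and focus would simply be additional entries in the `channels` list; the curvature-displacement step for *modifying* saliency operates on mesh vertices and is not shown here.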
  • Patent number: 8219670
    Abstract: A multifunctional interaction system which is capable of spatio-temporal context localization of users and of communication of audio/video streams to an entity of interest defined by the user, includes a communication domain supporting a predefined localization service, a server associated with the communication domain, client devices, and a dynamically changing context database which is customized in accord with the dynamics of interaction sessions of client devices with the server. The client communicates with the system to either request services therefrom or to send a message to the entity of interest. The system is provided with a panic alert mechanism which, upon actuation, transmits an audio/video data stream along with the client location tag, time stamp, and client ID, to a police precinct for prompt action.
    Type: Grant
    Filed: November 10, 2008
    Date of Patent: July 10, 2012
    Assignee: University of Maryland
    Inventors: Ashok K. Agrawala, Amitabh Varshney, Christian B. Almazan
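The panic alert described in the abstract bundles an A/V stream reference with a location tag, a time stamp, and a client ID. A minimal sketch of such a payload (the JSON field names and URL are hypothetical, not from the patent):

```python
import json
import time

def panic_alert(client_id, lat, lon, stream_url):
    """Assemble a panic-alert payload: A/V stream reference plus the
    client's location tag, time stamp, and client ID, as a JSON string."""
    return json.dumps({
        "client_id": client_id,
        "location": {"lat": lat, "lon": lon},
        "timestamp": int(time.time()),
        "stream": stream_url,
    })

msg = panic_alert("client-42", 38.9869, -76.9426, "rtsp://example.invalid/cam")
```

In the described system, a message like this would be transmitted to a police precinct upon actuation of the panic mechanism.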
  • Publication number: 20090125584
    Abstract: A multifunctional interaction system which is capable of spatio-temporal context localization of users and of communication of audio/video streams to an entity of interest defined by the user, includes a communication domain supporting a predefined localization service, a server associated with the communication domain, client devices, and a dynamically changing context database which is customized in accord with the dynamics of interaction sessions of client devices with the server. The client communicates with the system to either request services therefrom or to send a message to the entity of interest. The system is provided with a panic alert mechanism which, upon actuation, transmits an audio/video data stream along with the client location tag, time stamp, and client ID, to a police precinct for prompt action.
    Type: Application
    Filed: November 10, 2008
    Publication date: May 14, 2009
    Applicant: University of Maryland
    Inventors: Ashok K. Agrawala, Amitabh Varshney, Christian B. Almazan
  • Publication number: 20090092314
    Abstract: A method, system and apparatus for determining and modifying saliency of a visual medium are provided. The method, system and apparatus may obtain saliency values for a visual medium based on a plurality of visual channels. The saliency values may be obtained based on at least one of computer-generated modeling, user-specified input and eye-tracking. The method, system and apparatus may aggregate the obtained saliency values and classify regions of the visual medium based on the aggregated saliency values. The visual channels may include one or more of absolute mean curvature, a gradient of mean curvature, a gradient of color intensity, color luminance, color opponency, color saturation, lighting and focus. When calculating mean curvature, the method, system and apparatus may calculate a change in mean curvature for a plurality of vertices around a region and displace the vertices in accordance with the calculated change in mean curvature to change a saliency of the region.
    Type: Application
    Filed: May 19, 2008
    Publication date: April 9, 2009
    Inventors: Amitabh Varshney, Youngmin Kim, Cheuk Yiu Ip
  • Patent number: 6407679
    Abstract: A system and method for entering text in a virtual environment using sensory gloves. The user enters a key that represents one or more letters by simulating a keyboard press while wearing the gloves. The user calibrates the gloves by entering text, during which time the system establishes threshold values that represent simulated presses for each finger. After this initial calibration, the user enters text with simulated finger presses. The system distinguishes which movements are intended as simulated finger presses by examining the relative motions of fingers and maintaining dynamic thresholds. Errors are mitigated by providing feedback to the user, such as beeps and a visual display of the fingers and the current text. Because a key may represent more than one character, the system determines the intended text by probabilistic analysis and the Viterbi algorithm.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: June 18, 2002
    Assignee: The Research Foundation of the State University of New York
    Inventors: Francine Evans, Steven Skiena, Amitabh Varshney
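Because each glove key maps to several letters, decoding resembles T9-style disambiguation, which is a natural fit for the Viterbi algorithm the abstract names. A minimal sketch using a toy letter-bigram model (the probabilities and key-to-letter assignments below are invented for illustration):

```python
def viterbi_decode(key_presses, bigram):
    """Pick the most probable letter sequence for multi-letter key presses.

    key_presses: list of strings, each holding the letters its key may
    produce.  bigram(prev, cur) returns P(cur | prev), with '^' as the
    start-of-word symbol.
    """
    # paths maps the last decoded letter -> (path probability, decoded text)
    paths = {"^": (1.0, "")}
    for letters in key_presses:
        paths = {
            cur: max((p * bigram(prev, cur), text + cur)
                     for prev, (p, text) in paths.items())
            for cur in letters
        }
    return max(paths.values())[1]

def toy_bigram(prev, cur):
    # Hypothetical probabilities; a real system would learn these from text.
    return {("^", "h"): 0.6, ("^", "g"): 0.4, ("h", "i"): 0.9}.get((prev, cur), 0.1)

# Two presses, on keys carrying "gh" and "ij", decode to the likeliest word.
word = viterbi_decode(["gh", "ij"], toy_bigram)
```

Keeping only the best path per last letter at each step is what makes this Viterbi rather than brute-force enumeration of every letter combination.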