Patents by Inventor Lama Hewage Ravi Prathapa Chandrasiri

Lama Hewage Ravi Prathapa Chandrasiri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190182471
    Abstract: An exemplary virtual reality media provider system differentiates static objects depicted in two-dimensional video data from dynamic objects depicted in the two-dimensional video data. Based on the differentiating of the static objects from the dynamic objects, the virtual reality media provider system generates dynamic volumetric models of the surfaces of the static objects and the dynamic objects. The virtual reality media provider system updates the dynamic volumetric models of the surfaces of the static objects with a lower regularity or on an as-needed basis, and the virtual reality media provider system separately updates the dynamic volumetric models of the surfaces of the dynamic objects with a higher regularity. The higher regularity is higher than the lower regularity and keeps the dynamic volumetric models of the surfaces of the dynamic objects up-to-date with what is occurring in the two-dimensional video data. Corresponding methods and systems are also disclosed.
    Type: Application
    Filed: February 20, 2019
    Publication date: June 13, 2019
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
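The dual-cadence update strategy in this abstract (dynamic objects refreshed with higher regularity, static objects with lower regularity or as needed) can be sketched roughly as follows. This is a minimal illustrative sketch, not the patented implementation; all names and the period-based scheduling are assumptions.

```python
def plan_updates(frame_index, static_ids, dynamic_ids, static_period=30):
    """Return the set of volumetric models to re-mesh for this frame.

    Dynamic-object models are refreshed every frame (higher regularity);
    static-object models only every `static_period` frames (lower regularity).
    """
    to_update = set(dynamic_ids)              # higher regularity: every frame
    if frame_index % static_period == 0:      # lower regularity: periodic
        to_update.update(static_ids)
    return to_update
```

In practice the lower-regularity path might also be triggered on demand (e.g., when a static surface is first observed) rather than on a fixed period.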
  • Publication number: 20190156565
    Abstract: An exemplary virtual reality media provider system receives two-dimensional (“2D”) video data for surfaces of first and second objects located in a natural setting. The 2D video data is captured by first and second capture devices disposed at different positions with respect to the objects. The system distinguishes the first object from the second object by performing a plurality of techniques in combination with one another. The plurality of techniques include determining that the first object is moving in relation to the second object; and determining that, from a vantage point of at least one of the different positions, a representation of the first object captured within the 2D video data does not overlap with a representation of the second object. Based on the received 2D video data and the distinguishing of the first and second objects, the system generates an individually-manipulable volumetric model of the first object.
    Type: Application
    Filed: December 27, 2018
    Publication date: May 23, 2019
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
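The combination of techniques described here (relative motion plus non-overlap from at least one vantage point) can be illustrated with a simple predicate. This is a hedged sketch under assumed data shapes (per-vantage-point 2D bounding boxes), not the claimed method.

```python
def is_separate_object(displacement, boxes_a, boxes_b, motion_threshold=0.05):
    """Treat object A as distinct from object B when A moves relative to B
    AND, from at least one capture device's vantage point, their 2D
    representations do not overlap.

    `boxes_a` / `boxes_b` are per-vantage-point (x0, y0, x1, y1) tuples.
    """
    moving = displacement > motion_threshold

    def overlaps(a, b):
        # Axis-aligned rectangles overlap unless one lies fully past the other.
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    non_overlapping_somewhere = any(
        not overlaps(a, b) for a, b in zip(boxes_a, boxes_b))
    return moving and non_overlapping_somewhere
```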
  • Publication number: 20190146585
    Abstract: An exemplary virtual object presentation system accesses depth data for surfaces of a three-dimensional (“3D”) virtual object. The system determines, based on the depth data, a set of element configuration operations that, when performed by field formation elements included within an array of reconfigurable field formation elements, form a field in accordance with the depth data. Specifically, the field formed is to be haptically perceptible to a user using a field perception apparatus. The system may direct the field formation elements included within the array of reconfigurable field formation elements to perform the set of element configuration operations to thereby form the haptically perceptible field. In this way, the array of reconfigurable field formation elements generates a haptically perceptible virtual object representative of the 3D virtual object for perception by the user using the field perception apparatus. Corresponding methods are also disclosed.
    Type: Application
    Filed: November 14, 2017
    Publication date: May 16, 2019
    Inventors: Mohammad Raheel Khalid, Syed Meeran Kamal, Lama Hewage Ravi Prathapa Chandrasiri
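The mapping from depth data to element configuration operations might look like the following sketch, which assumes a pin-array-style field where nearer surfaces drive greater element extension. The representation is hypothetical; the patent does not specify this encoding.

```python
def element_operations(depth_map, max_depth=1.0, max_extension=10):
    """Convert a 2D grid of depth samples into per-element extension levels
    for an array of reconfigurable field formation elements.

    Nearer surface points (smaller depth) map to greater extension, so the
    formed field is haptically perceptible in accordance with the depth data.
    """
    ops = []
    for row in depth_map:
        ops.append([
            round((1.0 - min(d, max_depth) / max_depth) * max_extension)
            for d in row
        ])
    return ops
```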
  • Patent number: 10271033
    Abstract: An exemplary depth data generation system (“system”) accesses a first depth map and a second depth map of surfaces of objects included in a real-world scene. The first and second depth maps are captured independently from one another. The system converges the first and second depth maps into a converged depth map of the surfaces of the objects included in the real-world scene. More specifically, the converging comprises assigning a first confidence value to a first depth data point in the first depth map, assigning a second confidence value to a second depth data point in the second depth map, and generating a third depth data point representing a same particular physical point as the first and second depth data points based on the first and second confidence values and on at least one of the first depth data point and the second depth data point.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: April 23, 2019
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Syed Meeran Kamal, Steven L. Smith, Yongsheng Pan, Sergey Virodov, Jonathan A. Globerson, Lama Hewage Ravi Prathapa Chandrasiri, Mohammad Raheel Khalid
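The convergence step described in this abstract amounts to confidence-weighted fusion of two independently captured depth values for the same physical point. A minimal sketch, assuming confidence values act as linear weights (the patent does not fix the weighting scheme):

```python
def converge(depth_1, confidence_1, depth_2, confidence_2):
    """Generate a third depth data point for the same physical point from
    two independent depth data points and their confidence values.

    Uses a confidence-weighted average; returns None when neither source
    offers any confidence.
    """
    total = confidence_1 + confidence_2
    if total == 0:
        return None
    return (depth_1 * confidence_1 + depth_2 * confidence_2) / total
```

With equal confidences this reduces to a plain average; as one confidence dominates, the fused point approaches that source's depth value.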
  • Patent number: 10257490
    Abstract: An exemplary virtual reality media provider system (“system”) includes a configuration of synchronous video and depth capture devices disposed at fixed positions at a real-world event. In real time, the video and depth capture devices capture two-dimensional video data and depth data for surfaces of objects at the real-world event. The system generates a real-time volumetric data stream representative of a dynamic volumetric model of the surfaces of the objects at the real-world event in real time based on the captured two-dimensional video data and captured depth data. The dynamic volumetric model of the surfaces of the objects at the real-world event is configured to be used to generate virtual reality media content representative of the real-world event as experienced from a dynamically selectable viewpoint corresponding to an arbitrary location at the real-world event and selected by a user experiencing the real-world event using a media player device.
    Type: Grant
    Filed: April 28, 2016
    Date of Patent: April 9, 2019
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
  • Patent number: 10204444
Abstract: An exemplary virtual reality media provider system (“system”) includes a configuration of synchronous video and depth capture devices disposed at fixed positions in a vicinity of a first object located in a natural setting along with one or more additional objects. The video and depth capture devices capture two-dimensional video data and depth data for a surface of the first object. The system distinguishes the first object from a second object included in the one or more additional objects located in the natural setting and generates an individually-manipulable volumetric model of the first object. The individually-manipulable volumetric model of the first object is configured to be individually manipulated with respect to an immersive virtual reality world while a user of a media player device is experiencing the immersive virtual reality world using the media player device.
    Type: Grant
    Filed: April 28, 2016
    Date of Patent: February 12, 2019
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
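What "individually-manipulable" means in practice can be illustrated by transforming one object's model without touching any other model in the virtual world. A simple sketch with vertices as (x, y, z) tuples; the actual model representation in the patent is unspecified.

```python
def translate_model(model_vertices, offset):
    """Move a single object's volumetric model within the immersive world,
    leaving every other model unchanged.

    `model_vertices` is a list of (x, y, z) tuples; `offset` is an
    (x, y, z) displacement.
    """
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for (x, y, z) in model_vertices]
```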
  • Publication number: 20180309979
Abstract: An exemplary depth capture system emits a first structured light pattern onto a surface of an object included in a real-world scene using a first structured light emitter included within the depth capture system. The depth capture system also emits, concurrently with the emitting of the first structured light pattern, a second structured light pattern onto the surface of the object using a second structured light emitter included within the depth capture system. The emitting of the second structured light pattern is different from the emitting of the first structured light pattern. The depth capture system detects the first and second structured light patterns using one or more optical sensors included within the depth capture system. Based on the detection of the structured light patterns, the depth capture system generates depth data representative of the surface of the object included in the real-world scene.
    Type: Application
    Filed: June 27, 2018
    Publication date: October 25, 2018
    Inventors: Steven L. Smith, Syed Meeran Kamal, Yongsheng Pan, Lama Hewage Ravi Prathapa Chandrasiri, Sergey Virodov, Mohammad Raheel Khalid
  • Patent number: 10033988
Abstract: An exemplary depth capture system (“system”) emits, from a first fixed position with respect to a real-world scene and within a first frequency band, a first structured light pattern onto surfaces of objects included in the real-world scene. The system also emits, from a second fixed position with respect to the real-world scene and within a second frequency band, a second structured light pattern onto the surfaces of the objects. The system detects the first and second structured light patterns using one or more optical sensors by way of first and second optical filters, respectively. The first and second optical filters are each configured to only pass one of the structured light patterns and to block the other. Based on the detection of the structured light patterns, the system generates depth data representative of the surfaces of the objects included in the real-world scene.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: July 24, 2018
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Steven L. Smith, Syed Meeran Kamal, Yongsheng Pan, Lama Hewage Ravi Prathapa Chandrasiri, Sergey Virodov, Mohammad Raheel Khalid
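The role of the optical filters, each passing one structured light pattern and blocking the other, can be sketched as a passband test over detected returns. Wavelength values and data shapes below are illustrative assumptions, not from the patent.

```python
def filter_detections(detections, passband):
    """Keep only structured-light returns whose wavelength falls inside this
    sensor's optical passband, blocking the other emitter's pattern.

    `detections` is a list of (wavelength_nm, sample) pairs; `passband` is an
    inclusive (low_nm, high_nm) range.
    """
    low, high = passband
    return [sample for wavelength, sample in detections if low <= wavelength <= high]
```

Two sensors with disjoint passbands (e.g., one around each emitter's frequency band) would then each observe only their own pattern, even though both patterns illuminate the scene concurrently.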
  • Publication number: 20180124382
Abstract: An exemplary depth capture system (“system”) emits, from a first fixed position with respect to a real-world scene and within a first frequency band, a first structured light pattern onto surfaces of objects included in the real-world scene. The system also emits, from a second fixed position with respect to the real-world scene and within a second frequency band, a second structured light pattern onto the surfaces of the objects. The system detects the first and second structured light patterns using one or more optical sensors by way of first and second optical filters, respectively. The first and second optical filters are each configured to only pass one of the structured light patterns and to block the other. Based on the detection of the structured light patterns, the system generates depth data representative of the surfaces of the objects included in the real-world scene.
    Type: Application
    Filed: October 31, 2016
    Publication date: May 3, 2018
    Inventors: Steven L. Smith, Syed Meeran Kamal, Yongsheng Pan, Lama Hewage Ravi Prathapa Chandrasiri, Sergey Virodov, Mohammad Raheel Khalid
  • Publication number: 20180124371
    Abstract: An exemplary depth data generation system (“system”) accesses a first depth map and a second depth map of surfaces of objects included in a real-world scene. The first and second depth maps are captured independently from one another. The system converges the first and second depth maps into a converged depth map of the surfaces of the objects included in the real-world scene. More specifically, the converging comprises assigning a first confidence value to a first depth data point in the first depth map, assigning a second confidence value to a second depth data point in the second depth map, and generating a third depth data point representing a same particular physical point as the first and second depth data points based on the first and second confidence values and on at least one of the first depth data point and the second depth data point.
    Type: Application
    Filed: October 31, 2016
    Publication date: May 3, 2018
    Inventors: Syed Meeran Kamal, Steven L. Smith, Yongsheng Pan, Sergey Virodov, Jonathan A. Globerson, Lama Hewage Ravi Prathapa Chandrasiri, Mohammad Raheel Khalid
  • Publication number: 20180091866
    Abstract: A first communication device communicatively coupled with a second communication device by way of a first network interface and by way of a second network interface parallel to the first network interface prepares object data in accordance with a data partitioning protocol for transmission to the second communication device. The first communication device transmits the prepared object data to the second communication device at an overall data transfer rate that is at least as great as a sum of first and second data transfer rates associated, respectively, with the first and second network interfaces by concurrently transmitting first and second portions of the prepared object data by way of the first and second network interfaces and at the first and second data transfer rates, respectively. Corresponding methods and devices for receiving concurrently transmitted object data by way of parallel network interfaces are also disclosed.
    Type: Application
    Filed: September 23, 2016
    Publication date: March 29, 2018
    Inventors: Dan Sun, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Mohammad Raheel Khalid, Christian Egeler
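The data partitioning protocol described here splits the object data so the two parallel interfaces finish at roughly the same time, yielding an overall rate near the sum of the per-interface rates. A minimal sketch of a rate-proportional split (the actual protocol is not specified in the abstract):

```python
def partition(data, rate_a, rate_b):
    """Split a byte payload between two parallel network interfaces in
    proportion to their data transfer rates.

    Transmitting the two portions concurrently at rates `rate_a` and `rate_b`
    takes about len(data) / (rate_a + rate_b) time units, i.e., the combined
    link behaves like one interface at the summed rate.
    """
    split = len(data) * rate_a // (rate_a + rate_b)
    return data[:split], data[split:]
```

The receiver would reassemble the portions in order, which is the reverse operation the abstract's "corresponding methods and devices for receiving" cover.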
  • Publication number: 20170316608
    Abstract: An exemplary method includes a media player device (“device”) providing a user with an immersive virtual reality experience in accordance with a specification file corresponding to the immersive virtual reality experience. The specification file includes data that defines a plurality of elements included in the immersive virtual reality experience by providing a plurality of links for use by the device in acquiring the plurality of elements while providing the user with the immersive virtual reality experience. The method further includes the device detecting, while the immersive virtual reality experience is being provided to the user, real-world input associated with the user, and integrating the real-world input into the immersive virtual reality experience by updating the specification file to further include data that defines the real-world input as a user-specific element that is specific to the user and that is included in the immersive virtual reality experience.
    Type: Application
    Filed: April 28, 2016
    Publication date: November 2, 2017
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
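Updating the specification file to add a user-specific element might look like the following sketch. The dict shape (an `elements` list of link entries) is an assumption for illustration; the patent's file format is not disclosed in the abstract.

```python
def integrate_input(spec, user_id, input_link):
    """Return a copy of the experience's specification file extended with the
    detected real-world input as an element specific to this user.

    `spec` is assumed to be a dict with an "elements" list of link entries;
    the original spec is left unmodified.
    """
    updated = dict(spec)
    updated["elements"] = spec["elements"] + [{
        "link": input_link,        # where the device acquires the element
        "user_specific": True,     # element applies only to this user
        "user": user_id,
    }]
    return updated
```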
  • Publication number: 20170316606
Abstract: An exemplary virtual reality media provider system (“system”) includes a configuration of synchronous video and depth capture devices disposed at fixed positions in a vicinity of a first object located in a natural setting along with one or more additional objects. The video and depth capture devices capture two-dimensional video data and depth data for a surface of the first object. The system distinguishes the first object from a second object included in the one or more additional objects located in the natural setting and generates an individually-manipulable volumetric model of the first object. The individually-manipulable volumetric model of the first object is configured to be individually manipulated with respect to an immersive virtual reality world while a user of a media player device is experiencing the immersive virtual reality world using the media player device.
    Type: Application
    Filed: April 28, 2016
    Publication date: November 2, 2017
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
  • Publication number: 20170318275
    Abstract: An exemplary virtual reality media provider system (“system”) includes a configuration of synchronous video and depth capture devices disposed at fixed positions at a real-world event. In real time, the video and depth capture devices capture two-dimensional video data and depth data for surfaces of objects at the real-world event. The system generates a real-time volumetric data stream representative of a dynamic volumetric model of the surfaces of the objects at the real-world event in real time based on the captured two-dimensional video data and captured depth data. The dynamic volumetric model of the surfaces of the objects at the real-world event is configured to be used to generate virtual reality media content representative of the real-world event as experienced from a dynamically selectable viewpoint corresponding to an arbitrary location at the real-world event and selected by a user experiencing the real-world event using a media player device.
    Type: Application
    Filed: April 28, 2016
    Publication date: November 2, 2017
    Inventors: Mohammad Raheel Khalid, Ali Jaafar, Denny Breitenfeld, Xavier Hansen, Christian Egeler, Syed Kamal, Lama Hewage Ravi Prathapa Chandrasiri, Steven L. Smith
  • Patent number: D812144
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: March 6, 2018
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Lama Hewage Ravi Prathapa Chandrasiri, Syed Meeran Kamal, Jonathan A. Globerson, Mohammad Raheel Khalid