Patents by Inventor Ajit Ninan
Ajit Ninan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210329316
Abstract: First foveated images are streamed to a streaming client. The first foveated images with first image metadata sets are used to generate first display mapped images for rendering to a viewer at first time points. View direction data is collected and used to determine a second view direction of the viewer at a second time point. A second foveated image and a second image metadata set are generated from a second HDR source image in reference to the second view direction of the viewer and used to generate a second display mapped image for rendering to the viewer at the second time point. The second image metadata set comprises display management metadata portions for adapting focal-vision and peripheral-vision image portions to corresponding image portions in the second display mapped image. The focal-vision display management metadata portion is generated with a predicted light adaptation level of the viewer for the second time point.
Type: Application
Filed: July 15, 2019
Publication date: October 21, 2021
Applicant: Dolby Laboratories Licensing Corporation
Inventor: Ajit NINAN
-
Publication number: 20210325684
Abstract: An eyewear device comprises a left lens assembly and a right lens assembly. The left lens assembly includes a left focus tunable lens and a left focus fixed lens. The right lens assembly includes a right focus tunable lens and a right focus fixed lens. The eyewear device may be used in 3D display applications, virtual reality applications, augmented reality applications, remote presence applications, etc. The eyewear device may also be used as vision correction glasses.
Type: Application
Filed: February 22, 2021
Publication date: October 21, 2021
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit NINAN, Chaitanya ATLURU, James Thomas TRIPLETT, Chun Chi WAN
-
Publication number: 20210319623
Abstract: Spatial information that describes spatial locations of visual objects in a three-dimensional (3D) image space as represented in one or more multi-view unlayered images is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
Type: Application
Filed: April 26, 2021
Publication date: October 14, 2021
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit NINAN, Neil MAMMEN, Tyrome Y. BROWN
-
Publication number: 20210314670
Abstract: Scenes in video images are identified based on image content of the video images. Regional cross sections of the video images are determined based on the scenes in the video images. Image portions of the video images in the regional cross sections are encoded into multiple video sub-streams at multiple different spatiotemporal resolutions. An overall video stream that includes the multiple video sub-streams is transmitted to a streaming client device.
Type: Application
Filed: September 18, 2017
Publication date: October 7, 2021
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Chaitanya ATLURU, Ajit NINAN
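As an illustrative sketch only (not the patented method), encoding regional cross sections at different spatiotemporal resolutions can be pictured as routing frame tiles into sub-streams: tiles overlapping a region of interest go to a full-resolution sub-stream, the rest to a downsampled one. The tile grid, resolutions, and the `rects_overlap` helper are all assumptions for illustration.

```python
def rects_overlap(a, b):
    """Axis-aligned rectangle overlap test; rect = (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def assign_tile_streams(tiles, roi, full_res=(3840, 2160), low_factor=4):
    """Route ROI-overlapping tiles to a high-resolution sub-stream,
    the remaining tiles to a downsampled sub-stream."""
    high = [t for t in tiles if rects_overlap(t, roi)]
    low = [t for t in tiles if not rects_overlap(t, roi)]
    low_res = (full_res[0] // low_factor, full_res[1] // low_factor)
    return {"high": (high, full_res), "low": (low, low_res)}
```

A real encoder would also vary frame rate per sub-stream (the "temporal" half of spatiotemporal), which this sketch omits.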
-
Publication number: 20210264631
Abstract: While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of a left eye of the viewer and a right vergence angle of a right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
Type: Application
Filed: March 3, 2021
Publication date: August 26, 2021
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit NINAN, Chun Chi WAN
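The vergence-to-depth relationship can be illustrated with simple ray geometry. In this hedged sketch (a textbook formulation, not taken from the patent), each vergence angle is measured inward from the eye's straight-ahead axis, and the interpupillary distance `ipd_m` is an assumed constant.

```python
import math

def virtual_object_depth(left_vergence_rad, right_vergence_rad, ipd_m=0.063):
    """Depth at which the two gaze rays intersect.

    Eyes separated by ipd_m, each rotated inward by its vergence angle,
    converge at a depth z satisfying ipd_m = z * (tan(aL) + tan(aR)).
    """
    denom = math.tan(left_vergence_rad) + math.tan(right_vergence_rad)
    if denom <= 0.0:
        return math.inf  # parallel or diverging gaze: no finite vergence point
    return ipd_m / denom
```

A virtual object plane, as in the abstract, would then be placed at the returned depth.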
-
Patent number: 11074875
Abstract: Techniques for driving a dual modulation display include generating backlight drive signals to drive individually-controllable illumination sources. The illumination sources emit first light onto a light conversion layer. The light conversion layer converts the first light into second light. The light conversion layer can include quantum dots or phosphor materials. Modulation drive signals are generated to determine transmission of the second light through individual subpixels of the display. These modulation drive signals can be adjusted based on one or more light field simulations. The light field simulations can include: (i) a color shift for a pixel based on a point spread function of the illumination sources; (ii) binning differences of individual illumination sources; (iii) temperature dependence of display component performance; or (iv) combinations thereof.
Type: Grant
Filed: April 30, 2020
Date of Patent: July 27, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Chun Chi Wan
-
Publication number: 20210195212
Abstract: Video images are rendered in viewports of content viewers. Each content viewer views video images through a respective viewport in the viewports. Spatial locations in the video images to which foveal visions of the content viewers are directed are determined. ROIs in the video images are identified based on the spatial locations in the video images.
Type: Application
Filed: March 10, 2021
Publication date: June 24, 2021
Inventors: Chaitanya ATLURU, Ajit NINAN
-
Patent number: 11036055
Abstract: A wearable device for augmented media content experiences can be formed with a mountable physical structure that has removably mountable positions and component devices that are removably mounted through the removably mountable positions. The component devices can be specifically selected based on a specific type of content consumption environment in which the wearable device is to operate. The mountable physical structure may be subject to a device washing process to which the component devices are not subject, after the wearable device including the mountable physical structure and the component devices is used by a viewer in a content consumption session in the specific type of content consumption environment, so long as the component devices are subsequently removed from the mountable physical structure.
Type: Grant
Filed: September 6, 2018
Date of Patent: June 15, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen
-
Patent number: 10991164
Abstract: Spatial information that describes spatial locations of visual objects in a three-dimensional (3D) image space as represented in one or more multi-view unlayered images is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
Type: Grant
Filed: April 10, 2018
Date of Patent: April 27, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen, Tyrome Y. Brown
-
Patent number: 10992936
Abstract: Techniques are provided to encode and decode image data comprising a tone mapped (TM) image with HDR reconstruction data in the form of luminance ratios and color residual values. In an example embodiment, luminance ratio values and residual values in color channels of a color space are generated on an individual pixel basis based on a high dynamic range (HDR) image and a derivative tone-mapped (TM) image that comprises one or more color alterations that would not be recoverable from the TM image with a luminance ratio image. The TM image with HDR reconstruction data derived from the luminance ratio values and the color-channel residual values may be outputted in an image file to a downstream device, for example, for decoding, rendering, and/or storing. The image file may be decoded to generate a restored HDR image free of the color alterations.
Type: Grant
Filed: November 11, 2019
Date of Patent: April 27, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Wenhui Jia, Ajit Ninan, Arkady Ten, Gregory John Ward
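A minimal sketch of this encode/decode cycle, assuming Rec. 709 luminance weights and NumPy arrays of linear RGB (both assumptions for illustration, not details from the patent):

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])  # assumed luminance weights

def encode(hdr_rgb, tm_rgb, eps=1e-6):
    """Per-pixel luminance ratio plus color-channel residual values."""
    ratio = (hdr_rgb @ REC709 + eps) / (tm_rgb @ REC709 + eps)
    residual = hdr_rgb - tm_rgb * ratio[..., None]
    return ratio, residual

def decode(tm_rgb, ratio, residual):
    """Restore the HDR image from the TM image plus reconstruction data."""
    return tm_rgb * ratio[..., None] + residual
```

The residual term is what lets color alterations in the tone-mapped image, which a luminance ratio image alone cannot undo, survive the round trip.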
-
Patent number: 10979721
Abstract: Video images are rendered in viewports of content viewers. Each content viewer views video images through a respective viewport in the viewports. Spatial locations in the video images to which foveal visions of the content viewers are directed are determined. ROIs in the video images are identified based on the spatial locations in the video images.
Type: Grant
Filed: November 17, 2017
Date of Patent: April 13, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Chaitanya Atluru, Ajit Ninan
-
Publication number: 20210099693
Abstract: Based on viewing tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to a streaming client device before a first time point and rendered with the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device to be rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode remaining non-target view portions in the second video image.
Type: Application
Filed: December 10, 2020
Publication date: April 1, 2021
Applicant: Dolby Laboratories Licensing Corporation
Inventor: Ajit Ninan
-
Patent number: 10943359
Abstract: While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of a left eye of the viewer and a right vergence angle of a right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
Type: Grant
Filed: August 3, 2017
Date of Patent: March 9, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Chun Chi Wan
-
Patent number: 10928638
Abstract: An eyewear device comprises a left lens assembly and a right lens assembly. The left lens assembly includes a left focus tunable lens and a left focus fixed lens. The right lens assembly includes a right focus tunable lens and a right focus fixed lens. The eyewear device may be used in 3D display applications, virtual reality applications, augmented reality applications, remote presence applications, etc. The eyewear device may also be used as vision correction glasses.
Type: Grant
Filed: October 30, 2017
Date of Patent: February 23, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Chaitanya Atluru, James Thomas Triplett, Chun Chi Wan
-
Patent number: 10893261
Abstract: Based on viewing tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to a streaming client device before a first time point and rendered with the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device to be rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode remaining non-target view portions in the second video image.
Type: Grant
Filed: December 4, 2018
Date of Patent: January 12, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventor: Ajit Ninan
-
Publication number: 20200388077
Abstract: Spatial information that describes spatial locations of visual objects in a three-dimensional (3D) image space as represented in one or more multi-view unlayered images is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
Type: Application
Filed: April 10, 2018
Publication date: December 10, 2020
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit NINAN, Neil MAMMEN, Tyrome Y. BROWN
-
Publication number: 20200372605
Abstract: Peripheral-vision expanded images are streamed to a video streaming client. The peripheral-vision expanded images are generated from source images in reference to view directions of a viewer at respective time points. View direction data is collected and received in real time while the viewer is viewing display images derived from the peripheral-vision expanded images. A second peripheral-vision expanded image is generated from a second source image in reference to a second view direction of the viewer at a second time point. The second peripheral-vision expanded image has a focal-vision image portion covering the second view direction of the viewer and a peripheral-vision image portion outside the focal-vision image portion. The second peripheral-vision expanded image is transmitted to the video streaming client.
Type: Application
Filed: August 10, 2020
Publication date: November 26, 2020
Inventors: Alexandre Chapiro, Chaitanya Atluru, Chun Chi Wan, Haricharan Lakshman, William Rozzi, Shane Ruggieri, Ajit Ninan
-
Publication number: 20200320734
Abstract: At a first time point, a first light capturing device at a first spatial location in a three-dimensional (3D) space captures first light rays from light sources located at designated spatial locations on a viewer device in the 3D space. At the first time point, a second light capturing device at a second spatial location in the 3D space captures second light rays from the light sources located at the designated spatial locations on the viewer device in the 3D space. Based on the first light rays captured by the first light capturing device and the second light rays captured by the second light capturing device, at least one of a spatial position and a spatial direction, at the first time point, of the viewer device is determined.
Type: Application
Filed: June 19, 2020
Publication date: October 8, 2020
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit NINAN, Neil MAMMEN
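One common way to determine a position from two such captures is ray triangulation: each light capturing device yields a viewing ray toward a light source, and the least-squares closest point between the two rays estimates the source's 3D position. The sketch below assumes calibrated ray origins and directions are already available; the abstract does not commit to this particular formulation.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Least-squares closest point between rays o1 + t1*d1 and o2 + t2*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a12 = d1 @ d2
    denom = 1.0 - a12 * a12  # unit directions, so d1.d1 = d2.d2 = 1
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    # Normal equations of min |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2
    t1 = ((b @ d1) - a12 * (b @ d2)) / denom
    t2 = (a12 * (b @ d1) - (b @ d2)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

With several light sources at known positions on the viewer device, the triangulated points would additionally constrain the device's spatial direction, not just its position.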
-
Publication number: 20200288114
Abstract: A device and method for video rendering. The device includes a memory and an electronic processor. The electronic processor is configured to receive, from a source device, video data including multiple reference viewpoints, determine a target image plane corresponding to a target viewpoint, determine, within the target image plane, one or more target image regions, and determine, for each target image region, a proxy image region larger than the corresponding target image region. The electronic processor is configured to determine, for each target image region, a plurality of reference pixels that fit within the corresponding proxy image region, project, for each target image region, the plurality of reference pixels that fit within the corresponding proxy image region to the target image region, producing a rendered target region from each target image region, and composite one or more of the rendered target regions to create a video rendering.
Type: Application
Filed: March 4, 2020
Publication date: September 10, 2020
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Haricharan LAKSHMAN, Wenhui JIA, Jasper CHAO, Shwetha RAM, Domagoj BARICEVIC, Ajit NINAN
-
Publication number: 20200286293
Abstract: Bordering pixels delineating a texture hole region are identified in a target image. Depth values of the bordering pixels are automatically clustered into two depth value clusters. A specific estimation direction is selected from multiple candidate estimation directions for a texture hole pixel in the texture hole region. A depth value of the texture hole pixel is estimated by interpolating depth values of two bordering background pixels in the specific estimation direction. The estimated depth value is used to warp the texture hole pixel into a reference view represented by a temporal reference image. A pixel value of the texture hole pixel is predicted based on a reference pixel value of a reference pixel from the reference image to which the texture hole pixel is warped using the estimated depth value.
Type: Application
Filed: March 4, 2020
Publication date: September 10, 2020
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Wenhui JIA, Haricharan LAKSHMAN, Ajit NINAN
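The interpolation step can be illustrated with a small sketch. The abstract specifies interpolating the depths of two bordering background pixels along the chosen estimation direction; the pixel coordinates and the inverse-distance weighting below are assumptions for illustration, with the two border pixels assumed to straddle the hole pixel.

```python
import math

def interpolate_hole_depth(p, q1, depth1, q2, depth2):
    """Linearly interpolate a depth for texture-hole pixel p from two
    bordering background pixels q1 and q2 that straddle p along one
    estimation direction."""
    dist1 = math.dist(p, q1)
    dist2 = math.dist(p, q2)
    w1 = dist2 / (dist1 + dist2)  # the nearer border pixel gets more weight
    return w1 * depth1 + (1.0 - w1) * depth2
```

The estimated depth would then drive the warp of the hole pixel into the temporal reference view, where a reference pixel value is fetched to predict its texture.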