Patents by Inventor Ajit Ninan
Ajit Ninan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11962819
Abstract: First foveated images are streamed to a streaming client. The first foveated images with first image metadata sets are used to generate first display mapped images for rendering to a viewer at first time points. View direction data is collected and used to determine a second view direction of the viewer at a second time point. A second foveated image and a second image metadata set are generated from a second HDR source image in reference to the second view direction of the viewer and used to generate a second display mapped image for rendering to the viewer at the second time point. The second image metadata set comprises display management metadata portions for adapting focal-vision and peripheral-vision image portions to corresponding image portions in the second display mapped image. The focal-vision display management metadata portion is generated with a predicted light adaptation level of the viewer for the second time point.
Type: Grant
Filed: April 6, 2022
Date of Patent: April 16, 2024
Assignee: Dolby Laboratories Licensing Corporation
Inventor: Ajit Ninan
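The abstract above describes splitting each frame into a focal-vision portion around the tracked view direction and a peripheral-vision remainder, each with its own display management metadata. A minimal sketch of that idea follows; the function names, frame dimensions, focal radius, and the halved peripheral adaptation level are all illustrative assumptions, not the patented method:

```python
def focal_region(view_xy, frame_w, frame_h, radius):
    """Bounding box of the focal-vision portion around the view
    direction, clamped to the frame; everything outside the box is
    treated as the peripheral-vision portion."""
    x, y = view_xy
    return (max(0, x - radius), max(0, y - radius),
            min(frame_w, x + radius), min(frame_h, y + radius))

def metadata_set(view_xy, adaptation_nits):
    """Bundle hypothetical per-portion display management metadata;
    the predicted light adaptation level steers the focal portion,
    and the peripheral portion is given a reduced level here purely
    for illustration."""
    return {
        "focal": {"bbox": focal_region(view_xy, 1920, 1080, 200),
                  "adaptation_nits": adaptation_nits},
        "peripheral": {"adaptation_nits": adaptation_nits * 0.5},
    }
```

In a real pipeline the metadata set would travel alongside the foveated image so the client's display mapping can adapt each portion separately.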
-
Publication number: 20240098446
Abstract: Images are acquired through image sensors operating in conjunction with a media consumption system. The acquired images are used to determine a user's movement in a plurality of degrees of freedom. Sound images depicted in spatial audio rendered by audio speakers operating in conjunction with the media consumption system are adapted based at least in part on the user's movement in the plurality of degrees of freedom.
Type: Application
Filed: November 27, 2023
Publication date: March 21, 2024
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, William Anthony Rozzi
-
Patent number: 11893700
Abstract: Spatial information that describes spatial locations of visual objects in a three-dimensional (3D) image space as represented in one or more multi-view unlayered images is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
Type: Grant
Filed: April 28, 2022
Date of Patent: February 6, 2024
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen, Tyrome Y. Brown
-
Patent number: 11882267
Abstract: A spatial direction of a wearable device that represents an actual viewing direction of the wearable device is determined. The spatial direction of the wearable device is used to select, from a multi-view image comprising single-view images, a set of single-view images. A display image is caused to be rendered on a device display of the wearable device. The display image represents a single-view image as viewed from the actual viewing direction of the wearable device. The display image is constructed based on the spatial direction of the wearable device and the set of single-view images.
Type: Grant
Filed: April 10, 2018
Date of Patent: January 23, 2024
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen
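The core step above, selecting a set of single-view images from a multi-view image based on the device's spatial direction, can be sketched as a nearest-angle search. This is a simplified illustration assuming each single-view image has a known capture yaw; the function name and the two-view default are hypothetical:

```python
import math

def select_views(device_yaw_deg, view_yaws_deg, k=2):
    """Pick the k single-view images whose capture yaws are
    angularly closest to the wearable device's viewing direction,
    handling wrap-around at 360 degrees."""
    def ang_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    ranked = sorted(view_yaws_deg, key=lambda v: ang_dist(v, device_yaw_deg))
    return ranked[:k]
```

The selected views could then be blended or warped to construct the display image for the actual viewing direction.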
-
Patent number: 11849104
Abstract: A device and method for video rendering. The device includes a memory and an electronic processor. The electronic processor is configured to receive, from a source device, video data including multiple reference viewpoints, determine a target image plane corresponding to a target viewpoint, determine, within the target image plane, one or more target image regions, and determine, for each target image region, a proxy image region larger than the corresponding target image region. The electronic processor is configured to determine, for each target image region, a plurality of reference pixels that fit within the corresponding proxy image region, project, for each target image region, the plurality of reference pixels that fit within the corresponding proxy image region to the target image region, producing a rendered target region from each target image region, and composite one or more of the rendered target regions to create a video rendering.
Type: Grant
Filed: June 27, 2022
Date of Patent: December 19, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Haricharan Lakshman, Wenhui Jia, Jasper Chao, Shwetha Ram, Domagoj Baricevic, Ajit Ninan
-
Publication number: 20230305310
Abstract: A computing device comprises a device image display outputting device display light; an optical configuration for a viewer of the computing device to view external display images rendered with external display light from an external image display and device display images rendered with the device display light; and a display light combiner to combine the external display light and the device display light to reach the viewer's vision field. The external display light and the device display light are of different light properties. The display light combiner selectively reflects the device display light toward the viewer's vision field and selectively transmits the external display light toward the viewer's vision field.
Type: Application
Filed: August 4, 2021
Publication date: September 28, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Titus Marc Devine, Chun Chi Wan
-
Publication number: 20230300346
Abstract: A non-random-access video stream is received. A first image block is encoded after second image blocks according to a non-random-access processing order. View direction data is received to indicate a viewer's view direction coinciding with a location covered by the first image block. The first image block is encoded into a random-access video stream before the second image blocks in a random-access processing order. The random-access video stream is delivered to a recipient decoding device operated by the viewer to cause the first image block to be processed and rendered before the second image blocks according to the random-access processing order.
Type: Application
Filed: August 2, 2021
Publication date: September 21, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Chaitanya Atluru, Ajit Ninan
-
Publication number: 20230300426
Abstract: A multi-view image stream encoded with primary and secondary image streams is accessed. Each primary image stream comprises groups of pictures (GOPs). Each secondary image stream comprises I-frames generated from a corresponding primary image stream. Viewpoint data collected in real time is received from a recipient decoding device to indicate that the viewer's viewpoint has changed from a specific time point. A camera is selected based on the viewer's changed viewpoint. It is determined whether the specific time point corresponds to a non-I-frame in a GOP of a primary image stream of the selected camera. If so, an I-frame from a secondary image stream corresponding to the primary image stream is transmitted to the recipient decoding device.
Type: Application
Filed: August 3, 2021
Publication date: September 21, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Chaitanya Atluru, Ajit Ninan
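The fallback decision described above, serving a secondary-stream I-frame whenever a viewpoint switch lands mid-GOP, reduces to a modular check on the frame index. A minimal sketch under the assumption of fixed-size GOPs whose first frame is the I-frame; the function name and return convention are illustrative:

```python
def frame_to_send(switch_frame_idx, gop_size):
    """Decide which stream to serve when the viewer switches to a new
    camera at switch_frame_idx. If that index lands on a non-I-frame
    of the new camera's primary-stream GOP, fall back to the
    secondary stream, which carries an I-frame at every position."""
    if switch_frame_idx % gop_size == 0:
        return ("primary", switch_frame_idx)   # GOP start: already an I-frame
    return ("secondary", switch_frame_idx)
```

Once the next GOP boundary of the primary stream arrives, the server can switch back to serving the more compact primary stream.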
-
Patent number: 11762209
Abstract: A wearable device for augmented media content experiences can be formed with a mountable physical structure that has removably mountable positions and component devices that are removably mounted through the removably mountable positions. The component devices can be specifically selected based on a specific type of content consumption environment in which the wearable device is to operate. The mountable physical structure may be subject to a device washing process to which the component devices are not subject, after the wearable device including the mountable physical structure and the component devices is used by a viewer in a content consumption session in the specific type of content consumption environment, so long as the component devices are first removed from the mountable physical structure.
Type: Grant
Filed: August 8, 2022
Date of Patent: September 19, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen
-
Publication number: 20230281860
Abstract: At a first time point, a first light capturing device at a first spatial location in a three-dimensional (3D) space captures first light rays from light sources located at designated spatial locations on a viewer device in the 3D space. At the first time point, a second light capturing device at a second spatial location in the 3D space captures second light rays from the light sources located at the designated spatial locations on the viewer device in the 3D space. Based on the first light rays captured by the first light capturing device and the second light rays captured by the second light capturing device, at least one of a spatial position and a spatial direction, at the first time point, of the viewer device is determined.
Type: Application
Filed: May 9, 2023
Publication date: September 7, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen
-
Publication number: 20230283976
Abstract: Images of an actual rendering environment are acquired through image sensors operating in conjunction with a media consumption system. The acquired images of the actual rendering environment are used to predict audio characteristics of objects present in the actual rendering environment. Spatial audio rendered, to a user in the actual rendering environment, by audio speakers operating in conjunction with the media consumption system is adjusted or modified based at least in part on the audio characteristics of the objects present in the actual rendering environment.
Type: Application
Filed: January 30, 2023
Publication date: September 7, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, William Anthony Rozzi
-
Publication number: 20230254660
Abstract: Images of a user's head are acquired at a plurality of different orientational angles through image sensors operating in conjunction with a media consumption system. The acquired images of the user's head are used to select or predict a specific personalized head-related transfer function (HRTF) for the user. Spatial audio rendered by audio speakers operating in conjunction with the media consumption system is adjusted or modified based at least in part on the specific personalized HRTF selected for the user.
Type: Application
Filed: February 1, 2023
Publication date: August 10, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, William Anthony Rozzi
-
Patent number: 11706403
Abstract: Based on viewing tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to a streaming client device before a first time point and rendered with the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device to be rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode remaining non-target view portions in the second video image.
Type: Grant
Filed: December 10, 2020
Date of Patent: July 18, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventor: Ajit Ninan
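The encoding policy above, spending bits on the view portion the viewer is looking at, can be sketched as mapping the view direction to a tile and assigning per-tile resolution scales. The tile grid, scale factors, and function names below are illustrative assumptions, not the claimed encoding:

```python
def target_tile(view_xy, tile_w, tile_h):
    """Map the viewer's view direction (expressed as frame
    coordinates) to the tile index that becomes the target
    view portion."""
    return (view_xy[0] // tile_w, view_xy[1] // tile_h)

def allocate_resolution(n_cols, n_rows, target, hi=1.0, lo=0.25):
    """Assign a spatiotemporal resolution scale to every tile:
    full resolution for the target tile, reduced elsewhere."""
    return {(c, r): (hi if (c, r) == target else lo)
            for c in range(n_cols) for r in range(n_rows)}
```

Because the target is chosen from tracking data gathered while the first image renders, the second image can be encoded just in time for the viewer's predicted gaze.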
-
Publication number: 20230215129
Abstract: Saliency regions are identified in a global scene depicted by volumetric video. Saliency video streams that track the saliency regions are generated. Each saliency video stream tracks a respective saliency region. A saliency-stream-based representation of the volumetric video is generated to include the saliency video streams. The saliency-stream-based representation of the volumetric video is transmitted to a video streaming client.
Type: Application
Filed: June 16, 2021
Publication date: July 6, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Shwetha Ram, Gregory John Ward, Domagoj Baricevic, Vijay Kamarshi
-
Patent number: 11694353
Abstract: While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of a left eye of the viewer and a right vergence angle of a right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
Type: Grant
Filed: March 3, 2021
Date of Patent: July 4, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Chun Chi Wan
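The depth determination described above follows from simple gaze geometry: rays from two eyes a fixed interpupillary distance apart converge at a depth fixed by their inward vergence angles. A hedged sketch of that geometry, with symmetric angles measured from straight-ahead gaze as a simplifying assumption:

```python
import math

def virtual_object_depth(ipd_mm, left_deg, right_deg):
    """Estimate the depth (in mm) at which the two gaze rays
    converge. For eyes ipd_mm apart, the rays meet where
    tan(left) + tan(right) == ipd_mm / depth."""
    denom = math.tan(math.radians(left_deg)) + math.tan(math.radians(right_deg))
    if denom <= 0:
        return float("inf")   # parallel or divergent gaze: object at infinity
    return ipd_mm / denom
```

With a 64 mm interpupillary distance and 3 degrees of vergence per eye, the rays converge at roughly 0.6 m, which is the depth at which the virtual object plane would be placed.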
-
Patent number: 11676297
Abstract: While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of a left eye of the viewer and a right vergence angle of a right eye of the viewer are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image for the viewer is rendered on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
Type: Grant
Filed: March 3, 2021
Date of Patent: June 13, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Chun Chi Wan
-
Patent number: 11670039
Abstract: Bordering pixels delineating a texture hole region are identified in a target image. Depth values of the bordering pixels are automatically clustered into two depth value clusters. A specific estimation direction is selected from multiple candidate estimation directions for a texture hole pixel in a texture hole region. A depth value of the texture hole pixel is estimated by interpolating depth values of two bordering background pixels in the specific estimation direction. The estimated depth value is used to warp the texture hole pixel into a reference view represented by a temporal reference image. A pixel value of the texture hole pixel is predicted based on a reference pixel value of a reference pixel from the reference image to which the texture hole pixel is warped using the estimated depth value.
Type: Grant
Filed: March 4, 2020
Date of Patent: June 6, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Wenhui Jia, Haricharan Lakshman, Ajit Ninan
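The interpolation step above, estimating a hole pixel's depth from two bordering background pixels along the chosen estimation direction, is essentially inverse-distance-weighted linear interpolation. A minimal sketch; the function name and weighting scheme are assumptions for illustration:

```python
def interpolate_depth(d0, d1, dist0, dist1):
    """Linearly interpolate the depth of a texture-hole pixel from
    two bordering background pixels whose depths are d0 and d1 and
    whose distances to the hole pixel along the estimation direction
    are dist0 and dist1. The nearer border gets the larger weight."""
    w0 = dist1 / (dist0 + dist1)
    w1 = dist0 / (dist0 + dist1)
    return w0 * d0 + w1 * d1
```

Restricting the interpolation to the background depth cluster avoids smearing foreground depths across the disocclusion hole, which is why the bordering depths are clustered into two groups first.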
-
Patent number: 11669991
Abstract: At a first time point, a first light capturing device at a first spatial location in a three-dimensional (3D) space captures first light rays from light sources located at designated spatial locations on a viewer device in the 3D space. At the first time point, a second light capturing device at a second spatial location in the 3D space captures second light rays from the light sources located at the designated spatial locations on the viewer device in the 3D space. Based on the first light rays captured by the first light capturing device and the second light rays captured by the second light capturing device, at least one of a spatial position and a spatial direction, at the first time point, of the viewer device is determined.
Type: Grant
Filed: June 19, 2020
Date of Patent: June 6, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen
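Determining a position from rays captured at two known locations is a triangulation problem: each capture device reports a bearing toward a light source, and the viewer device sits where the two rays intersect. A 2D sketch under that simplified reading (the real system works in 3D); all names and conventions here are illustrative:

```python
import math

def intersect_bearings(p1, theta1_deg, p2, theta2_deg):
    """Triangulate a 2D position from two capture devices at known
    positions p1 and p2, each reporting the bearing (degrees,
    counterclockwise from +x) of a captured light ray toward a
    light source on the viewer device."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    d1 = (math.cos(t1), math.sin(t1))
    d2 = (math.cos(t2), math.sin(t2))
    # Solve p1 + s*d1 == p2 + t*d2 for s via a 2x2 cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None                      # parallel rays: no unique fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    s = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])
```

Triangulating several light sources at designated positions on the device yields not just its position but also its orientation, matching the abstract's "spatial position and spatial direction."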
-
Patent number: 11653065
Abstract: Scenes in video images are identified based on image content of the video images. Regional cross sections of the video images are determined based on the scenes in the video images. Image portions of the video images in the regional cross sections are encoded into multiple video sub-streams at multiple different spatiotemporal resolutions. An overall video stream that includes the multiple video sub-streams is transmitted to a streaming client device.
Type: Grant
Filed: March 15, 2022
Date of Patent: May 16, 2023
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Chaitanya Atluru, Ajit Ninan
-
Publication number: 20230034477
Abstract: A wearable device for augmented media content experiences can be formed with a mountable physical structure that has removably mountable positions and component devices that are removably mounted through the removably mountable positions. The component devices can be specifically selected based on a specific type of content consumption environment in which the wearable device is to operate. The mountable physical structure may be subject to a device washing process to which the component devices are not subject, after the wearable device including the mountable physical structure and the component devices is used by a viewer in a content consumption session in the specific type of content consumption environment, so long as the component devices are first removed from the mountable physical structure.
Type: Application
Filed: August 8, 2022
Publication date: February 2, 2023
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ajit Ninan, Neil Mammen