Patents by Inventor Ivana Girdzijauskas
Ivana Girdzijauskas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9787980
Abstract: An auxiliary information map (10) is upsampled to form an upsampled auxiliary information map (20). Multiple reference pixels (23) in the upsampled auxiliary information map (20) are selected for a current pixel (21) in the upsampled auxiliary information map (20) based on texel values of texels in an associated texture (30). An updated pixel value is calculated for the current pixel (21) based on the pixel values of the selected reference pixels (23).
Type: Grant
Filed: June 29, 2012
Date of Patent: October 10, 2017
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Ivana Girdzijauskas, Per Fröjdh, Thomas Rusert
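The upsample-then-refine scheme in this abstract can be illustrated with a small sketch. This is not the claimed method: the abstract does not specify the upsampling filter, the candidate window, or the selection rule, so a nearest-neighbour upsampler, a 3x3 window, and a "k most similar texels" rule are assumed here.

```python
def upsample_nearest(m, factor):
    """Nearest-neighbour upsampling of a 2D map given as a list of lists."""
    return [[m[y // factor][x // factor]
             for x in range(len(m[0]) * factor)]
            for y in range(len(m) * factor)]

def refine_with_texture(up, texture, k=3):
    """Update each pixel from the k neighbours whose texel values are
    closest to the current pixel's texel value in the associated texture."""
    h, w = len(up), len(up[0])
    out = [row[:] for row in up]
    for y in range(h):
        for x in range(w):
            # candidate reference pixels: the 3x3 window around (x, y)
            cand = [(abs(texture[ny][nx] - texture[y][x]), up[ny][nx])
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            cand.sort(key=lambda c: c[0])
            chosen = [v for _, v in cand[:k]]
            out[y][x] = sum(chosen) / len(chosen)
    return out
```

For example, a 2x2 depth map upsampled to 4x4 and then refined against a 4x4 texture; the texture-similarity selection is what keeps refined depth edges aligned with texture edges.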
-
Patent number: 9729847
Abstract: The disclosed embodiments relate to determining transmit formats and receive formats for a 3D video communication service. Sets of available 3D video communication transmit formats are received, each set associated with a client device. Sets of available 3D video communication receive formats are received, each set associated with one of the client devices. For at least one of the client devices, one format is determined for transmission of 3D video communication from that client device to the other client devices. The determined format is a member of both the set of available transmit formats associated with that client device and the sets of available receive formats associated with the other client devices.
Type: Grant
Filed: August 8, 2012
Date of Patent: August 8, 2017
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Beatriz Grafulla-González, Ivana Girdzijauskas
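The format determination described here is, at its core, a set intersection: the sender's transmit formats against every receiver's receive formats. A minimal sketch (the function name, data shapes, and tie-breaking rule are illustrative assumptions):

```python
def choose_format(tx_formats, rx_formats_by_client):
    """Pick a transmit format for one sender that every receiver supports:
    intersect the sender's transmit set with all receivers' receive sets.
    Returns one member of the intersection (smallest, for determinism),
    or None if no common format exists."""
    candidates = set(tx_formats)
    for rx_formats in rx_formats_by_client.values():
        candidates &= set(rx_formats)
    return min(candidates) if candidates else None
```

In practice a real service would rank the candidates (e.g. by quality or bitrate) rather than pick the lexicographically smallest.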
-
Patent number: 9525858
Abstract: Method and arrangement for increasing the resolution of a depth or disparity map related to multi-view video. The method comprises deriving a high resolution depth map based on a low resolution depth map and a masked texture image edge map. The masked texture image edge map comprises information on edges in a high resolution texture image, which edges have a correspondence in the low resolution depth map. The texture image and the depth map are associated with the same frame.
Type: Grant
Filed: May 21, 2012
Date of Patent: December 20, 2016
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Sebastian Schwarz, Ivana Girdzijauskas, Roger Olsson, Mårten Sjöström
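The masked texture image edge map can be sketched as a two-step filter: detect edges in the high resolution texture, then keep only those with a correspondence in the low resolution depth map. The simple horizontal-difference detector and the threshold below are assumptions, not the patented method.

```python
def edge_map(img, thr):
    """1 where the difference to the right neighbour exceeds thr, else 0."""
    h, w = len(img), len(img[0])
    return [[1 if x + 1 < w and abs(img[y][x + 1] - img[y][x]) > thr else 0
             for x in range(w)] for y in range(h)]

def masked_texture_edges(tex_edges, low_depth_edges, factor):
    """Keep a high-res texture edge only if the corresponding position in
    the low-res depth edge map (downscaled by `factor`) is also an edge."""
    return [[tex_edges[y][x] and low_depth_edges[y // factor][x // factor]
             for x in range(len(tex_edges[0]))]
            for y in range(len(tex_edges))]
```

The masking step is what suppresses pure texture detail (patterns with no depth discontinuity) so that only genuine depth edges guide the upsampling.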
-
Patent number: 9451233
Abstract: The embodiments of the present invention relate to a method and a processor for representing a 3D scene. In the method, one 3D component of the 3D scene to be represented, captured from at least three different views (v1, v2, v3), is projected to a predefined view (vF). A value associated with each projected view regarding the 3D component is then determined, and consistency among the projected views regarding the 3D component is detected. Moreover, a consistency value regarding the 3D component is determined based on the determined values associated with the respective projected views, and the determined values are replaced by the determined consistency value on at least one of the three projected 3D components.
Type: Grant
Filed: November 24, 2010
Date of Patent: September 20, 2016
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Ivana Girdzijauskas, Markus Flierl, Apostolos Georgakis, Pravin Kumar Rana, Thomas Rusert
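The detect-then-replace step can be sketched for a single 3D component: compare the values it receives from the three projected views and, when they agree, substitute one shared consistency value. The tolerance test and the choice of the median as the consistency value are illustrative assumptions.

```python
def fuse_projections(values, tol=1.0):
    """values: the per-view values a 3D component receives when views
    v1, v2, v3 are projected to vF. If they agree within tol they are
    consistent: replace them all with a consistency value (the median
    here). Otherwise leave them untouched."""
    if max(values) - min(values) <= tol:
        median = sorted(values)[len(values) // 2]
        return [median] * len(values)
    return values
```

Inconsistent values (e.g. from occlusions or depth estimation errors in one view) are deliberately left alone so they can be handled separately.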
-
Patent number: 9235920
Abstract: The present invention relates to three-dimensional (3D) scene representations, and in particular to a method and a processor for providing improved 3D scene representations. An objective of the embodiments is to improve the determination of consistency among a plurality of projections at a virtual view denoted vF. When determining the consistency, entries of a distance matrix, indicative of distance differences of 3D components between different views for a corresponding segment k when projected to the predefined view (vF), are compared with entries of a threshold matrix. The objective is achieved by assigning each segment k of a 3D component to a cluster based on individual rules for each cluster and one threshold matrix, and by determining one threshold matrix for each cluster based on the segments of that cluster.
Type: Grant
Filed: March 7, 2012
Date of Patent: January 12, 2016
Assignee: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Ivana Girdzijauskas, Markus H. Flierl, Pravin Kumar Rana
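The interplay between cluster assignment and per-cluster thresholds can be sketched as an iterative loop over scalar distance entries: assign a segment to the "consistent" cluster when its distance falls under the cluster's threshold, recompute the threshold from that cluster's own members, and repeat. The mean-based rule and the 1.5 margin are purely illustrative assumptions.

```python
def assign_clusters(distances, n_iter=5):
    """distances: one distance-matrix entry per segment k. Returns a
    per-segment consistency flag after iterating the assign/recompute
    loop (a toy stand-in for per-cluster threshold matrices)."""
    thr = sum(distances) / len(distances)          # initial global threshold
    for _ in range(n_iter):
        members = [d for d in distances if d <= thr]
        if not members:
            break
        thr = 1.5 * sum(members) / len(members)    # per-cluster threshold rule
    return [d <= thr for d in distances]
```

Recomputing the threshold from the cluster's own members is the key idea: one outlier segment no longer inflates the threshold used to judge the well-behaved ones.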
-
Publication number: 20150294473
Abstract: A method, an electronic device, a computer program and a computer program product relate to 3D image reconstruction. A depth image part (7) of a 3D image representation is acquired. The depth image part represents depth values of the 3D image. An area (9, 10) in the depth image part is determined; the area represents missing depth values in the depth image part. At least one first line (Pr) in a first neighbourhood (Nr) of the area is estimated: a first gradient of the depth values is determined in the first neighbourhood, and a direction of the at least one first line is determined in accordance with the first gradient. Depth values of the area are estimated based on the at least one first line, and the area is filled with the estimated depth values. The 3D image is thereby reconstructed.
Type: Application
Filed: November 12, 2012
Publication date: October 15, 2015
Inventors: Julien Michot, Ivana Girdzijauskas
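A one-dimensional analogue of the gradient-based filling makes the idea concrete: measure the depth slope just outside the hole and continue it across the missing samples. Reducing the 2D line estimation to a single scanline, and assuming at least two known samples before the hole, are simplifications for illustration only.

```python
def fill_hole_1d(row):
    """Fill None entries (missing depth values) along one scanline by
    extending the depth gradient measured just before the hole."""
    out = row[:]
    for i, v in enumerate(out):
        if v is None:
            gradient = out[i - 1] - out[i - 2]   # slope in the neighbourhood
            out[i] = out[i - 1] + gradient       # continue the line
    return out
```

Extrapolating along the local gradient preserves slanted surfaces, whereas naive constant-value filling would flatten them.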
-
Publication number: 20150271567
Abstract: There is provided a 3D video warning module comprising an input, a processor, and an output. The input is for receiving capture information from a 3D capture device, and display information from at least one 3D display device, wherein the 3D display device is for displaying 3D video captured by the 3D capture device. The processor is for analyzing the capture information and the display information, the processor being arranged to identify at least one issue, such as an incompatibility. The output is for sending a notification of the issue to at least one of the 3D capture device and the 3D display device.
Type: Application
Filed: October 29, 2012
Publication date: September 24, 2015
Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
Inventors: Julien Michot, Ivana Girdzijauskas, Thomas Rusert
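The analysis step amounts to comparing capture information against display information and emitting notifications. A minimal sketch; the dictionary keys and the two example checks (format support, resolution limit) are illustrative assumptions, not the module's actual interface.

```python
def check_compatibility(capture_info, display_info):
    """Compare capture information from a 3D capture device against
    display information from a 3D display device; return a list of
    issue notifications (empty if everything is compatible)."""
    issues = []
    if capture_info["format"] not in display_info["supported_formats"]:
        issues.append("incompatible 3D format: " + capture_info["format"])
    if capture_info["resolution"][0] > display_info["max_resolution"][0]:
        issues.append("capture width exceeds display capability")
    return issues
```

A real module would route each notification back to whichever device can act on it, per the output described in the abstract.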
-
Publication number: 20150163476
Abstract: The disclosed embodiments relate to determining transmit formats and receive formats for a 3D video communication service. Sets of available 3D video communication transmit formats are received, each set associated with a client device. Sets of available 3D video communication receive formats are received, each set associated with one of the client devices. For at least one of the client devices, one format is determined for transmission of 3D video communication from that client device to the other client devices. The determined format is a member of both the set of available transmit formats associated with that client device and the sets of available receive formats associated with the other client devices.
Type: Application
Filed: August 8, 2012
Publication date: June 11, 2015
Applicant: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Beatriz Grafulla-González, Ivana Girdzijauskas
-
Publication number: 20140375630
Abstract: The present invention relates to three-dimensional (3D) scene representations, and in particular to a method and a processor for providing improved 3D scene representations. An objective of the embodiments is to improve the determination of consistency among a plurality of projections at a virtual view denoted vF. When determining the consistency, entries of a distance matrix, indicative of distance differences of 3D components between different views for a corresponding segment k when projected to the predefined view (vF), are compared with entries of a threshold matrix. The objective is achieved by assigning each segment k of a 3D component to a cluster based on individual rules for each cluster and one threshold matrix, and by determining one threshold matrix for each cluster based on the segments of that cluster.
Type: Application
Filed: March 7, 2012
Publication date: December 25, 2014
Applicant: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Ivana Girdzijauskas, Markus H. Flierl, Pravin Kumar Rana
-
Publication number: 20140218490
Abstract: There is provided a video apparatus having a stereoscopic display associated therewith, the video apparatus arranged to: receive at least one image and at least one reference parameter associated with said image; calculate a baseline distance for synthesizing a view, the calculation based upon the received at least one reference parameter and at least one parameter of the stereoscopic display; synthesize at least one view using the baseline distance and the received at least one image; and send the received at least one image and the synthesized at least one image to the stereoscopic display for display.
Type: Application
Filed: November 11, 2011
Publication date: August 7, 2014
Applicant: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Andrey Norkin, Ivana Girdzijauskas
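One plausible reading of the baseline calculation, sketched below: a wider screen magnifies on-screen disparity, so the synthesis baseline is scaled inversely with screen width to keep perceived depth comfortable. Both functions, their parameters, and the disparity-preserving rule are assumptions for illustration; the abstract does not disclose the actual formula.

```python
def baseline_for_display(ref_baseline, ref_screen_width, screen_width):
    """Scale a reference baseline inversely with display width so the
    absolute on-screen disparity stays roughly constant (assumed rule)."""
    return ref_baseline * ref_screen_width / screen_width

def synthesize_shifted_row(row, depths, focal, baseline):
    """Toy 1D view synthesis: shift each pixel by disparity
    = focal * baseline / depth; unfilled target positions stay None."""
    out = [None] * len(row)
    for x, (value, z) in enumerate(zip(row, depths)):
        nx = x + round(focal * baseline / z)
        if 0 <= nx < len(out):
            out[nx] = value
    return out
```

The `None` positions left by the shift are the disocclusions that hole-filling techniques (such as the depth-based filling in publication 20150294473 above) then address.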
-
Publication number: 20140205023
Abstract: An auxiliary information map (10) is upsampled to form an upsampled auxiliary information map (20). Multiple reference pixels (23) in the upsampled auxiliary information map (20) are selected for a current pixel (21) in the upsampled auxiliary information map (20) based on texel values of texels in an associated texture (30). An updated pixel value is calculated for the current pixel (21) based on the pixel values of the selected reference pixels (23).
Type: Application
Filed: June 29, 2012
Publication date: July 24, 2014
Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
Inventors: Ivana Girdzijauskas, Per Fröjdh, Thomas Rusert
-
Patent number: 8780172
Abstract: Co-processing of a video frame (32) and its associated depth map (34) suitable for free viewpoint television involves detecting respective edges (70, 71, 80, 81) in the video frame (32) and the depth map (34). The edges (70, 71, 80, 81) are aligned and used to identify any pixels (90-92) in the depth map (34) or the video frame (32) having incorrect depth values or color values, based on the positions of the pixels in the depth map (34) or the video frame (32) relative to an edge (80) in the depth map (34) and a corresponding, aligned edge (70) in the video frame (32). The depth values or color values of the identified pixels (90-92) can then be corrected in order to improve the accuracy of the depth map (34) or video frame (32).
Type: Grant
Filed: May 7, 2009
Date of Patent: July 15, 2014
Assignee: Telefonaktiebolaget L M Ericsson (Publ)
Inventors: Ivana Girdzijauskas, Per Fröjdh, Clinton Priddle
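A one-dimensional sketch of the correction step: once an edge in the depth map and the corresponding edge in the video frame are aligned, pixels lying between the two edge positions get the value from the side the video-frame edge says they belong to. The 1D reduction and the edge-position convention are simplifying assumptions.

```python
def correct_depth_edge_1d(depth, depth_edge, tex_edge):
    """Move a 1D depth edge so it coincides with the aligned video-frame
    (texture) edge. Edge position = index of the first pixel after the
    transition; pixels between the two positions are corrected."""
    out = depth[:]
    if depth_edge < tex_edge:       # depth edge too early: extend left value
        for i in range(depth_edge, tex_edge):
            out[i] = depth[depth_edge - 1]
    else:                           # depth edge too late: extend right value
        for i in range(tex_edge, depth_edge):
            out[i] = depth[depth_edge]
    return out
```

Misaligned depth edges are the main cause of halo artifacts when such frames are used for free viewpoint rendering, which is why the correction targets exactly the pixels between the two edges.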
-
Publication number: 20140146139
Abstract: Method and arrangement for increasing the resolution of a depth or disparity map related to multi-view video. The method comprises deriving a high resolution depth map based on a low resolution depth map and a masked texture image edge map. The masked texture image edge map comprises information on edges in a high resolution texture image, which edges have a correspondence in the low resolution depth map. The texture image and the depth map are associated with the same frame.
Type: Application
Filed: May 21, 2012
Publication date: May 29, 2014
Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
Inventors: Sebastian Schwarz, Ivana Girdzijauskas, Roger Olsson, Mårten Sjöström
-
Publication number: 20130027523
Abstract: The embodiments of the present invention relate to a method and a processor for representing a 3D scene. In the method, one 3D component of the 3D scene to be represented, captured from at least three different views (v1, v2, v3), is projected to a predefined view (vF). A value associated with each projected view regarding the 3D component is then determined, and consistency among the projected views regarding the 3D component is detected. Moreover, a consistency value regarding the 3D component is determined based on the determined values associated with the respective projected views, and the determined values are replaced by the determined consistency value on at least one of the three projected 3D components.
Type: Application
Filed: November 24, 2010
Publication date: January 31, 2013
Applicant: Telefonaktiebolaget L M Ericsson (PUBL)
Inventors: Ivana Girdzijauskas, Markus Flierl, Apostolos Georgakis, Pravin Kumar Rana, Thomas Rusert
-
Publication number: 20110285813
Abstract: Co-processing of a video frame (32) and its associated depth map (34) suitable for free viewpoint television involves detecting respective edges (70, 71, 80, 81) in the video frame (32) and the depth map (34). The edges (70, 71, 80, 81) are aligned and used to identify any pixels (90-92) in the depth map (34) or the video frame (32) having incorrect depth values or color values, based on the positions of the pixels in the depth map (34) or the video frame (32) relative to an edge (80) in the depth map (34) and a corresponding, aligned edge (70) in the video frame (32). The depth values or color values of the identified pixels (90-92) can then be corrected in order to improve the accuracy of the depth map (34) or video frame (32).
Type: Application
Filed: May 7, 2009
Publication date: November 24, 2011
Applicant: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Ivana Girdzijauskas, Per Fröjdh, Clinton Priddle