Patent Applications Published on November 9, 2017
  • Publication number: 20170324927
    Abstract: The Controlled Environment Media And Communication System delivers communications services to residents of controlled facilities using a common network architecture. Some of the communications capabilities of the Controlled Environment Media And Communication System include media distribution, video visitation, intra-facility messaging, and other such communications services.
    Type: Application
    Filed: May 6, 2016
    Publication date: November 9, 2017
    Inventor: Stephen L. HODGE
  • Publication number: 20170324928
    Abstract: The Controlled Environment Media And Communication System delivers communications services to residents of controlled facilities using a common network architecture. Some of the communications capabilities of the Controlled Environment Media And Communication System include media distribution, video visitation, intra-facility messaging, and other such communications services.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 9, 2017
    Applicant: Global Tel*Link Corporation
    Inventor: Stephen L. HODGE
  • Publication number: 20170324929
    Abstract: The Controlled Environment Media And Communication System delivers communications services to residents of controlled facilities using a common network architecture. Some of the communications capabilities of the Controlled Environment Media And Communication System include media distribution, video visitation, intra-facility messaging, and other such communications services.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 9, 2017
    Applicant: Global Tel*Link Corporation
    Inventor: Stephen L. HODGE
  • Publication number: 20170324930
    Abstract: Disclosed herein is a method for permitting a real-time virtual medical examination using a patient device and at least one diagnostic device including receiving, at the patient device, a signal transmitted from the at least one diagnostic device; generating diagnostic information based on the received signal; encrypting the diagnostic information; establishing communication over a network between the patient device and a first remote server; establishing a video conferencing session via a second remote server; and transmitting the encrypted diagnostic information to the first remote server.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Inventor: Fawzi SHAYA
  • Publication number: 20170324931
    Abstract: Example embodiments disclosed herein relate to spatial congruency adjustment. A method for adjusting spatial congruency in a video conference is disclosed. The method includes detecting spatial congruency between a visual scene captured by a video endpoint device and an auditory scene captured by an audio endpoint device that is positioned in relation to the video endpoint device, the spatial congruency being a degree of alignment between the auditory scene and the visual scene, comparing the detected spatial congruency with a predefined threshold and in response to the detected spatial congruency being below the threshold, adjusting the spatial congruency. Corresponding system and computer program products are also disclosed.
    Type: Application
    Filed: November 17, 2015
    Publication date: November 9, 2017
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Xuejing SUN, Dong SHI, Shen HUANG, Kai LI, Hannes MUESCH, Glenn N. DICKINS, Gary SPITTLE
  • Publication number: 20170324932
    Abstract: A camera system for a video conference endpoint includes a fixed wide lens camera providing a view of a space, a first fixed camera providing a view of a first portion of the space, a second fixed camera providing a view of a second portion of the space, a third fixed camera providing a view of a third portion of the space, and a processor operatively coupled to each of the cameras. Each of the cameras is configured to produce a video signal and the processor is configured to receive the video signals and select a relevant video signal from the video signals. The processor is also configured to process the relevant video signal by digitally panning, tilting, and zooming of the relevant video signal to generate a video stream from the processed video signal.
    Type: Application
    Filed: July 25, 2017
    Publication date: November 9, 2017
    Inventors: Kristian Tangeland, Knut Helge Teppan, Andre Lyngra
  • Publication number: 20170324933
    Abstract: A video-enabled communication system includes a camera to acquire an image of a local participant during a video communication session and a control unit that selects a lighting configuration for the local participant to be captured by the camera for provision to a remote endpoint for display to another participant. The lighting configuration selection is based on information describing the local participant or the context of the video communication session. The processor conditions a change, from providing the remote endpoint a first image captured under a first lighting configuration selected at a first time to providing a second image captured under a different lighting configuration selected at a second time, upon the difference between the first and second times having at least a threshold magnitude.
    Type: Application
    Filed: May 6, 2016
    Publication date: November 9, 2017
    Applicant: Avaya Inc.
    Inventors: Amir Alrod, Tamar Barzuza
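    Illustrative sketch (not from the patent): a minimal Python sketch of the time-gated switch described in 20170324933 above, in which a newly selected lighting configuration is only adopted once the time since the previous selection reaches a threshold. The threshold value and the selection interface are assumptions.
    ```python
    # Hypothetical time-gated lighting-configuration switch (20170324933).
    # The threshold magnitude and selection interface are assumed.
    MIN_SWITCH_INTERVAL_S = 10.0  # assumed threshold magnitude in seconds

    class LightingController:
        def __init__(self):
            self.current_config = None
            self.last_switch_time = None

        def maybe_switch(self, proposed_config, now):
            """Adopt proposed_config only if enough time has passed since
            the configuration last changed."""
            if self.current_config is None or (
                    proposed_config != self.current_config
                    and now - self.last_switch_time >= MIN_SWITCH_INTERVAL_S):
                self.current_config, self.last_switch_time = proposed_config, now
            return self.current_config
    ```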
  • Publication number: 20170324934
    Abstract: Techniques for managing visual compositions for a multimedia conference call are described. An apparatus may comprise a processor to allocate a display object bit rate for multiple display objects where a total display object bit rate for all display objects is equal to or less than a total input bit rate, and decode video information from multiple video streams each having different video layers with different levels of spatial resolution, temporal resolution and quality for two or more display objects. Other embodiments are described and claimed.
    Type: Application
    Filed: March 21, 2017
    Publication date: November 9, 2017
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Warren V. Barkley, Philip A. Chou, Regis J. Crinon, Tim Moore
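    Illustrative sketch (not from the patent): a hedged Python example of the allocation constraint described in 20170324934 above, splitting a total input bit rate across display objects so that the summed allocation never exceeds the total. The proportional weighting is an assumption, not the claimed allocator.
    ```python
    # Hypothetical proportional bit-rate allocation for display objects
    # (20170324934): the sum of allocations stays <= the total input rate.
    def allocate_bit_rates(total_input_bps, weights):
        """Split total_input_bps across display objects in proportion to
        weights; flooring keeps the sum at or below total_input_bps."""
        weight_sum = float(sum(weights))
        return [int(total_input_bps * w / weight_sum) for w in weights]

    # Example: 4 Mbps shared by an active speaker and three thumbnails.
    print(allocate_bit_rates(4_000_000, [3, 1, 1, 1]))
    # -> [2000000, 666666, 666666, 666666], summing to less than 4 Mbps
    ```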
  • Publication number: 20170324935
    Abstract: A device may include videoconferencing circuitry. The videoconferencing circuitry may access video data of a user captured through a camera. The videoconferencing circuitry may further determine, for the captured video data, that a user portion of the user is not in a field of view of the camera, such as the hand of the user. The videoconferencing circuitry may augment the video data to include a virtual representation of the user portion of the user and transmit the video data augmented with the virtual representation of the user portion to a remote videoconferencing device.
    Type: Application
    Filed: November 26, 2014
    Publication date: November 9, 2017
    Applicant: Hewlett-Packard Development Company, L.P.
    Inventors: Robert C Brooks, Kent E Biggs, Chi So, Nam H Nguyen
  • Publication number: 20170324936
    Abstract: An Internet of Things (IoT) device for a city includes a light source; sensors including a camera and a microphone array; a processor coupled to the light source and the sensors; and a wireless transceiver coupled to the processor.
    Type: Application
    Filed: February 17, 2017
    Publication date: November 9, 2017
    Inventor: Bao Tran
  • Publication number: 20170324937
    Abstract: An optical bonding machine is provided, including a transparent datum located within the optical bonding machine, wherein the transparent datum supports a first substrate, a robotic placement head configured to pick up a second substrate and place the second substrate into contact with the first substrate, on the transparent datum, a camera disposed proximate the transparent datum, the camera capturing a video of a flow of an optically clear adhesive between the first substrate and the second substrate, and a curing source disposed proximate the transparent datum, the curing source emitting UV rays that pass through the transparent datum and the first substrate to cure an optically clear adhesive between a bonded substrate comprising the first substrate, the optically clear adhesive, and the second substrate. An associated method is also provided.
    Type: Application
    Filed: May 3, 2017
    Publication date: November 9, 2017
    Inventors: ANDREW JOHN NALLY, ALEXANDER M. GIORDANO, EDWARD F. CAREY, JONATHAN NEAL URQUHART
  • Publication number: 20170324938
    Abstract: The present invention is directed to a system and methods of monitoring a child seated in a child's car seat in the rear seat of a vehicle, employing a video camera which transmits a video signal to a video display receiver placed in the driver's frame of vision. The video camera as envisioned herein is placed within a child's stuffed toy, the camera signal being transmitted remotely to a separate video display monitor screen device viewable to the parent driving the vehicle. The camera can be placed in a number of positions in the vehicle compartment and aimed in different directions. To adjust the direction in which the camera is pointed, the invention includes a bendable, flexible and sturdy neck that interconnects the camera to a transmission unit, which provides the driver with a view of the child.
    Type: Application
    Filed: July 26, 2017
    Publication date: November 9, 2017
    Applicant: Baby-Tech Innovations, Inc.
    Inventor: Giuseppe Veneziano
  • Publication number: 20170324939
    Abstract: Disclosed are a self-adaptive adjustment method and device of a projector, and a computer storage medium.
    Type: Application
    Filed: March 16, 2015
    Publication date: November 9, 2017
    Inventor: Xia FAN
  • Publication number: 20170324940
    Abstract: An image of an object under a first illuminant is captured. The color of the ambient light at the device on which the image is to be displayed is identified. The image data is adjusted to compensate for the color of the ambient light as well as for the color of the first illuminant. An image based on the adjusted image data can then be displayed on the device. As such, the desired perception of the colors in the displayed image can be managed so that image quality is maintained even if the image is displayed under different ambient lighting conditions.
    Type: Application
    Filed: May 5, 2016
    Publication date: November 9, 2017
    Inventor: Santanu DUTTA
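    Illustrative sketch (not from the patent): a hedged von Kries-style per-channel scaling that compensates for the ambient light color as well as the capture illuminant, in the spirit of 20170324940 above. The white points, gain formula, and 8-bit clipping are assumptions.
    ```python
    # Hypothetical per-channel compensation for capture illuminant and
    # ambient light color (20170324940). White points and clipping assumed.
    def compensate(pixel_rgb, capture_white, ambient_white):
        """Scale each channel by the ratio of the ambient white point to
        the capture illuminant's white point."""
        return tuple(
            min(255, round(p * a / c))
            for p, c, a in zip(pixel_rgb, capture_white, ambient_white)
        )

    # Example: image shot under warm light, viewed under cooler ambient light.
    print(compensate((200, 180, 150),
                     capture_white=(255, 235, 200),
                     ambient_white=(235, 240, 255)))  # -> (184, 184, 191)
    ```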
  • Publication number: 20170324941
    Abstract: Stereoscopic imaging is performed using a mobile computing device having front- and rear-facing cameras and mounted on a rotational mechanism. The cameras and rotational mechanism are controlled, e.g., by a downloaded application, to capture images of a room or other environment using both the front- and rear-facing cameras at different pan angle and tilt angle combinations. First and second images of a portion of the room or other environment, captured using the front- and rear-facing cameras, are selected for inclusion in a stereo image pair. Obtained images and corresponding metadata are transferred to a remote system. A structure from motion pipeline is used to generate a three-dimensional model of the room or other environment. Data that enables the mobile computing device to display a three-dimensional model of the room or other environment is received from the remote system and used to display the three-dimensional model.
    Type: Application
    Filed: April 26, 2017
    Publication date: November 9, 2017
    Applicant: InsideMaps Inc.
    Inventor: Paul Joergen Birkler
  • Publication number: 20170324942
    Abstract: A camera and associated method of operation, the camera comprising a plurality of sensor systems, each sensor system comprising at least one spatial sensor and at least one image sensor, wherein at least part of a field of view of one or more or each of the sensor systems differs from at least part of the field of view of at least one or each other of the sensor systems.
    Type: Application
    Filed: May 1, 2017
    Publication date: November 9, 2017
    Inventors: Neil Tocher, Cameron Ure
  • Publication number: 20170324943
    Abstract: A driver-assistance method and a driver-assistance apparatus are provided. In the method, the movement trajectories of the wheels in the surroundings of a vehicle are calculated as the vehicle moves. Multiple cameras disposed on the vehicle are used to capture images of multiple perspective views surrounding the vehicle, and the images of the perspective views are transformed into images of a top view. A synthetic image surrounding the vehicle is generated according to the images of the perspective views and the top view. Finally, the synthetic image and the movement trajectories are mapped and combined into a 3D model surrounding the vehicle, and when backing up the vehicle, the 3D model is used to provide a movement image that includes the movement trajectories and has a viewing angle from the upper rear side to the lower front side of the vehicle.
    Type: Application
    Filed: April 24, 2017
    Publication date: November 9, 2017
    Applicant: VIA Technologies, Inc.
    Inventors: Min-Chang Wu, Kuan-Ting Lin
  • Publication number: 20170324944
    Abstract: Video display with favorable visibility is obtained even when the viewer's watching position or watching direction changes. A video display apparatus that receives an input of a video input signal and displays a video based on that signal includes a viewer detection unit that detects the positional relation between the screen on which the video is displayed and the viewer watching it and generates viewer position information including the detection result, an image processing unit that executes image correction processing for a correction region, which is a partial region of the image based on the video input signal set in correspondence with the viewer position information, and a video display unit that displays, on the screen, a video based on the corrected video signal that has been subjected to the image correction processing.
    Type: Application
    Filed: September 17, 2015
    Publication date: November 9, 2017
    Applicant: HITACHI MAXELL, LTD.
    Inventors: Mitsuo NAKAJIMA, Nobuhiro FUKUDA, Kazuhiko TANAKA, Nobuaki KABUTO
  • Publication number: 20170324945
    Abstract: Methods and apparatus for streaming content corresponding to a 360 degree field of view are described. The methods and apparatus of the present invention are well suited for use with 3D immersion systems and/or head mounted displays which allow a user to turn his or her head and see a corresponding scene portion. The methods and apparatus can support real or near real time streaming of 3D image content corresponding to a 360 degree field of view.
    Type: Application
    Filed: May 16, 2017
    Publication date: November 9, 2017
    Inventors: David Michael Cole, Alan McKay Moss
  • Publication number: 20170324946
    Abstract: A method for scanning an object having depths is provided, using a plurality of rod lenses to limit the blurring range of a contour image of an object having depths to enable an image capture unit to capture an identifiable contour image, wherein each of the diameter of each rod lens and the spacing between the rod lenses is smaller than the average width of the target. A scanning system for scanning an object having depths is also disclosed herein.
    Type: Application
    Filed: April 26, 2017
    Publication date: November 9, 2017
    Inventor: Kuo-Huei YU
  • Publication number: 20170324947
    Abstract: Systems and devices for acquiring imagery and three-dimensional (3D) models of objects are provided. An example device includes a platform configured to enable an object to be positioned thereon, and a plurality of scanners configured to capture geometry and texture information of the object when the object is positioned on the platform. A first scanner is positioned below the platform so as to capture an image of a portion of an underside of the object, a second scanner is positioned above the platform, and a third scanner is positioned above the platform and offset from a position of the second scanner. The scanners are positioned such that each scanner is outside of a field of view of other scanners. Scanners may include a camera, a light source, and a light-dampening element, and the device may include a control module configured to operate the scanners to individually scan the object.
    Type: Application
    Filed: July 24, 2017
    Publication date: November 9, 2017
    Inventors: James Robert Bruce, Arshan Poursohi
  • Publication number: 20170324948
    Abstract: A method and an apparatus for processing surrounding images of a vehicle are provided. In the method, plural cameras disposed on the vehicle are used to capture images of plural perspective views surrounding the vehicle. The images of the perspective views are transformed into images of a top view. An interval consisting of at least a preset number of consecutive empty pixels is found in one column of pixels in each image of the top view, and the images of the perspective views and the top view are divided into floor side images and wall side images according to the height of the interval in the image. The divided floor side images and wall side images are stitched to generate a synthetic image surrounding the vehicle.
    Type: Application
    Filed: July 19, 2016
    Publication date: November 9, 2017
    Inventors: Kuan-Ting Lin, Yi-Jheng Wu
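    Illustrative sketch (not from the patent): a hedged Python version of the interval search described in 20170324948 above, scanning one column of a top-view image for the first run of at least a preset number of consecutive empty pixels and using the row where it starts as the floor/wall split height. The run length and the "empty means zero" convention are assumptions.
    ```python
    # Hypothetical floor/wall split search in one top-view column (20170324948).
    # MIN_RUN and the definition of an "empty" pixel (value 0) are assumed.
    MIN_RUN = 5

    def find_split_height(column):
        run_start, run_len = None, 0
        for row, value in enumerate(column):
            if value == 0:                 # empty pixel
                if run_len == 0:
                    run_start = row
                run_len += 1
                if run_len >= MIN_RUN:
                    return run_start       # height used to divide floor/wall
            else:
                run_len = 0
        return None                        # no qualifying interval found

    print(find_split_height([7, 3, 0, 0, 4, 0, 0, 0, 0, 0, 2]))  # -> 5
    ```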
  • Publication number: 20170324949
    Abstract: Systems, methods, and computer readable media to resolve three dimensional spatial information of cameras used to construct 3D images. Various embodiments perform communication synchronization between a first image capture system and one or more other image capture systems and generate a first flash pulse that projects a light pattern into an environment. An image is captured that includes the light pattern and a modulated optical signal encoded with an identifier of the first image capture system and related-camera information. Another image capture system may emit a second flash at a second time based on the communication synchronization. During the second flash, the first image capture system captures a second image of the environment. Based on the first and second images, the first image capture system determines the orientation of the second image capture system relative to the first image capture system.
    Type: Application
    Filed: May 4, 2016
    Publication date: November 9, 2017
    Inventors: Denis G. Chen, Chin Han Lin
  • Publication number: 20170324950
    Abstract: Embodiments of the present application disclose various methods and an apparatus for controlling light field capture. One method for controlling light field capture comprises: determining, at least according to at least one sub-lens that affects imaging of a first region in a sub-lens array of a light field camera, at least one first sub-lens to be adjusted, the first region being a part of a scene to be shot; determining an object refocusing accuracy of a light field image section captured by the first sub-lens in a light field image of the scene to be shot; adjusting, according to the object refocusing accuracy, a light field capture parameter of the first sub-lens; and performing, based on the light field camera after being adjusted, light field capture on the scene to be shot.
    Type: Application
    Filed: September 10, 2015
    Publication date: November 9, 2017
    Inventors: LIN DU, LIANG ZHOU
  • Publication number: 20170324951
    Abstract: In a method and apparatus for processing video data, one or more processors are configured to encode a portion of stored video data in a pixel domain to generate pixel domain video data, a first graphics processing unit is configured to process the video data in a graphics domain to generate graphics domain video data, and an interface transmits the graphics domain video data and the pixel domain video data. One or more processors are configured to parse the video data into a graphics stream and an audio-video stream and decode the video data, a sensor senses movement adaptations of a user, and a second graphics processing unit is configured to generate a canvas on a spherical surface with texture information received from the graphics stream, and render a field of view based on the sensed movement adaptations of the user.
    Type: Application
    Filed: September 19, 2016
    Publication date: November 9, 2017
    Inventors: Vijayalakshmi Rajasundaram Raveendran, Mina Ayman Saleh Yanni Makar
  • Publication number: 20170324952
    Abstract: A method of calibrating a video game system can include removing an optical filter that filters out visible light from a field of view of a camera to allow the camera to view visible light. The method can also include displaying a calibration image on a display screen located within the field of view of the camera, processing the video output feed from the camera with a computer processor to identify the calibration image, and calculating with the processor coordinates that represent corners of the display screen in the field of view of the camera once the calibration image is identified.
    Type: Application
    Filed: May 3, 2017
    Publication date: November 9, 2017
    Inventors: Steve Lavache, Richard Baxter, Thomas John Roberts
  • Publication number: 20170324953
    Abstract: A moving picture coding method includes: performing context adaptive binary arithmetic coding in which a variable probability value is used, on first information among multiple types of sample adaptive offset (SAO) information used for SAO that is a process of assigning an offset value to a pixel value of a pixel included in an image generated by coding the input image; and continuously performing bypass arithmetic coding in which a fixed probability value is used, on second information and third information among the multiple types of the SAO information, wherein the coded second and third information are placed after the coded first information in the bit stream.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Inventors: Hisao SASAI, Kengo TERADA, Youji SHIBAHARA, Kyoko TANIKAWA, Toshiyasu SUGIO, Toru MATSUNOBU
  • Publication number: 20170324954
    Abstract: An image processing device includes a buffer for receiving encoded image data, and a processor to execute instructions that cause the processor to: decode the encoded image data from the buffer to generate quantized transform coefficient data; inversely quantize the quantized transform coefficient data using a 32×32 quantization matrix to generate predicted error data, the 32×32 quantization matrix includes a duplicate of at least one of two elements adjacent to each other from an 8×8 quantization matrix; and combine the predicted error data with a predicted image to generate decoded image data.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Inventor: Kazushi SATO
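    Illustrative sketch (not from the patent): one simple way to realize the duplication described in 20170324954 above is nearest-neighbour expansion of an 8x8 quantization matrix to 32x32, replicating each entry into a 4x4 block so adjacent elements are duplicates. The replication factor and toy matrix are assumptions.
    ```python
    # Hypothetical 8x8 -> 32x32 quantization-matrix expansion by element
    # duplication (20170324954). The 4x replication factor is assumed.
    def upsample_8x8_to_32x32(qm8):
        return [[qm8[r // 4][c // 4] for c in range(32)] for r in range(32)]

    qm8 = [[16 + r + c for c in range(8)] for r in range(8)]  # toy 8x8 matrix
    qm32 = upsample_8x8_to_32x32(qm8)
    assert len(qm32) == 32 and len(qm32[0]) == 32
    assert qm32[0][0] == qm8[0][0] and qm32[31][31] == qm8[7][7]
    ```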
  • Publication number: 20170324955
    Abstract: Disclosed are a method for determining a color difference component quantization parameter and a device using the method. A method for decoding an image can comprise the steps of: decoding a color difference component quantization parameter offset on the basis of size information of a transform unit; and calculating a color difference component quantization parameter index on the basis of the decoded color difference component quantization parameter offset. Therefore, the present invention enables effective quantization by applying different color difference component quantization parameters according to the size of the transform unit when executing the quantization.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Sung Chang LIM, Hui Yong KIM, Se Yoon JEONG, Jong Ho KIM, Ha Hyun LEE, Jin Ho LEE, Jin Soo CHOI, Jin Woong KIM
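    Illustrative sketch (not from the patent): a hedged example of deriving a chroma quantization parameter index from a transform-unit-size-dependent offset, as outlined in 20170324955 above. The size-to-offset mapping and the 0-51 clipping range are assumptions.
    ```python
    # Hypothetical chroma QP index derivation (20170324955). The offsets
    # per transform-unit size and the clipping range are assumed.
    SIZE_TO_OFFSET = {4: 0, 8: 1, 16: 2, 32: 3}   # hypothetical mapping

    def chroma_qp_index(luma_qp, tu_size):
        offset = SIZE_TO_OFFSET.get(tu_size, 0)
        return max(0, min(51, luma_qp + offset))

    print(chroma_qp_index(luma_qp=30, tu_size=16))  # -> 32
    ```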
  • Publication number: 20170324956
    Abstract: Provided is a method for encoding an image using depth information, the method including: selecting a current coding unit (CU); verifying object information of the current CU from object information obtained from a depth image; and verifying whether or not the current CU is composed of a single object based on the object information, and predicting a division structure of the current CU according to whether or not the current CU is composed of a single object.
    Type: Application
    Filed: July 26, 2017
    Publication date: November 9, 2017
    Applicant: UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY
    Inventors: Gwang Hoon PARK, Tae Wook KIM, Yoon Jin LEE
  • Publication number: 20170324957
    Abstract: Disclosed are an image quantization parameter decoding method and systems for decoding a quantization parameter for a video decoding process that is based on context-based adaptive binary arithmetic coding. In one embodiment, an image quantization parameter decoding method includes binary-arithmetic-decoding a first bin indicating whether or not a delta quantization parameter is significant, other bins, which are subsequent to the first bin, indicating an absolute value of the delta quantization parameter, and a sign bin, which is subsequent to the other bins, indicating whether the delta quantization parameter is positive or negative. The method further includes generating a delta quantization parameter by de-binarizing the first bin, the other bins and the sign bin. The method further includes generating a re-constructed quantization parameter by adding a predicted quantization parameter to the delta quantization parameter.
    Type: Application
    Filed: July 20, 2017
    Publication date: November 9, 2017
    Inventors: Keiichi CHONO, Hirofumi Aoki
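    Illustrative sketch (not from the patent): a hedged reconstruction of the delta quantization parameter from the bin layout described in 20170324957 above (significance bin, absolute-value bins, sign bin). The bins here come from a plain list standing in for an arithmetic decoder, and the absolute value is assumed to be unary-coded.
    ```python
    # Hypothetical delta-QP reconstruction (20170324957). A Python list
    # stands in for the arithmetic decoder; unary magnitude coding assumed.
    def decode_delta_qp(bins):
        it = iter(bins)
        if next(it) == 0:                  # first bin: is delta QP significant?
            return 0
        magnitude = 1
        while next(it) == 1:               # following bins: absolute value
            magnitude += 1
        sign = next(it)                    # sign bin: 0 positive, 1 negative
        return -magnitude if sign else magnitude

    def reconstruct_qp(predicted_qp, bins):
        return predicted_qp + decode_delta_qp(bins)

    print(reconstruct_qp(26, [1, 1, 0, 1]))  # delta = -2 -> QP 24
    ```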
  • Publication number: 20170324958
    Abstract: Methods and systems for compressing images include generating custom quantization tables for quantizing frequency information associated with an image. Specifically, one or more embodiments determine acceptable error percentages during compression of a digital image based on content of the digital image. For example, the acceptable error percentages are defined by compression error thresholds that limit how much error a quantizer in a quantization table can introduce during compression of the digital image. One or more embodiments generate the custom quantization table by determining quantizer values that produce compression errors that meet the compression error thresholds. One or more embodiments compress the digital image using the custom quantization table.
    Type: Application
    Filed: May 9, 2016
    Publication date: November 9, 2017
    Inventors: Tarun Tandon, Mohd. Yawar Nihal Siddiqui, Kshitiz Bakshi
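    Illustrative sketch (not from the patent): a hedged example of building a custom quantization table from per-frequency compression error thresholds, in the spirit of 20170324958 above, by choosing the largest quantizer whose worst-case rounding error (half the step size) stays within the threshold. The error model and example thresholds are assumptions.
    ```python
    # Hypothetical custom quantization table from error thresholds
    # (20170324958). Worst-case error of a quantizer q is taken as q / 2.
    def build_quant_table(error_thresholds, max_q=255):
        table = []
        for threshold in error_thresholds:
            q = min(max_q, max(1, int(2 * threshold)))
            table.append(q)
        return table

    # Tighter thresholds for low frequencies, looser for high frequencies.
    thresholds = [1, 1, 2, 2, 4, 6, 8, 12]
    print(build_quant_table(thresholds))  # -> [2, 2, 4, 4, 8, 12, 16, 24]
    ```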
  • Publication number: 20170324959
    Abstract: A method and an apparatus for coding at least one high dynamic range picture into a coded bitstream, and corresponding decoding method and apparatus are disclosed. The encoding method includes selecting a predetermined post-processing color correction function bp_det among a set of predetermined post-processing color correction functions bpset, according to at least one parameter computed from at least said high dynamic range picture, determining a pre-processing color correction function b0 from the selected predetermined post-processing color correction function bp_det, decomposing the high dynamic range picture, into a standard dynamic range picture, using the pre-processing color correction function b0, coding the standard dynamic range picture into the coded bitstream, coding at least one parameter for reconstructing the high dynamic range picture from the standard dynamic range picture decoded from the coded bitstream and a post-processing color correction function bp_dec.
    Type: Application
    Filed: April 29, 2017
    Publication date: November 9, 2017
    Inventors: Yannick OLIVIER, Francois CELLIER, Christophe CHEVANCE, David TOUZE, Edouard FRANCOIS
  • Publication number: 20170324960
    Abstract: The present invention relates to an entropy decoding method which includes: generating context related to a bin that forms a codeword of a syntax element; and performing arithmetic decoding of the bin based on the context.
    Type: Application
    Filed: July 6, 2017
    Publication date: November 9, 2017
    Inventors: Jaehyun Lim, Byeongmoon Jeon, Yongjoon Jeon, Seungwook Park, Jungsun Kim, Joonyoung Park, Hendry Hendry, Naeri Park, Chulkeun Kim
  • Publication number: 20170324961
    Abstract: Disclosed is a method for predicting depth map coding distortion of a two-dimensional free viewpoint video, including: inputting sequences of texture maps and depth maps of two or more viewpoint stereoscopic videos; synthesizing a texture map of a first intermediate viewpoint between a current to-be-coded viewpoint and a first adjacent viewpoint, and synthesizing a texture map of a second intermediate viewpoint between the current to-be-coded viewpoint and a second adjacent viewpoint by using a view synthesis algorithm; recording a synthetic characteristic of each pixel according to the texture map and generating a distortion prediction weight; and calculating the total distortion according to the synthetic characteristic and the distortion prediction weight.
    Type: Application
    Filed: July 26, 2017
    Publication date: November 9, 2017
    Inventors: Xin JIN, Chenyang LI, Qionghai DAI
  • Publication number: 20170324962
    Abstract: A video decoder is configured to, for a group of video blocks of the video data, determine a number of merged groups for a plurality of classes is equal to one merged group; receive a first flag indicating that filter coefficient information for at least one merged group is not coded in the video data; receive for the one merged group, a second flag, wherein a first value for the second flag indicates that filter coefficient information mapped to the one merged group is coded in the video data, and wherein a second value for the second flag indicates that the filter coefficient information mapped to the one merged group is all zero values; determine the second flag is equal to the second value; and determine one or more filters from the set of filters using the all zero values.
    Type: Application
    Filed: May 8, 2017
    Publication date: November 9, 2017
    Inventors: Marta Karczewicz, Li Zhang, Wei-Jung Chien
  • Publication number: 20170324963
    Abstract: A method and an apparatus of encoding/decoding an intra prediction mode using a plurality of candidate intra prediction modes are disclosed. The method includes deriving three candidate intra prediction modes for a current block and deriving an intra prediction mode of the current block.
    Type: Application
    Filed: July 26, 2017
    Publication date: November 9, 2017
    Inventor: Sun Young LEE
  • Publication number: 20170324964
    Abstract: A method and system may identify a video data block using a video codec and apply a transform kernel of a butterfly asymmetric discrete sine transform (ADST) to the video data block in a pipeline.
    Type: Application
    Filed: July 17, 2017
    Publication date: November 9, 2017
    Inventors: Jingning HAN, Yaowu XU, Debargha MUKHERJEE
  • Publication number: 20170324965
    Abstract: Methods and systems are provided for image processing. A plurality of correlation parameters representing degrees of correlation between two or more images of a plurality of images may be produced. An optimized correlation dependency graph may be produced according to the plurality of correlation parameters. The plurality of images may then be delta encoded according to the optimized correlation dependency graph. For example, the optimized correlation dependency graph may be used for performing a correlation encoding operation. The plurality of correlation parameters may be produced, for example, in accordance with one or more correlation metrics associated with the correlation encoding operation.
    Type: Application
    Filed: July 27, 2017
    Publication date: November 9, 2017
    Inventors: Urs-Viktor Marti, Denis Schlauss, Lukas Hohl, Beat Herrmann
  • Publication number: 20170324966
    Abstract: A method performed by a video encoder for encoding a current picture belonging to a temporal level identified by a temporal_id. The method includes determining a Reference Picture Set (RPS) for the current picture indicating reference pictures that are kept in a decoded picture buffer (DPB) when decoding the current picture. When the current picture is a temporal switching point, the method further comprises operating to ensure that the RPS of the current picture includes no picture having a temporal_id greater than or equal to the temporal_id of the current picture.
    Type: Application
    Filed: July 27, 2017
    Publication date: November 9, 2017
    Inventors: Rickard SJÖBERG, Jonatan SAMUELSSON
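    Illustrative sketch (not from the patent): a hedged check of the constraint described in 20170324966 above, keeping only reference pictures whose temporal_id is strictly below the current picture's temporal_id when the picture is a temporal switching point. The picture representation is a simplified stand-in.
    ```python
    # Hypothetical RPS restriction at a temporal switching point (20170324966).
    # Pictures are plain dicts standing in for real DPB entries.
    def restrict_rps(rps, current_temporal_id, is_switching_point):
        if not is_switching_point:
            return list(rps)
        return [pic for pic in rps if pic["temporal_id"] < current_temporal_id]

    rps = [{"poc": 8, "temporal_id": 0},
           {"poc": 10, "temporal_id": 2},
           {"poc": 11, "temporal_id": 3}]
    print(restrict_rps(rps, current_temporal_id=2, is_switching_point=True))
    # -> only the temporal_id 0 picture remains in the RPS
    ```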
  • Publication number: 20170324967
    Abstract: A method for controlling bitstream decoding is provided. The bitstream includes a plurality of frames. The method includes: generating a performance indicator according to a decoding time of at least one previous frame; generating a dropping decision according to the performance indicator, wherein the dropping decision indicates whether a frame needs to be dropped; and determining whether to drop a current frame according to the dropping decision.
    Type: Application
    Filed: November 2, 2016
    Publication date: November 9, 2017
    Inventors: Ya-Ting Yang, Yi-Shin Tung
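    Illustrative sketch (not from the patent): a hedged frame-dropping decision driven by a performance indicator computed from recent decoding times, as outlined in 20170324967 above. The averaging window, frame budget, and the rule of dropping only non-reference frames are assumptions.
    ```python
    # Hypothetical decode-time-driven frame dropping (20170324967).
    # Window size, budget, and non-reference-only dropping are assumed.
    from collections import deque

    class DropController:
        def __init__(self, frame_budget_ms, window=8):
            self.frame_budget_ms = frame_budget_ms
            self.times = deque(maxlen=window)

        def report_decode_time(self, ms):
            self.times.append(ms)

        def should_drop(self, frame_is_reference):
            if not self.times:
                return False
            indicator = sum(self.times) / len(self.times)  # performance indicator
            return indicator > self.frame_budget_ms and not frame_is_reference

    ctrl = DropController(frame_budget_ms=33.3)
    for t in (40, 45, 38):
        ctrl.report_decode_time(t)
    print(ctrl.should_drop(frame_is_reference=False))  # -> True
    ```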
  • Publication number: 20170324968
    Abstract: The invention provides a video codec. In one embodiment, the video codec is coupled to an outer memory storing a reference frame, and comprises an interface circuit, an in-chip memory, a motion estimation circuit, and a controller. The interface circuit obtains in-chip data from the reference frame stored in the outer memory. The in-chip memory stores the in-chip data. The motion estimation circuit retrieves search window data from the in-chip data with a search window, and performs a motion estimation process on a current macroblock according to the search-window data. The controller shifts the location of the search window when the current macroblock is shifted, marks a macroblock shifted out from the search window as an empty macroblock, and controls the interface circuit to obtain an updated macroblock for replacing the empty macroblock in the in-chip memory from the reference frame stored in the outer memory.
    Type: Application
    Filed: July 27, 2017
    Publication date: November 9, 2017
    Inventors: Zhichong CHEN, Jinfeng ZHOU, Jianbin HE, Liu YANG, Qiang LI
  • Publication number: 20170324969
    Abstract: A system for decoding a video bitstream includes receiving a frame of the video that includes at least one slice and at least one tile, where the at least one slice and the at least one tile are not all aligned with one another.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 9, 2017
    Inventors: Seung-Hwan KIM, Christopher A. SEGALL, Jie ZHAO
  • Publication number: 20170324970
    Abstract: Encoded data is decoded based on tile data division information, tile data position information, block line data division information, and block line data position information. The tile data division information indicates whether the encoded data is composed of tile data items that serve as encoded data items of tiles. The tile data position information indicates positions of the tile data items. The block line data division information indicates whether each tile data item is composed of first block line data and second block line data. The first block line data serves as encoded data of a first block line that is a set of blocks arranged linearly. The second block line data serves as encoded data of a second block line next to the first block line. The block line data position information indicates a position of the second block line data.
    Type: Application
    Filed: July 24, 2017
    Publication date: November 9, 2017
    Inventor: Koji Okawa
  • Publication number: 20170324971
    Abstract: Disclosed are techniques for creating, coding, decoding, and using, rotation information related to one or more coded pictures in non-normative parts of a coded video bitstream.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Applicant: VIDYO, INC.
    Inventors: Jill Boyce, Stephen Cipolli, Jonathan Lennox, Stephan Wenger, Danny Hong
  • Publication number: 20170324972
    Abstract: The present invention relates to a video signal decoding method that adds an intra prediction mode as a sub-macroblock type when predicting a macroblock in coding a video signal. Some implementations may include obtaining a macroblock type; when the macroblock includes an intra-prediction-coded sub-macroblock and an inter-prediction-coded sub-macroblock based on the macroblock type, obtaining prediction mode flag information indicating whether a sub-macroblock is intra prediction coded or inter prediction coded; and obtaining a prediction value of the sub-macroblock. Accordingly, implementations disclosed herein may raise the coding efficiency of a video signal by adding an intra prediction mode as a sub-macroblock type in predicting a macroblock.
    Type: Application
    Filed: July 26, 2017
    Publication date: November 9, 2017
    Inventors: Seung Wook PARK, Jung Sun Kim, Young Hee Choi, Byeong Moon Jeon, Joon Young Park
  • Publication number: 20170324973
    Abstract: Techniques related to video coding with a multi-pass prediction mode decision pipeline.
    Type: Application
    Filed: May 5, 2016
    Publication date: November 9, 2017
    Inventors: JASON TANNER, JAY PATEL
  • Publication number: 20170324974
    Abstract: An image processing apparatus and an image processing method thereof are provided. A shared storage unit of a motion estimation and motion compensation apparatus captures frame data from a storage unit through a bus. A motion vector estimation unit and a motion compensation unit capture image data for executing a motion vector estimation operation and a motion compensation operation from the shared storage unit.
    Type: Application
    Filed: May 6, 2016
    Publication date: November 9, 2017
    Inventor: Der-Wei Yang
  • Publication number: 20170324975
    Abstract: An encoding method and apparatus and a decoding method and apparatus for determining a motion vector of a current block based on a motion vector of at least one previously-encoded or previously-decoded block are provided. The decoding method includes: decoding information regarding a prediction direction from among a first direction, a second direction, and bi-directions, and information regarding pixel values of the current block; determining the prediction direction in which the current block is to be predicted, based on the decoded information regarding the prediction direction, and determining a motion vector for predicting the current block in the determined prediction direction; and restoring the current block, based on the determined motion vector and the decoded information regarding the pixel values, wherein the first direction is a direction from a current picture to a previous picture, and the second direction is a direction from the current picture to a subsequent picture.
    Type: Application
    Filed: July 21, 2017
    Publication date: November 9, 2017
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Tammy LEE, Woo-jin HAN
  • Publication number: 20170324976
    Abstract: When a block (MB22) whose motion vector is referred to in the direct mode contains a plurality of motion vectors, two motion vectors MV23 and MV24, which are used for inter picture prediction of a current picture (P23) to be coded, are determined by scaling a value obtained by averaging the plurality of motion vectors or by selecting one of the plurality of motion vectors.
    Type: Application
    Filed: July 3, 2017
    Publication date: November 9, 2017
    Inventors: Satoshi KONDO, Shinya KADONO, Makoto HAGAI, Kiyofumi ABE
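    Illustrative sketch (not from the patent): a hedged example of the direct-mode derivation described in 20170324976 above, averaging the co-located block's motion vectors and scaling the result by temporal distance to obtain the two vectors used for the current picture. The POC-distance scaling is the conventional approach and is assumed here, as are the example distances.
    ```python
    # Hypothetical direct-mode motion-vector derivation (20170324976).
    # Averaging plus temporal-distance scaling; distances are assumed.
    def average_mv(mvs):
        n = len(mvs)
        return (sum(x for x, _ in mvs) / n, sum(y for _, y in mvs) / n)

    def scale_mv(mv, dist_current, dist_colocated):
        s = dist_current / dist_colocated
        return (mv[0] * s, mv[1] * s)

    colocated_mvs = [(8, -4), (12, 0)]        # plural MVs in the referred block
    base = average_mv(colocated_mvs)          # -> (10.0, -2.0)
    mv_fwd = scale_mv(base, dist_current=1, dist_colocated=2)    # cf. MV23
    mv_bwd = scale_mv(base, dist_current=-1, dist_colocated=2)   # cf. MV24
    print(mv_fwd, mv_bwd)                     # (5.0, -1.0) (-5.0, 1.0)
    ```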