Patent Applications Published on November 12, 2020
  • Publication number: 20200358977
    Abstract: An object is to reduce the circuit scale in a solid-state imaging element that detects an address event. The solid-state imaging element is provided with a plurality of photoelectric conversion elements, a signal supply unit, and a detection unit. In this solid-state imaging element, each of the plurality of photoelectric conversion elements photoelectrically converts incident light to generate a first electric signal. Furthermore, in the solid-state imaging element, the detection unit detects whether or not a change amount of the first electric signal of each of the plurality of photoelectric conversion elements exceeds a predetermined threshold and outputs a detection signal indicating a result of the detection.
    Type: Application
    Filed: January 18, 2019
    Publication date: November 12, 2020
    Inventors: Atsumi Niwa, Yusuke Oike
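The per-pixel threshold comparison this abstract describes can be sketched in a few lines. This is an illustrative model only (names, the 1-D pixel list, and the absolute-value comparison are assumptions), not the patented circuit:

```python
# Hedged sketch of address-event detection: each pixel's signal change
# is compared against a threshold, and a detection flag (event) is
# emitted where the threshold is exceeded.

def detect_events(prev_signals, curr_signals, threshold):
    """Return per-pixel detection flags: True where |change| > threshold."""
    events = []
    for prev, curr in zip(prev_signals, curr_signals):
        change = curr - prev
        events.append(abs(change) > threshold)
    return events

# Only the pixel whose signal jumped by more than 10 fires an event.
print(detect_events([100, 100, 100], [103, 125, 98], threshold=10))
# [False, True, False]
```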
  • Publication number: 20200358978
    Abstract: A display apparatus includes a display panel including a display area configured to display an image, and a non-display area, at least one first sound generator in the display area, and at least one second sound generator in the non-display area, wherein each of the at least one first sound generator and the at least one second sound generator is configured to vibrate the display panel to generate sound toward a front of the display panel.
    Type: Application
    Filed: July 28, 2020
    Publication date: November 12, 2020
    Applicant: LG Display Co., Ltd.
    Inventors: Sungtae LEE, SeYoung KIM, KwanHo PARK, YeongRak CHOI, Kwangho KIM, Sungsu HAM
  • Publication number: 20200358979
    Abstract: Systems and methods can support a data processing apparatus. The data processing apparatus can include a data processor that is associated with a data capturing device on a stationary object and/or a movable object. The data processor can receive data in a data flow from one or more data sources, wherein the data flow is configured based on a time sequence. Then, the data processor can receive a control signal, which is associated with a first timestamp, wherein the first timestamp indicates a first time. Furthermore, the data processor can determine a first data segment by applying the first timestamp on the data flow, wherein the first data segment is associated with a time period in the time sequence that includes the first time.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Sheldon SCHWARTZ, Tao Wang, Mingyu Wang, Zisheng Cao
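The "apply a timestamp to the data flow to find its data segment" step can be illustrated with a small sketch. The fixed-length segmentation and all names below are assumptions for illustration, not the patented method:

```python
import bisect

# Given samples at sorted `timestamps`, find the segment whose time
# period contains the control signal's first timestamp.

def find_segment(timestamps, first_timestamp, segment_length):
    """Return (start, end) sample indices of the containing segment."""
    # Index of the last sample at or before the control signal's time.
    idx = bisect.bisect_right(timestamps, first_timestamp) - 1
    seg = idx // segment_length
    start = seg * segment_length
    return start, min(start + segment_length, len(timestamps))

# Samples at t = 0..9 cut into segments of 4; a control signal stamped
# t=5 falls in the second segment (samples 4..7).
print(find_segment(list(range(10)), 5, 4))  # (4, 8)
```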
  • Publication number: 20200358980
    Abstract: A processing load on a reception side when subtitle graphics data is superimposed on video data is alleviated. A video stream including video data is generated. A subtitle stream including bitmap data is generated, the bitmap data being obtained by converting subtitle graphics data. A container having a predetermined format containing the video stream and the subtitle stream is transmitted. The subtitle stream includes a bitmap conversion table containing conversion information of a color gamut and/or a luminance. On the reception side, subtitle graphics data having characteristics matched with those of target video data of a superimposition destination can be easily obtained by just converting the bitmap data to the subtitle graphics data by using the bitmap conversion table.
    Type: Application
    Filed: July 23, 2020
    Publication date: November 12, 2020
    Applicant: SONY CORPORATION
    Inventor: Ikuo TSUKAGOSHI
  • Publication number: 20200358981
    Abstract: An electronic device includes a camera; a display; at least one sensor; a communication unit configured to establish wireless communication with another electronic device using at least one protocol; and a processor configured to be functionally connected to the camera, the display, the at least one sensor, and the communication unit, wherein the processor is configured to perform a call with the other electronic device, detect a state change of the electronic device based on sensing information sensed by the at least one sensor while the call is maintained, determine whether the state change of the electronic device corresponds to a user gesture for switching a call mode, and in response to determining that the state change of the electronic device corresponds to the user gesture for switching the call mode, switch the call mode.
    Type: Application
    Filed: May 26, 2020
    Publication date: November 12, 2020
    Inventors: Wonsik Lee, Jongkyun Shin, Hyunyeul Lee, Pragam Rathore, Yang-Hee Kwon, Young-Rim Kim, June-Seok Kim, Jinho Song, Ji-In Won, Dong Oh Lee, Sunjung Lee, Jingoo Lee, Taik Heon Rhee, Wan-Soo Lim, Sung-Bin Jeon, Seungyeon Chung, Kyuhyung Choi, Taegun Park, Dong-Hyun Yeom, Suha Yoon, Euichang Jung, Cheolho Cheong
  • Publication number: 20200358982
    Abstract: A video conference system, a video conference apparatus and a video conference method are provided. The video conference system includes a video conference apparatus and a display apparatus. The video conference apparatus includes an image detection device, a sound source detection device, and a processor. The image detection device obtains a conference image of a conference space. When the sound source detection device detects a sound generated by a sound source in the conference space, the sound source detection device outputs a positioning signal. The processor receives the positioning signal, and determines whether a real face image exists in a sub-image block of the conference image corresponding to the sound source according to the positioning signal to output the image signal. The display apparatus displays a close-up conference image including the real face image according to the image signal.
    Type: Application
    Filed: May 5, 2020
    Publication date: November 12, 2020
    Applicant: Optoma Corporation
    Inventors: Yuan-Mao Tsui, Shou-Hsiu Hsu, Yu-Cheng Lee
  • Publication number: 20200358983
    Abstract: One variation of a method for video conferencing includes, at a first device associated with a first user: capturing a first video feed; representing constellations of facial landmarks, detected in the first video feed, in a first feed of facial landmark containers; and transmitting the first feed of facial landmark containers to a second device. The method further includes, at the second device associated with a second user: accessing a first face model representing facial characteristics of the first user; accessing a synthetic face generator; transforming the first feed of facial landmark containers and the first face model into a first feed of synthetic face images according to the synthetic face generator; and rendering the first feed of synthetic face images.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 12, 2020
    Inventors: Yousif Astarabadi, Matt Mireles, Shaun Astarabadi
  • Publication number: 20200358984
    Abstract: The present disclosure relates to a method of providing at least one image of at least one real object captured by at least one scene camera of a plurality of scene cameras mounted on a vehicle. The method includes: providing camera poses of respective scene cameras of the plurality of scene cameras relative to a reference coordinate system associated with the vehicle, providing user attention data related to a user captured by an information capturing device, providing at least one attention direction relative to the reference coordinate system from the user attention data, determining at least one of the scene cameras among the plurality of scene cameras according to the at least one attention direction and the respective camera pose of the at least one of the scene cameras, and providing at least one image of at least one real object captured by the at least one of the scene cameras.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Lejing Wang, Thomas Alt
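The camera-selection step ("determining at least one of the scene cameras according to the attention direction and the camera pose") amounts to picking the camera whose viewing direction best matches the attention direction. A minimal sketch, with all vectors in the vehicle reference frame and all names assumed for illustration:

```python
import math

def select_camera(camera_directions, attention_direction):
    """Return the index of the camera whose viewing direction has the
    smallest angle to the user's attention direction."""
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        # Clamp to guard against floating-point overshoot of [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
    angles = [angle(d, attention_direction) for d in camera_directions]
    return angles.index(min(angles))

# Front, left, and right cameras; the user looks mostly to the left.
cams = [(1, 0, 0), (0, 1, 0), (0, -1, 0)]
print(select_camera(cams, (0.2, 0.9, 0)))  # 1 (the left camera)
```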
  • Publication number: 20200358985
    Abstract: A photography control method includes temporarily saving one or more sets of photography data, taken by a photographing camera, in a temporary photography data storage unit, acquiring identification information identifying a subject, extracting, from the one or more sets of photography data temporarily saved in the temporary photography data storage unit, at least one set of photography data corresponding to the identification information acquired in the acquiring of identification information; and saving the extracted at least one set of photography data in a photography data storage unit in a manner associated with the identification information of the subject.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Applicant: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventor: Kazuhiko YAMASHITA
  • Publication number: 20200358986
    Abstract: A display control device includes a detection unit that detects a condition of a moving body and a control unit that controls a display mode of a display unit having a transmissive mode in which a landscape outside the moving body is displayed on at least a part of a display screen, on the basis of a detection result of the detection unit. The display unit has the transmissive mode and a non-transmissive mode in which a content movie is displayed by superimposing the content movie onto at least a part of the landscape outside the moving body or on the entire display screen, and the control unit sets the display unit to either the transmissive mode or the non-transmissive mode according to a detection result of the detection unit.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventors: KAZUMA YOSHII, YOSHINORI NASADA, KOJI NAGATA, TAKEHIKO TAHIRA, SHO TANAKA
  • Publication number: 20200358987
    Abstract: A trauma scene monitoring system includes a medic-worn illumination device, a casualty-worn informatics system, and a remote monitoring station. The illumination device includes a frame with boom-mounted light sources positioned below the wearer's eyes near the zygomatic bones, thus orienting the light sources to project light in the direction of the wearer's view. Also included are audio/video means to capture audio/video information from a scene attended by the medic, and a telemetry unit to transmit that information to the remote monitoring station. The casualty-worn informatics system is integrated within a headband worn by a monitored individual. The informatics system includes sensors to provide the monitored individual's vital statistics and a telemetry unit to transmit data concerning the monitored individual to the remote monitoring station.
    Type: Application
    Filed: May 4, 2020
    Publication date: November 12, 2020
    Inventor: Jeremy B. Ross
  • Publication number: 20200358988
    Abstract: A surveillance duo that includes a pod and a rover.
    Type: Application
    Filed: July 28, 2020
    Publication date: November 12, 2020
    Applicant: C-Tonomy, LLC
    Inventors: Stephen W. ELLIS, Basil I. JESUDASON, John E. DOLAN
  • Publication number: 20200358989
    Abstract: The present technology relates to a solid-state imaging device and an electronic apparatus that enable simultaneous acquisition of a signal for generating a high dynamic range image and a signal for detecting a phase difference. The solid-state imaging device includes a plurality of pixel sets each including color filters of the same color, for a plurality of colors, each pixel set including a plurality of pixels. Each pixel includes a plurality of photodiodes PD. The present technology can be applied, for example, to a solid-state imaging device that generates a high dynamic range image and detects a phase difference, and the like.
    Type: Application
    Filed: November 12, 2018
    Publication date: November 12, 2020
    Inventor: Kozo Hoshino
  • Publication number: 20200358990
    Abstract: An imaging system includes an image combiner, at least one reflecting mirror, an image generating device, a communication module and a distance sensor. The at least one reflecting mirror is disposed with respect to the image combiner. The image generating device is disposed with respect to the at least one reflecting mirror. When the image generating device displays an image, a light projected by the image generating device is reflected by the at least one reflecting mirror to the image combiner, so as to form a virtual image through the image combiner. The distance sensor senses a distance between an object and the imaging system and transmits the distance to the image generating device through the communication module. When the image generating device determines that the distance is larger than a predetermined threshold, the image generating device adjusts a display size of the image according to the distance.
    Type: Application
    Filed: April 21, 2020
    Publication date: November 12, 2020
    Inventors: Tsung-Hsun Wu, Wei-Chun Chang
  • Publication number: 20200358991
    Abstract: A projection system, a projection image adjusting method and a projector are provided. The projection system includes a projector and a control unit. The projector projects a projection image. The control unit controls movement of an image grid point of the projection image projected by the projector. When the projector receives a continuous adjustment signal output by the control unit, the projector determines a cumulative displacement amount proportional to a continuous signal quantity cumulative value according to the continuous signal quantity cumulative value of the continuous adjustment signal, and the image grid point is moved by the projector according to the cumulative displacement amount to correspondingly deform at least a part of the projection image, so as to provide a convenient projection image adjustment effect.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 12, 2020
    Applicant: Coretronic Corporation
    Inventors: Chun-Lin Chien, Yu-Kuan Chang
  • Publication number: 20200358992
    Abstract: There is provided an image processing apparatus that generates a display image to be displayed in a display system including a display area. An obtaining unit obtains one input image acquired through shooting by one image capturing apparatus. A generating unit generates the display image from the input image on the basis of a correspondence between a first projection plane corresponding to the input image and a second projection plane corresponding to the display area.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventor: Masatoshi Ishii
  • Publication number: 20200358993
    Abstract: At least one embodiment provides a method for reconstructing an image from metadata representing a given color space, comprising: mapping said given color space to a candidate color space which encompasses in its convex hull all the primaries of the given color space; and reconstructing the image using said candidate color space.
    Type: Application
    Filed: December 11, 2018
    Publication date: November 12, 2020
    Inventors: Pierre ANDRIVON, David TOUZE, Catherine SERRE
  • Publication number: 20200358994
    Abstract: The disclosure extends to methods, systems, and computer program products for producing an image in light deficient environments with luminance and chrominance emitted from a controlled light source.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: Depuy Synthes Products, Inc.
    Inventors: Laurent Blanquart, John Richardson
  • Publication number: 20200358995
    Abstract: There is provided a reproduction apparatus, a reproduction method, a program, and a recording medium that can prevent an unnatural change in luminance in a case where other information is displayed by being superimposed on a video. A reproduction apparatus according to an aspect of the present technology receives a video stream to which dynamic metadata including luminance information of a picture has been added, and in a case where predetermined information is superimposed and displayed on the picture, adds, to the picture constituting the received video stream, the metadata indicated by a flag as the metadata to be used for luminance adjustment while the predetermined information is superimposed and displayed, and outputs the picture to which the metadata has been added to a display apparatus. The present technology can be applied to a Blu-ray (registered trademark) Disc player.
    Type: Application
    Filed: October 17, 2018
    Publication date: November 12, 2020
    Applicant: SONY CORPORATION
    Inventor: Kouichi UCHIMURA
  • Publication number: 20200358996
    Abstract: Provided is a real-time aliasing rendering method for a 3D VR video and a virtual three-dimensional scene, including: capturing 3D camera video signals in real time and processing the same to generate texture data; creating a virtual three-dimensional scene according to the proportion of a real scene; generating virtual camera rendering parameters according to a physical position of the 3D camera and a shooting angle relationship; aliasing the texture data onto a virtual three-dimensional object in a virtual scene, and adjusting the position of the virtual three-dimensional object according to a physical positional relationship between the virtual three-dimensional scene and the real scene, so as to form a complete virtual reality combined three-dimensional scene; and rendering the virtual reality combined three-dimensional scene by using the virtual camera rendering parameters to obtain a simulated rendering picture.
    Type: Application
    Filed: September 4, 2017
    Publication date: November 12, 2020
    Inventor: Bin Cheng
  • Publication number: 20200358997
    Abstract: A method for automatic registration of 3D image data, captured by a 3D image capture system having an RGB camera and a depth camera, includes capturing 2D image data with the RGB camera at a first pose; capturing depth data with the depth camera at the first pose; performing an initial registration of the RGB camera to the depth camera; capturing 2D image data with the RGB camera at a second pose; capturing depth data at the second pose; and calculating an updated registration of the RGB camera to the depth camera.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Inventors: Patrick O'Keefe, Jeffrey Roger Powers, Nicolas Burrus
  • Publication number: 20200358998
    Abstract: An image processing apparatus includes an acquisition unit which acquires a parallax image generated based on a signal of a photoelectric converter among a plurality of photoelectric converters which receive light beams passing through partial pupil regions of an imaging optical system different from each other, and acquires a captured image generated by combining signals of the plurality of photoelectric converters, and an image processing unit which performs correction process so as to reduce a defect included in the parallax image based on the captured image.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventor: Koichi Fukuda
  • Publication number: 20200358999
    Abstract: A baseline adjustment method includes acquiring a first image of an object by a first imaging device of an imaging apparatus, acquiring a second image of the object by a second imaging device of the imaging apparatus, determining a binocular disparity between the first and second image and determining an object distance based at least on the binocular disparity by the controller, and automatically adjusting a baseline according to the object distance by a baseline adjustment mechanism. The object distance represents a distance between the imaging apparatus and the object. The baseline represents a relative distance between the first and second imaging device. The baseline is adjusted to fall within a range approximately between a minimum and a maximum baseline. The maximum baseline is defined by the object distance and an angle-of-view of at least one of the first or the second imaging device.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Inventors: Guyue ZHOU, Ketan TANG, Xingyu ZHANG, Cong ZHAO
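The relationships this abstract relies on are standard stereo geometry: for a rectified pair, disparity gives object distance (Z = f·B/d), and the object distance plus the angle-of-view bound the usable baseline. The formulas and names below are textbook assumptions for illustration, not the patented adjustment mechanism:

```python
import math

def object_distance(focal_px, baseline_m, disparity_px):
    """Distance to the object from binocular disparity (rectified stereo)."""
    return focal_px * baseline_m / disparity_px

def max_baseline(distance_m, fov_rad):
    """A rough upper bound on the baseline that keeps the object inside
    both views: the width of the shared field at the object distance."""
    return 2.0 * distance_m * math.tan(fov_rad / 2.0)

# 700 px focal length, 0.1 m baseline, 14 px disparity -> object 5 m away.
z = object_distance(700, 0.1, 14)
print(z)  # 5.0
# With a 60-degree angle-of-view, the baseline bound at 5 m:
print(round(max_baseline(z, math.radians(60)), 3))
```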
  • Publication number: 20200359000
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for the additional view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
    Type: Application
    Filed: March 20, 2020
    Publication date: November 12, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Sang Woon KWAK, Kug Jin YUN, Jun Young JEONG
  • Publication number: 20200359001
    Abstract: A camera module includes a circuit board, two photosensitive chips fixed on a surface of the circuit board, two lens assemblies respectively mounted over the two photosensitive chips, two filter assemblies each including a visible light filter and an infrared filter, and an infrared projection unit fixed on a surface of the circuit board and projecting patterned infrared light. The filter assemblies respectively correspond to the photosensitive chips and the lens assemblies. The visible light filter and the infrared filter of the filter assemblies are switched to be between the lens assembly and the photosensitive chip. When the visible light filters are between the lenses and the photosensitive chips, the photosensitive chips acquire visible light to form a colored 3D image. When the infrared filters are between the lenses and the photosensitive chips, the photosensitive chips acquire reflected patterned infrared light to form an infrared 3D image.
    Type: Application
    Filed: May 30, 2019
    Publication date: November 12, 2020
    Inventors: YI-MOU HUANG, YE-QUANG CHEN, SHIN-WEN CHEN, YU-JUNG CHEN, HO-KAI LIANG
  • Publication number: 20200359002
    Abstract: An imaging apparatus including an imaging lens, and an image sensor array of first and second image sensor units, wherein a single first image sensor unit includes a single first microlens and a plurality of image sensors, a single second image sensor unit includes a single second microlens and a single image sensor, light passing through the imaging lens and reaching each first image sensor unit passes through the first microlens and forms an image on the image sensors constituting the first image sensor unit, light passing through the imaging lens and reaching each second image sensor unit passes through the second microlens and forms an image on the image sensor constituting the second image sensor unit, an inter-unit light shielding layer is formed between the image sensor units, and a light shielding layer is not formed between the image sensor units constituting the first image sensor unit.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: Sony Semiconductor Solutions Corporation
    Inventor: Tomohiro Yamazaki
  • Publication number: 20200359003
    Abstract: A system and method of scanning an environment and acquiring an image is provided. The system includes a mobile device having a camera and a first position indicator. A scanner having a light emitter and a light receiver is provided. The scanner determines coordinates of surfaces in an environment in response to emitting light with the light emitter and receiving light with the light receiver, the scanner having a second position indicator. One or more processors are provided that determine the position of the mobile device and transmit data between the mobile device and the scanner in response to the first position indicator engaging the second position indicator.
    Type: Application
    Filed: July 23, 2020
    Publication date: November 12, 2020
    Inventors: Ahmad Ramadneh, Aleksej Frank, Joao Santos, Oliver Zweigle
  • Publication number: 20200359004
    Abstract: Techniques for capturing three-dimensional image data of a scene and processing light field image data obtained by an optical wavefront sensor in 3D imaging applications are provided. The disclosed techniques provide a depth map of an observable scene from light field information about an optical wavefront emanating from the scene, and make use of color filters forming a color mosaic defining a primary color and one or more secondary colors, and color radial transfer functions calibrated to provide object distance information from the spatio-spectrally sampled pixel data.
    Type: Application
    Filed: December 5, 2018
    Publication date: November 12, 2020
    Inventors: Jonathan Ikola Saari, Ji-ho Cho
  • Publication number: 20200359005
    Abstract: A third imaging unit including a pixel not having a polarization characteristic is interposed between a first imaging unit and a second imaging unit including a pixel having a polarization characteristic for each of a plurality of polarization directions. A depth map is generated from a viewpoint of the first imaging unit by matching processing using a first image generated by the first imaging unit and a second image generated by the second imaging unit. A normal map is generated on the basis of a polarization state of the first image. Integration processing of the depth map and the normal map is performed and a depth map with a high accuracy is generated. The depth map generated by the map integrating unit is converted into a map from a viewpoint of the third imaging unit, and an image free from deterioration can be generated.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Applicant: SONY CORPORATION
    Inventors: Yasutaka HIRASAWA, Yuhi KONDO, Ying LU, Ayaka NAKATANI
  • Publication number: 20200359006
    Abstract: A display device to be disposed in front of eyes of a user includes a display unit having a right-eye region and a left-eye region, a detector configured to detect detection information enabling estimation of a direction of a line of sight of the user, a setting unit configured to set display region information indicating display regions of the right-eye and left-eye regions, and a controller configured to output the display region information and the detection information to a control device. After receiving an image including a right-eye image and a left-eye image corresponding to the display regions indicated by the display region information from the control device, the controller displays the received right-eye image in the display region of the right-eye region indicated by the display region information, and displays the received left-eye image in the display region of the left-eye region indicated by the display region information.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Inventors: Toshihiro YANAGI, Kei TAMURA
  • Publication number: 20200359007
    Abstract: Provided are an information processing apparatus and an information processing method. The information processing apparatus includes: a receiving unit that receives a request including load information regarding a load; and a sending unit that sends a data set in accordance with the request. The data set includes three-dimensional shape data, and left-eye texture data and right-eye texture data. The three-dimensional shape data has a vertex count corresponding to the load information. The left-eye texture data and the right-eye texture data correspond to the three-dimensional shape data.
    Type: Application
    Filed: October 25, 2018
    Publication date: November 12, 2020
    Inventor: NOBUAKI IZUMI
  • Publication number: 20200359008
    Abstract: There is provided an information processing apparatus capable of presenting an object that readily achieves a natural binocular fusion state regardless of distance, an information processing method, and a recording medium. In the information processing apparatus, a left-eye image and a right-eye image are output to acquire position information of an object in a depth direction to be perceived by a user, and a luminance correction region to be subjected to luminance correction is set to at least one of a first display region that is included in a display region for the left-eye image and overlaps a display region for the right-eye image, or a second display region that is included in the display region for the right-eye image and overlaps the display region for the left-eye image, on the basis of the position information. The present technology can be applied to, for example, a transmissive HMD.
    Type: Application
    Filed: December 13, 2018
    Publication date: November 12, 2020
    Applicant: SONY CORPORATION
    Inventors: Kayoko TANAKA, Kazuma AIKI, Hiroshi YUASA, Satoshi NAKANO
  • Publication number: 20200359009
    Abstract: Provided is an apparatus, method, and computer-readable recording medium for analyzing audio/video (AV) output, capable of automatically analyzing an AV output of a sink device. The apparatus for analyzing an AV output includes: a transmitter configured to transmit a high-definition multimedia interface (HDMI) signal generation command to a source device such that an AV screen is output on a sink device; a receiver configured to receive, from a user terminal, data of a mirroring screen corresponding to the AV screen being output on the sink device; and a controller configured to perform analysis by comparing the received data of the mirroring screen with reference data stored in a memory to analyze the responsiveness to an HDMI signal of an HDMI port provided in the sink device.
    Type: Application
    Filed: September 27, 2018
    Publication date: November 12, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyeong Ik KIM, Tae Young YANG
  • Publication number: 20200359010
    Abstract: A video encoding method and apparatus and a video decoding method and apparatus are provided. The video encoding method includes: prediction encoding in units of a coding unit as a data unit for encoding a picture, by using partitions determined based on a first partition mode and a partition level, so as to select a partition for outputting an encoding result from among the determined partitions; and encoding and outputting partition information representing a first partition mode and a partition level of the selected partition. The first partition mode represents a shape and directionality of a partition as a data unit for performing the prediction encoding on the coding unit, and the partition level represents a degree to which the coding unit is split into partitions for detailed motion prediction.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chang-Hyun LEE, Tammy Lee, Jianle Chen, Dae-sung Cho, Woo-jin Han, Il-koo Kim
  • Publication number: 20200359011
    Abstract: Aspects of the disclosure provide methods and apparatuses for video encoding/decoding. In some examples, an apparatus for video decoding includes receiving circuitry and processing circuitry. The processing circuitry decodes prediction information of a current block from a coded video bitstream. The prediction information is indicative of an inter prediction mode and a usage of a position dependent prediction combination (PDPC) in the inter prediction mode. Then, the processing circuitry calculates an intermediate value for a sample in the current block based on neighboring samples of the current block that are selected based on a position of the sample, and combines the intermediate value for the sample with an inter prediction value of the sample to reconstruct the sample.
    Type: Application
    Filed: April 27, 2020
    Publication date: November 12, 2020
    Applicant: Tencent America LLC
    Inventors: Liang Zhao, Xiang Li, Xin Zhao, Shan Liu
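The combination step described above — an intermediate value from position-dependent neighboring samples blended with the inter prediction — can be sketched as follows. The weights (halving every two samples from the block edge) are illustrative, not taken from the publication:

```python
def pdpc_blend(inter_pred, top_ref, left_ref):
    """Combine an inter-predicted block with neighboring reconstructed samples.

    Minimal sketch of a position-dependent prediction combination:
    neighbor weights halve every two samples away from the block
    edge (weight schedule is an illustrative assumption).
    """
    h, w = len(inter_pred), len(inter_pred[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wt = 32 >> min(y // 2, 6)   # weight of the top neighbor
            wl = 32 >> min(x // 2, 6)   # weight of the left neighbor
            # Weighted average of top sample, left sample, and inter prediction.
            out[y][x] = (wt * top_ref[x] + wl * left_ref[y]
                         + (64 - wt - wl) * inter_pred[y][x] + 32) >> 6
    return out
```

Near the top-left corner the neighbors dominate; deep inside the block the inter prediction passes through unchanged.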
  • Publication number: 20200359012
    Abstract: Provided is a method of decoding a video according to an embodiment, the method including determining at least one processing block for splitting the video; determining an order of determining at least one largest coding unit in the at least one processing block; determining at least one largest coding unit on the basis of the determined order; and decoding the determined at least one largest coding unit, wherein the order is one of a plurality of orders for determining a largest coding unit.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki-ho CHOI, Min-woo PARK, Elena ALSHINA, Chan-yul KIM, In-kwon CHOI
  • Publication number: 20200359013
    Abstract: Provided is a method of decoding a video according to an embodiment, the method including determining at least one processing block for splitting the video; determining an order of determining at least one largest coding unit in the at least one processing block; determining at least one largest coding unit on the basis of the determined order; and decoding the determined at least one largest coding unit, wherein the order is one of a plurality of orders for determining a largest coding unit.
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki-ho CHOI, Min-woo PARK, Elena ALSHINA, Chan-yul KIM, In-kwon CHOI
  • Publication number: 20200359014
    Abstract: Provided is a video decoding method including obtaining, from a bitstream, split information indicating whether a current block is to be split; when the split information does not indicate that the current block is to be split, decoding the current block according to encoding information about the current block; and when the split information indicates that the current block is to be split, splitting the current block into at least two lower blocks, obtaining encoding order information indicating an encoding order of the at least two lower blocks of the current block from the bitstream, determining a decoding order of the at least two lower blocks according to the encoding order information, and decoding the at least two lower blocks according to the decoding order.
    Type: Application
    Filed: July 28, 2020
    Publication date: November 12, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yin-ji PIAO, Jie CHEN, Chan-yul KIM
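The decoding-order step above can be sketched in a few lines. The flag semantics (0 = forward, 1 = reversed) are an assumption for illustration:

```python
def lower_block_decode_order(lower_blocks, encoding_order_flag):
    """Return the decoding order of a split block's lower blocks.

    Sketch of the abstract's idea: a flag parsed from the bitstream
    selects between forward and reversed traversal of the lower
    blocks (the two-value flag semantics are assumed here).
    """
    if encoding_order_flag == 0:
        return list(lower_blocks)            # default left-to-right order
    return list(reversed(lower_blocks))      # signaled reversed order
```

Each lower block would then itself be decoded, recursing if its own split information indicates a further split.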
  • Publication number: 20200359015
    Abstract: Provided is a video decoding method including obtaining, from a bitstream, split information indicating whether a current block is to be split; when the split information does not indicate that the current block is to be split, decoding the current block according to encoding information about the current block; and when the split information indicates that the current block is to be split, splitting the current block into at least two lower blocks, obtaining encoding order information indicating an encoding order of the at least two lower blocks of the current block from the bitstream, determining a decoding order of the at least two lower blocks according to the encoding order information, and decoding the at least two lower blocks according to the decoding order.
    Type: Application
    Filed: July 28, 2020
    Publication date: November 12, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yin-ji PIAO, Jie CHEN, Chan-yul KIM
  • Publication number: 20200359016
    Abstract: Innovations in intra-picture prediction with multiple candidate reference lines available are described herein. For example, intra-picture prediction for a current block uses a non-adjacent reference line of sample values to predict the sample values of the current block. This can improve the effectiveness of the intra-picture prediction when the reference line of sample values that is adjacent the current block includes significant capture noise, significant quantization error, or significantly different values (compared to the current block) due to an occlusion.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bin Li, Jizheng Xu, Jiahao Li
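A minimal sketch of prediction from a non-adjacent reference line, using simple DC prediction for concreteness (the function name, the DC choice, and the 2-D list layout are illustrative assumptions):

```python
def intra_dc_from_reference_line(recon, x0, y0, size, line_idx):
    """DC intra prediction of a size x size block from a candidate reference line.

    `line_idx` = 0 selects the row/column adjacent to the block at
    (x0, y0); larger values select non-adjacent lines further away,
    which can help when the adjacent line is noisy or occluded.
    `recon` is the reconstructed picture as a 2-D list (sketch only).
    """
    ry = y0 - 1 - line_idx    # row index of the top reference line
    rx = x0 - 1 - line_idx    # column index of the left reference line
    top = [recon[ry][x0 + i] for i in range(size)]
    left = [recon[y][rx] for y in range(y0, y0 + size)]
    dc = (sum(top) + sum(left) + size) // (2 * size)   # rounded average
    return [[dc] * size for _ in range(size)]
```

A real codec would signal which candidate line was chosen and apply it to all angular modes, not just DC.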
  • Publication number: 20200359017
    Abstract: Innovations in intra-picture prediction with multiple candidate reference lines available are described herein. For example, intra-picture prediction for a current block uses a non-adjacent reference line of sample values to predict the sample values of the current block. This can improve the effectiveness of the intra-picture prediction when the reference line of sample values that is adjacent the current block includes significant capture noise, significant quantization error, or significantly different values (compared to the current block) due to an occlusion.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bin Li, Jizheng Xu, Jiahao Li
  • Publication number: 20200359018
    Abstract: Innovations in intra-picture prediction with multiple candidate reference lines available are described herein. For example, intra-picture prediction for a current block uses a non-adjacent reference line of sample values to predict the sample values of the current block. This can improve the effectiveness of the intra-picture prediction when the reference line of sample values that is adjacent the current block includes significant capture noise, significant quantization error, or significantly different values (compared to the current block) due to an occlusion.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Bin Li, Jizheng Xu, Jiahao Li
  • Publication number: 20200359019
    Abstract: Embodiments of the disclosure provide a method and apparatus for processing a video signal. Particularly, a method for decoding a video signal according to an embodiment of the disclosure may include: determining, among predefined secondary transform sets based on intra-prediction modes of a current block, a secondary transform set applied to the current block; obtaining a first syntax element indicating a secondary transform matrix applied to the current block in the determined secondary transform set; deriving a secondary inverse-transformed block by performing a secondary inverse transform on a left top region of the current block by using the secondary transform matrix specified by the first syntax element; and deriving a residual block of the current block by performing a primary inverse transform on the secondary inverse-transformed block using a primary transform matrix of the current block.
    Type: Application
    Filed: June 10, 2020
    Publication date: November 12, 2020
    Inventors: Moonmo KOO, Mehdi SALEHIFAR, Seunghwan KIM, Jaehyun LIM
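The secondary-inverse-transform step described above touches only the top-left region of the block. A sketch of that flow, with the transform matrix passed in as a plain list-of-lists (the raster-order read-out and write-back are illustrative assumptions):

```python
def apply_secondary_inverse_transform(coeffs, matrix, region=4):
    """Apply a secondary inverse transform to the top-left region of a block.

    Sketch of the abstract's flow: the top-left region x region
    coefficients are read in raster order, multiplied by the
    secondary transform matrix (assumed orthonormal), and written
    back; the rest of the block is untouched.  The primary inverse
    transform would then run on the whole block.
    """
    n = region * region
    vec = [coeffs[y][x] for y in range(region) for x in range(region)]
    out = [sum(matrix[i][j] * vec[j] for j in range(n)) for i in range(n)]
    res = [row[:] for row in coeffs]
    for i, v in enumerate(out):
        res[i // region][i % region] = v
    return res
```

In the abstract, the matrix itself is selected from a set determined by the block's intra-prediction mode and indexed by the first syntax element.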
  • Publication number: 20200359020
    Abstract: The invention relates to a method for encoding/decoding an image. The image decoding method according to the invention comprises the steps of: obtaining diagonal partition information on a current block; determining a diagonal partition structure of the current block using the diagonal partition information; and diagonally partitioning the current block into a first and a second area based on the determined diagonal partition structure, the current block being a leaf node of a square or rectangular partition.
    Type: Application
    Filed: October 16, 2018
    Publication date: November 12, 2020
    Inventors: Yong Jo AHN, Ho Chan RYU
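The diagonal split of a leaf-node block into two areas can be sketched as a per-sample mask. The two-direction encoding below is an assumption for illustration:

```python
def diagonal_partition_mask(w, h, direction=0):
    """Label each sample of a w x h block as area 0 or area 1.

    Sketch of diagonal partitioning: direction 0 splits along the
    top-left-to-bottom-right diagonal, direction 1 along the
    top-right-to-bottom-left diagonal (the two-direction signaling
    is an illustrative assumption).
    """
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if direction == 0:
                # Samples strictly above the main diagonal belong to area 1.
                mask[y][x] = 1 if x * h > y * w else 0
            else:
                # Mirror horizontally for the anti-diagonal split.
                mask[y][x] = 1 if (w - 1 - x) * h > y * w else 0
    return mask
```

Cross-multiplying (`x * h > y * w`) keeps the comparison in integers for non-square blocks.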
  • Publication number: 20200359021
    Abstract: An encoder includes circuitry and memory coupled to the circuitry. The circuitry in operation: determines whether the shape of a current chroma block to be split satisfies a first condition; generates one or more second candidates for a block partitioning method by eliminating one or more predetermined candidates from a plurality of first candidates for a block partitioning method when the current chroma block satisfies the first condition; selects a block partitioning method from among the one or more second candidates; and splits the current chroma block according to the block partitioning method selected.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Inventors: Ryuichi KANOH, Tadamasa TOMA, Kiyofumi ABE, Takahiro NISHI
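The candidate-elimination step above can be sketched as a filter over split types. The concrete condition ("narrow block") and the eliminated set are illustrative assumptions, not the publication's actual rule:

```python
def chroma_split_candidates(block_w, block_h, all_candidates):
    """Filter block-partitioning candidates for a chroma block.

    Sketch of the abstract: when the block shape satisfies a first
    condition (assumed here to be "either side narrower than 4
    samples"), predetermined candidates are removed from the list
    before one is selected.  Condition and removed set are
    illustrative assumptions.
    """
    if min(block_w, block_h) < 4:                  # the assumed first condition
        banned = {"TERNARY_H", "TERNARY_V"}        # assumed eliminated candidates
        return [c for c in all_candidates if c not in banned]
    return list(all_candidates)
```

The encoder would then pick a partitioning method from the surviving candidates and split the chroma block accordingly.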
  • Publication number: 20200359022
    Abstract: Provided is an encoder which includes circuitry and memory. Using the memory, the circuitry splits an image block into a plurality of partitions, obtains a prediction image for a partition, and encodes the image block using the prediction image. When the partition is not a non-rectangular partition, the circuitry obtains (i) a first prediction image for the partition, (ii) a gradient image for the first prediction image, and (iii) a second prediction image as the prediction image using the first prediction image and the gradient image. When the partition is a non-rectangular partition, the circuitry obtains the first prediction image as the prediction image without using the gradient image.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Inventors: Kiyofumi ABE, Takahiro NISHI, Tadamasa TOMA, Ryuichi KANOH, Chong Soon LIM, Ru Ling LIAO, Hai Wei SUN, Sughosh Pavan SHASHIDHAR, Han Boon TEO, Jing Ya LI
  • Publication number: 20200359023
    Abstract: Provided is an encoder which includes circuitry and memory. The circuitry encodes an image block using the memory. In encoding the image block, the circuitry: obtains one or more size parameters related to a size of the image block; determines whether the one or more size parameters and one or more thresholds satisfy a determined relationship; encodes a split parameter when the one or more size parameters and the one or more thresholds are determined to satisfy the determined relationship, the split parameter indicating whether the image block is to be split into a plurality of partitions including a non-rectangular partition; and encodes the image block after splitting the image block into the plurality of partitions when the split parameter indicates that the image block is to be split into the plurality of partitions.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Inventors: Kiyofumi ABE, Takahiro NISHI, Tadamasa TOMA, Ryuichi KANOH, Chong Soon LIM, Ru Ling LIAO, Hai Wei SUN, Sughosh Pavan SHASHIDHAR, Han Boon TEO, Jing Ya LI
  • Publication number: 20200359024
    Abstract: This disclosure relates to a method of coding of video data and more particularly to techniques for deriving quantization parameters. The method of coding of video data comprises: determining a predictive quantization parameter for a current video block based at least in part on a quantization parameter associated with a reference video block, a partitioning used to generate the reference video block, and a partitioning used to generate the current video block; and generating a quantization parameter for the current video block based at least in part on the determined predictive quantization parameter.
    Type: Application
    Filed: January 29, 2019
    Publication date: November 12, 2020
    Inventors: Kiran Mukesh MISRA, Christopher Andrew SEGALL, Jie ZHAO, Weijia ZHU
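The QP derivation above conditions the prediction on both blocks' partitionings. A minimal sketch, reducing "partitioning" to a split depth and using a linear adjustment (the `slope` rule is an illustrative assumption):

```python
def predict_qp(ref_qp, ref_depth, cur_depth, slope=2):
    """Predict a quantization parameter for the current block.

    Sketch of the abstract's idea: start from the reference block's
    QP and adjust by how much the current block's partitioning depth
    differs from the reference's.  The linear slope adjustment is an
    assumption for illustration, not the publication's rule.
    """
    return ref_qp + slope * (cur_depth - ref_depth)
```

A deeper split than the reference nudges the predicted QP up; a shallower one nudges it down, and the coded delta would then be taken against this prediction.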
  • Publication number: 20200359025
    Abstract: The present embodiments relate to a method and an apparatus for efficiently encoding and decoding video using multiple transforms. For example, a horizontal transform or a vertical transform may be selected from a set of transforms to transform prediction residuals of a current block of a video picture being encoded. In one example, the set of transforms includes: 1) only one transform with a constant lowest frequency basis function, 2) one or more transforms with an increasing lowest frequency basis function, and 3) only one transform with a decreasing lowest frequency basis function. In one embodiment, the transform with a constant lowest frequency basis function is DCT-II, the transform with an increasing lowest frequency basis function is DST-VII (and DST-IV), and the transform with a decreasing lowest frequency basis function is DCT-VIII. At the decoder side, the corresponding inverse transforms are selected.
    Type: Application
    Filed: December 19, 2018
    Publication date: November 12, 2020
    Inventors: Karam NASER, Fabrice Leleannec, Franck Galpin
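The classification above — constant, increasing, or decreasing lowest-frequency basis function — can be checked numerically. The formulas below are the standard DCT-II/DST-VII/DCT-VIII definitions used in VVC-style codecs, shown here only to illustrate the abstract's grouping:

```python
import math

def lowest_freq_basis(transform, n_points):
    """First (lowest-frequency) basis function of DCT-II, DST-VII, or DCT-VIII."""
    if transform == "DCT-II":
        # k = 0 row is constant.
        return [math.sqrt(1.0 / n_points)] * n_points
    s = math.sqrt(4.0 / (2 * n_points + 1))
    if transform == "DST-VII":
        # k = 0 row increases across the block.
        return [s * math.sin(math.pi * (n + 1) / (2 * n_points + 1))
                for n in range(n_points)]
    if transform == "DCT-VIII":
        # k = 0 row decreases across the block.
        return [s * math.cos(math.pi * (2 * n + 1) / (4 * n_points + 2))
                for n in range(n_points)]
    raise ValueError(transform)
```

Matching the basis shape to the residual shape (e.g. residuals growing away from the predictor) is what motivates selecting among these transforms per block.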
  • Publication number: 20200359026
    Abstract: The quantization parameter QP is well-known in digital video compression as an indication of picture quality. Digital symbols representing a moving image are quantized with a quantizing step that is a function QSN of the quantization parameter QP, which function QSN has been normalized to the most significant bit of the bit depth of the digital symbols. As a result, the effect of a given QP is essentially independent of bit depth: a particular QP value has a standard effect on image quality, regardless of bit depth. The invention is useful, for example, in encoding and decoding at different bit depths, to generate compatible bitstreams having different bit depths, and to allow different bit depths for different components of a video signal by compressing each with the same fidelity (i.e., the same QP).
    Type: Application
    Filed: July 27, 2020
    Publication date: November 12, 2020
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Walter C. GISH, Christopher J. VOGT
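The bit-depth normalization described above can be sketched with a concrete step function. The HEVC-style curve (step doubling every 6 QP) is an assumed stand-in for the publication's QSN, used only to show the normalization idea:

```python
def normalized_qstep(qp, bit_depth):
    """Quantizer step normalized to the signal's bit depth.

    Sketch of the abstract's idea: the base step follows an assumed
    HEVC-style 2**((qp - 4) / 6) curve, and scaling by
    2**(bit_depth - 8) keeps a given QP at the same relative
    fidelity regardless of bit depth.
    """
    base = 2.0 ** ((qp - 4) / 6.0)        # step doubles every 6 QP
    return base * (2 ** (bit_depth - 8))  # normalize to the bit depth
```

Under this normalization, QP = 22 quantizes a 10-bit signal four times as coarsely in absolute terms as an 8-bit signal — exactly the factor by which the 10-bit signal's amplitude range is larger, so the relative fidelity matches.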