Abstract: The various implementations described herein include methods, devices, and systems for displaying live and recorded video from a remote camera. In one aspect, a method includes: (1) displaying a portion of a recorded video feed from the video camera, a live video affordance, and an event history affordance; (2) in response to a selection of the live video affordance: (a) requesting and displaying a live video feed; (b) continuing to display the event history affordance; and (c) ceasing to display the live video affordance; (3) in response to receiving a user selection of the event history affordance: displaying a plurality of detected events.
Type:
Grant
Filed:
October 19, 2018
Date of Patent:
October 22, 2019
Assignee:
GOOGLE LLC
Inventors:
Jason N. Laska, Greg R. Nelson, Greg Duffy
Abstract: Systems and methods are provided for mitigating image noise. Portion motion vectors are identified for a plurality of portions of a first frame relating to a setting, the portion motion vectors indicating motion from the first frame to a second frame. A global motion vector is determined that indicates global motion spanning the first frame to the second frame. A determination is made that a first portion of the plurality of portions has a portion motion vector that is consistent with the global motion vector. Motion compensation is performed on the first frame and the second frame to align the second frame with the first frame at the first portion that is consistent with the global motion vector, to generate an output image that is based on both of the two frames. An inconsistent second portion of one of the frames is incorporated into the output image.
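The gating logic this abstract describes, blending two frames only where a block's motion agrees with the global motion, can be sketched roughly as follows. The block size, tolerance, and all names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def denoise_pair(frame1, frame2, block_mvs, global_mv, tol=1.0, bs=8):
    """Blend two grayscale frames where a block's motion vector is
    consistent with the global motion vector; leave inconsistent blocks
    as they appear in frame2. block_mvs maps (block_row, block_col) to a
    (dy, dx) vector describing motion from frame1 to frame2.
    All names and parameters here are hypothetical."""
    out = frame2.astype(np.float32).copy()
    h, w = frame1.shape
    for (br, bc), (dy, dx) in block_mvs.items():
        r, c = br * bs, bc * bs
        if np.hypot(dy - global_mv[0], dx - global_mv[1]) <= tol:
            # motion-compensate frame1's block onto frame2's grid, then average
            sr, sc = r - int(round(dy)), c - int(round(dx))
            if 0 <= sr <= h - bs and 0 <= sc <= w - bs:
                aligned = frame1[sr:sr + bs, sc:sc + bs].astype(np.float32)
                out[r:r + bs, c:c + bs] = 0.5 * (aligned + out[r:r + bs, c:c + bs])
    return out
```

Averaging the aligned, consistent blocks halves the independent sensor noise there, while inconsistent blocks pass through from a single frame untouched, matching the abstract's description.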
Abstract: An image coding method includes: determining a first temporal distance between a current picture to be coded and a first reference picture; determining a second temporal distance between the first reference picture and a second reference picture; judging whether or not the first temporal distance and the second temporal distance satisfy a predetermined condition, and calculating a first weight for the first reference picture and a second weight for the second reference picture based on a result of the judgment; and generating a predictive image for the current block by adding a first block included in the first reference picture and a second block included in the second reference picture, the first block being weighted by the first weight, and the second block being weighted by the second weight.
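The distance-based weighting in this abstract resembles implicit weighted bi-prediction. A minimal sketch, assuming picture order counts (POCs) stand in for temporal positions and using a simple fallback to equal weights when the distances are degenerate (the patent's actual predetermined condition is not given in the abstract):

```python
def distance_weights(poc_cur, poc_ref1, poc_ref2):
    """Derive blending weights from the two temporal distances the
    abstract describes. The degenerate-case condition below is an
    assumption for illustration."""
    td = poc_ref2 - poc_ref1   # second temporal distance (ref1 -> ref2)
    tb = poc_cur - poc_ref1    # first temporal distance (ref1 -> current)
    if td == 0 or not (0 < abs(tb) < 2 * abs(td)):
        return 0.5, 0.5        # fall back to equal weighting
    w2 = tb / td               # weight for the second reference
    return 1.0 - w2, w2        # (first weight, second weight)

def weighted_prediction(block1, block2, w1, w2):
    """Generate the predictive block by adding the two weighted blocks."""
    return [w1 * a + w2 * b for a, b in zip(block1, block2)]
```

For a current picture midway between its two references the weights come out equal, and they shift linearly toward the nearer reference otherwise.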
Abstract: Panoramic three-dimensional (3D) imaging systems for capturing stereoscopic video images and presenting 3D video images to a viewer based on the captured stereoscopic video images are disclosed. In some embodiments, the described systems use a proposed parallax vector to encode captured stereoscopic video images in real-time when the raw video images are being captured. During panoramic video playback, the computed parallax vectors are used to reproduce stereoscopic video images based on the angle of view of an audience relative to a display screen before outputting the stereoscopic video images to the left and right eyes of the audience to create the realistic 3D experience.
Abstract: An electronic device for sending a message is described. The electronic device includes a processor and instructions stored in memory that is in electronic communication with the processor. The electronic device determines whether to include a common decoding unit CPB removal delay parameter in a picture timing Supplemental Enhancement Information (SEI) message. The electronic device also generates either a common decoding unit CPB removal delay parameter or a separate decoding unit CPB removal delay parameter for each decoding unit in the access unit. The electronic device also sends the picture timing SEI message with the common decoding unit CPB removal delay parameter or the decoding unit CPB removal delay parameters.
Abstract: Line scanning of a radiated, defined strip pattern (2) facilitates determining the position of an edge of each strip (5) by taking images of the line pattern (2) from a different viewing angle, determining an elevation profile along an image line (3) therefrom, and, by concatenation of such profiles, determining the 3-D elevation profile over a surface (1).
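The geometry behind the per-line elevation profile can be illustrated with the standard light-sectioning relation, assuming the camera views the projected stripes at a known angle to the projection direction (this simple triangulation model is an assumption for illustration, not taken from the patent):

```python
import math

def elevation(edge_shift, view_angle_deg):
    """Light-sectioning triangulation: a stripe edge observed shifted
    laterally by edge_shift, when viewed at view_angle_deg to the
    projection direction, lies at elevation edge_shift / tan(angle).
    Names and the exact model are illustrative assumptions."""
    return edge_shift / math.tan(math.radians(view_angle_deg))

def line_profile(edge_shifts, view_angle_deg):
    # Elevation profile along one image line; concatenating the profiles
    # of successive lines yields the surface, as the abstract describes.
    return [elevation(s, view_angle_deg) for s in edge_shifts]
```

At a 45-degree viewing angle the lateral shift equals the elevation directly, which makes the relation easy to sanity-check.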
Abstract: Techniques and mechanisms for determining a level of accommodation to be provided by an eye-mountable device (EMD) for a user viewing a 3D stereoscopic display. In one embodiment, the EMD is disposed in or on an eye of the user, and an angle of vergence between the user's eyes is detected. Based on the angle of vergence, the EMD provides a level of accommodation that results in the user having a blurred viewing of an object in the stereoscopic display. The blur induces the user to change the accommodation provided by the eye on which, or in which, the EMD is disposed. Inducing the user to perform such a change in the eye's accommodation more closely approximates what the user would do when viewing real world physical objects. This tends to result in a better viewing experience for the user. In another embodiment, the angle of vergence is detected based on exposure of the EMD to a magnetic field.
Abstract: An image processing apparatus for processing an image according to one embodiment includes a generating unit, a display control unit, and a change receiving unit. The display control unit generates a synthetic image providing a view of a vehicle from a virtual viewpoint, based on a plurality of onboard camera images. The display control unit displays the generated synthetic image on a display unit. The change receiving unit receives a change in the relative positional relation between an image region that is based on one of the camera images and image regions that are based on the other camera images in the synthetic image. The generating unit generates a synthetic image, based on a changed positional relation every time a change in the positional relation is received.
Abstract: Techniques and systems are described for mapping 360-degree video data to a truncated square pyramid shape. A 360-degree video frame can include 360-degrees' worth of pixel data, and thus be spherical in shape. By mapping the spherical video data to the planes provided by a truncated square pyramid, the total size of the 360-degree video frame can be reduced. The planes of the truncated square pyramid can be oriented such that the base of the truncated square pyramid represents a front view and the top of the truncated square pyramid represents a back view. In this way, the front view can be captured at full resolution, the back view can be captured at reduced resolution, and the left, right, up, and bottom views can be captured at decreasing resolutions. Frame packing structures can also be defined for 360-degree video data that has been mapped to a truncated square pyramid shape.
Type:
Grant
Filed:
August 31, 2016
Date of Patent:
June 11, 2019
Assignee:
Qualcomm Incorporated
Inventors:
Geert Van der Auwera, Muhammed Coban, Marta Karczewicz
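One way to picture the resolution budget of the truncated-square-pyramid layout above is to tabulate per-face sample counts. The specific ratios below are illustrative assumptions; the abstract fixes only that resolution is full at the front, lowest at the back, and decreasing on the remaining faces:

```python
def face_resolutions(base):
    """Illustrative stored resolutions for a truncated-square-pyramid
    mapping of 360-degree video: full resolution on the front face
    (the pyramid's base), heavily reduced on the back face (the
    truncated top), intermediate on the four side faces. The exact
    ratios are assumptions for illustration."""
    return {
        "front":  (base, base),
        "back":   (base // 4, base // 4),
        "left":   (base, base // 2),
        "right":  (base, base // 2),
        "up":     (base, base // 2),
        "bottom": (base, base // 2),
    }

def total_samples(resolutions):
    # Total stored samples across all faces of the mapping.
    return sum(w * h for w, h in resolutions.values())
```

With base = 16 this budget stores 784 samples against 1,536 for six full cube-map faces, roughly halving the frame while preserving full resolution in the viewing direction, which is the size reduction the abstract is after.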
Abstract: A method of depth map coding for a three-dimensional video coding system incorporating a consistent texture merging candidate is disclosed. According to the first embodiment, the current depth block will only inherit the motion information of the collocated texture block if one reference depth picture has the same POC (picture order count) and ViewId (view identifier) as the reference texture picture of the collocated texture block. In another embodiment, the encoder assigns the same total number of reference pictures for both the depth component and the collocated texture component for each reference list. Furthermore, the POC (picture order count) and the ViewId (view identifier) for both the depth image unit and the texture image unit are assigned to be the same for each reference list and for each reference picture.
Abstract: Certain aspects of the present technology involve automated capture of several image frames (e.g., simultaneously in a single exposure, or in a burst of exposures), and application of a data-extraction process (e.g., watermark decoding) to each such image. Other aspects of the technology involve capturing a single scene at two different resolutions, and submitting imagery at both resolutions for watermark decoding. Still other aspects of the technology involve increasing the signal-to-noise ratio of a watermark signal by subtracting one image from another. Yet other aspects of the technology involve receiving focus distance data from a camera, and employing such data in extracting information from camera imagery. Smartphone camera APIs can be employed to simplify implementation of such methods. A great number of features and arrangements are also detailed.
Abstract: A solution is provided to estimate motion vectors of a video. A multistage motion vector prediction engine is configured to estimate multiple best block-matching motion vectors for each block in each video frame of the video. For each stage of the motion vector estimation for a block of a video frame, the prediction engine selects a test vector from a predictor set of test vectors, computes a rate-distortion optimization (RDO) based metric for the selected test vector, and selects a subset of test vectors as individual best matched motion vectors based on the RDO based metric. The selected individual best matched motion vectors are compared and a total best matched motion vector is selected based on the comparison. The prediction engine then iteratively applies one or more global matching criteria to the selected best matched motion vector to select a best matched motion vector for the block of pixels.
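A single stage of the described search can be sketched as scoring each test vector with an RDO-style cost J = SAD + lambda * rate and keeping the best few. The rate proxy, the lambda value, and all names below are assumptions for illustration:

```python
def sad(block_a, block_b):
    """Sum of absolute differences: the distortion term."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def mv_rate(mv, predictor):
    # Crude rate proxy: cost of coding the vector difference against a
    # predictor (an assumption; real codecs use entropy-coded bit costs).
    return abs(mv[0] - predictor[0]) + abs(mv[1] - predictor[1])

def select_stage(block, ref_blocks, test_vectors, predictor, lam=4.0, keep=2):
    """One stage of the multistage search: score every test vector with
    J = SAD + lam * rate and return the `keep` best matched vectors.
    ref_blocks[mv] returns the reference block a vector points to."""
    scored = sorted(
        (sad(block, ref_blocks[mv]) + lam * mv_rate(mv, predictor), mv)
        for mv in test_vectors
    )
    return [mv for _, mv in scored[:keep]]
```

Subsequent stages would re-run the scoring on a refined predictor set around the survivors, and a final pass would apply the global matching criteria the abstract mentions.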
Abstract: Example techniques are described to determine transforms to be used during video encoding and video decoding. A video encoder and a video decoder may select transform subsets that each identify one or more candidate transforms. The video encoder and the video decoder may determine transforms from the selected transform subsets.
Type:
Grant
Filed:
January 25, 2016
Date of Patent:
May 28, 2019
Assignee:
QUALCOMM Incorporated
Inventors:
Xin Zhao, Sungwon Lee, Jianle Chen, Li Zhang, Xiang Li, Ying Chen, Marta Karczewicz, Hongbin Liu
Abstract: A P frame-based multi-hypothesis motion compensation method includes: taking an encoded image block adjacent to a current image block as a reference image block and obtaining a first motion vector of the current image block by using a motion vector of the reference image block, the first motion vector pointing to a first prediction block; taking the first motion vector as a reference value and performing joint motion estimation on the current image block to obtain a second motion vector of the current image block, the second motion vector pointing to a second prediction block; and performing weighted averaging on the first prediction block and the second prediction block to obtain a final prediction block of the current image block. The method increases the accuracy of the obtained prediction block of the current image block without increasing the code rate.
Type:
Grant
Filed:
January 26, 2016
Date of Patent:
May 21, 2019
Assignee:
PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
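The three steps of the multi-hypothesis method above can be sketched as follows. Here fetch_block and joint_estimate are placeholders for codec internals (reference-block fetch and joint motion estimation), and the equal 1/2 weights are an assumption; the abstract specifies only a weighted average:

```python
def multi_hypothesis_block(neighbor_mv, fetch_block, joint_estimate):
    """Sketch of the P-frame multi-hypothesis flow in the abstract:
    (1) inherit the adjacent encoded block's motion vector,
    (2) refine it by joint motion estimation using that vector as a
        reference value, and
    (3) blend the two prediction blocks by weighted averaging.
    fetch_block(mv) returns the prediction block a vector points to;
    joint_estimate(mv) returns the second, jointly estimated vector.
    Both callables and the equal weights are illustrative assumptions."""
    mv1 = neighbor_mv
    pred1 = fetch_block(mv1)          # first prediction block
    mv2 = joint_estimate(mv1)         # second motion vector
    pred2 = fetch_block(mv2)          # second prediction block
    return [0.5 * (a + b) for a, b in zip(pred1, pred2)]
```

Because the first vector is inherited rather than signaled, the second hypothesis improves the prediction without extra motion bits, which is how the method avoids increasing the code rate.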
Abstract: A smart home security device includes a security device provided, in its interior, with multiple modules connected to a main control board. The main control board allows an insertion trough provided in the security device to be used in combination with a corresponding expansion module to acquire detection signals concerning the body temperature of a target as well as ambient temperature and humidity, with reduced interference, increased accuracy, and increased signal transmission speed. An infrared control module provided in the security device is operable to drive a related electrical appliance so that a family member is well cared for as part of smart home caregiving.
Abstract: A multi-camera vision system of a vehicle includes at least four cameras disposed at a vehicle and having respective fields of view exterior of the vehicle. Each of the cameras is operable to capture image data representative of the respective field of view. Captured image data is compressed at the respective camera and the compressed image data is communicated to a control unit. The control unit includes an image processor operable to process image data. The image processor processes image data frame by frame using an indirect context model and without time-wise dependency.
Abstract: An apparatus comprising a first detection device, a second detection device and a processing circuit. The first detection device may be configured to generate a first signal in response to a first type of input. The second detection device may be configured to generate a second signal in response to a second type of input. The processing circuit may be configured to (i) determine whether the first signal is a known type of signal, (ii) determine whether the second signal is a known type of signal and (iii) generate a warning signal in response to the first signal and the second signal.
Type:
Grant
Filed:
October 21, 2014
Date of Patent:
May 14, 2019
Assignee:
KUNA SYSTEMS CORPORATION
Inventors:
Sai-Wai Fu, Haomiao Huang, Harold G. Sampson
Abstract: Provided are an apparatus and method for reproducing a three-dimensional (3D) image. A 3D image reproduction apparatus may include an image selector configured to select a left-eye image and a right-eye image based on a broadcast standard of a broadcast stream, and an image outputter configured to output the 3D image by synthesizing the left-eye image and the right-eye image in a 3D image format.
Type:
Grant
Filed:
June 27, 2014
Date of Patent:
May 7, 2019
Assignees:
Electronics and Telecommunications Research Institute, Hidea Solutions Co., Ltd., Kookmin University Industry Academy Cooperation Foundation
Inventors:
Joo Young Lee, Sung Hoon Kim, Se Yoon Jeong, Jin Soo Choi, Jin Woong Kim, Suk Jin Hong, Jin Suk Kwak, Dong Wook Kang, Kyeong Hoon Jung
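As one concrete example of the synthesis step above, the selected eye images can be packed into a side-by-side frame. Side-by-side is a common broadcast 3D format, but the abstract does not commit to a particular one, so the format choice and the naive subsampling here are assumptions:

```python
import numpy as np

def synthesize_side_by_side(left, right):
    """Pack the selected left-eye and right-eye images into a single
    side-by-side 3D frame of the original width, subsampling each image
    horizontally by 2. (Illustrative; production systems would filter
    before subsampling to avoid aliasing.)"""
    half_left = left[:, ::2]    # left-eye image in the left half
    half_right = right[:, ::2]  # right-eye image in the right half
    return np.concatenate([half_left, half_right], axis=1)
```

A display that understands the format then splits the frame down the middle and routes each half to the corresponding eye.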
Abstract: Provided are methods and apparatus for encoding and decoding motion information. The method of encoding motion information includes: obtaining motion information candidates by using motion information of prediction units that are temporally or spatially related to a current prediction unit; adding, when the number of motion information candidates is smaller than a predetermined number n, alternative motion information so that the number of candidates reaches the predetermined number n; determining motion information for the current prediction unit from among the n candidates; and encoding index information indicating the determined motion information as the motion information of the current prediction unit.
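The candidate-list construction and padding this abstract outlines can be sketched as follows. The zero-vector fallback is a common choice for the "alternative motion information" but is an assumption here, as are all names:

```python
def build_candidate_list(spatial_mvs, temporal_mvs, n):
    """Gather motion information from spatially and temporally related
    prediction units, drop duplicates, then pad with alternative motion
    information (here a zero vector) until the list holds exactly n
    entries, so a fixed-length index can be coded."""
    candidates = []
    for mv in spatial_mvs + temporal_mvs:
        if mv not in candidates:
            candidates.append(mv)
        if len(candidates) == n:
            return candidates
    while len(candidates) < n:
        candidates.append((0, 0))   # alternative motion information
    return candidates
```

Because encoder and decoder run the same construction, both always agree on a list of exactly n entries, and the encoded index unambiguously selects the chosen motion information.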
Abstract: A method for tracking trackable objects using a medical tracking device, the tracking device being a camera or an EM transmitter, during a medical workflow comprising a plurality of workflow steps, wherein each trackable object has at least one marker and the method comprises the steps of: acquiring a set of tracking device positions, wherein each tracking device position is associated with at least one workflow step; identifying a workflow; sequentially and automatically moving the tracking device to the positions associated with the workflow steps; and performing a tracking step only when the tracking device is in a fixed position.
Type:
Grant
Filed:
January 12, 2012
Date of Patent:
March 19, 2019
Assignee:
Brainlab AG
Inventors:
Robert Schmidt, Johannes Manus, Fritz Vollmer