Abstract: The invention relates to methods for hiding values of a hierarchically layered coding unit in other values comprised by the coding unit (encoding methods). Furthermore, the invention also relates to methods for reconstructing hidden data from an encoded coding unit (decoding methods). The invention also relates to the implementation of these encoding and/or decoding methods in an apparatus and on a (non-transitory) computer readable medium. According to the invention, data are hidden in values of different layers of a hierarchically structured coding unit.
Abstract: A method of providing advertising from a central database server connected to a global computer network to distributed sites via interactive television. A representative icon is presented to a subscriber on a television screen indicating an advertisement. When the icon is selected, an advertisement information detail is retrieved from storage in a local memory, or from the central database server and presented to the subscriber on the television screen.
Type:
Grant
Filed:
September 28, 2016
Date of Patent:
August 27, 2019
Assignee:
YOUR CHOICE INTERACTIVE, INC.
Inventors:
Peter M. Redling, Jackie Skipper Barrios
Abstract: Provided is an apparatus for compensating image distortion according to exemplary embodiments of the present invention. The apparatus, which compensates distortion of an image including a plurality of image division units, includes: a compensation rate setting unit setting a variable distortion compensation rate so that the plurality of respective image division units are compensated at different ratios; and a compensation unit compensating the plurality of image division units according to the variable distortion compensation rate set by the compensation rate setting unit.
Abstract: The present disclosure generally relates to displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
Type:
Grant
Filed:
August 23, 2018
Date of Patent:
August 6, 2019
Assignee:
Apple Inc.
Inventors:
Marcel Van Os, Jessica Aboukasm, David R. Black, Robert Chinn, Gregory L. Dudey, Katherine K. Ernst, Grant Paul, William A. Sorrentino, III, Brian E. Walsh, Jean-Francois Albouze, Jae Woo Chang, Aurelio Guzman, Christopher J. Moulios, Joanna M. Newman, Nicolas Scapel, Joseph-Alexander P. Weil, Christopher Wilson
Abstract: An image control unit of an image display device provides, on display screens of a display unit, a captured-image-only region in which only a captured image is displayed when the display unit displays the captured image, and multiple-image regions in which the captured image or a different image that is different from the captured image is selectively displayed when the display unit displays the captured image. In an embodiment, the image control unit fixes a position of the captured-image-only region on the display screens.
Abstract: An image processing device includes a division unit which divides an input image into a first content area and a first background area; a generation unit which corrects a first content area pixel included in the first content area by using a first peripheral pixel, existing in the vicinity of the first content area, from among first background area pixels included in the first background area, and generates a first background image formed from the corrected first content area pixel and the first background area pixel; and a removal unit which removes the first background image from the input image.
Abstract: A computer system and method providing for viewing and switching of audio-video data. The system comprises: a plurality of audio/video sources containing information referring to an event; a streaming server streaming the contents of a first audio signal and a first video signal from the audio and video sources to a user; a feed distributor controllably feeding the first audio signal and first video signal to the streaming server; and a user-operated control unit communicating with the feed distributor and controlling its operation, so as to instruct the feed distributor to switch between audio signals or between video signals. Switching between audio signals occurs without altering the video signals, and switching between video signals occurs without altering the audio signals.
Type:
Grant
Filed:
January 12, 2016
Date of Patent:
May 28, 2019
Inventors:
Flippo Costanzo, Saverio Roncolini, Antonio Rossi
Abstract: A system and method of interacting with a virtual object in a virtual environment using physical movement. The virtual scene contains a 3D object that appears to extend forward or above the plane of the display. A sensor array is provided that monitors an area proximate the display. The sensor array can detect the presence and position of an object that enters the area. Action points are programmed in, on, or near the virtual objects. Each action point has virtual coordinates in said virtual scene that correspond to real coordinates within the monitored area. Subroutines are activated when the sensor array detects an object that moves to real coordinates that correspond to the virtual coordinates of the action points.
Abstract: A system and/or method configured to determine a sample frame order for analyzing a video. The video may have multiple frames ordered in a sequence from a beginning to an end. A first sample frame order for analyzing the video may be determined. Determining the first sample frame order may include determining an initial frame for a first iteration, and determining secondary frames for a second iteration. Determining the initial frame and the secondary frames may be based on a function of frame position in the sequence of frames. The initial frame may be associated with a first sample position, and the secondary frames may be associated with secondary sample positions in the sample frame order. A first feature of the video may be determined based on an analysis of the frames in the video performed on the frames in the first sample frame order.
Type:
Grant
Filed:
September 21, 2016
Date of Patent:
May 7, 2019
Assignee:
GoPro, Inc.
Inventors:
Jonathan Wills, Daniel Tse, Desmond Chik
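The coarse-to-fine frame ordering this abstract hints at can be sketched in a few lines of Python. The abstract does not specify the claimed position function; the bisection-style rule below, where each iteration halves the stride between sampled positions, is purely an assumption for illustration:

```python
def sample_frame_order(num_frames):
    """Return a coarse-to-fine ordering of frame indices.

    Assumed reading of the abstract: the first iteration picks an
    initial frame, and each later iteration halves the stride, so the
    secondary frames spread evenly across the sequence before any
    neighborhood is sampled densely.
    """
    order, seen = [], set()
    stride = num_frames
    while stride >= 1:
        for i in range(0, num_frames, stride):
            if i not in seen:  # skip positions emitted by earlier iterations
                seen.add(i)
                order.append(i)
        if stride == 1:
            break
        stride //= 2
    return order
```

With such an order, a feature computed on any prefix of the samples already reflects the whole video, since early samples cover it end to end.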
Abstract: A method and apparatus for providing additional information included in a video displayed on a display device using visible light communication (VLC). A data packet including video data and additional information for an object included in the video data is received. The video data is extracted from the data packet and decoded. The additional information is likewise extracted from the data packet and decoded. The decoded video data is output through the display device and, at the same time, the additional information for a particular object included in the video is transmitted, based on a VLC protocol, using a light emitting device provided in the display device. The additional information providing apparatus includes an image sensor module, a display module, a visible light receiving module, an additional information manager, and a controller.
Type:
Grant
Filed:
January 16, 2015
Date of Patent:
April 23, 2019
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Jong-Hoon Ann, Eun-Tae Won, Jae-Seung Son
Abstract: A system and/or method configured to determine a sample frame order for analyzing a video. The video may have multiple frames ordered in a sequence from a beginning to an end. Segments of the video may be determined based on an analysis of the frames of the video. A first sample frame order for analyzing the video may be determined based on multiple iterations performed on individual ones of the segments of the video. Determining the first sample frame order may include determining initial frames for a first iteration, and determining secondary frames for a second iteration based on a function of frame position in the sequence of frames. The initial frames and the secondary frames may be associated with sample positions in the sample frame order. A first feature of the video may be determined based on an analysis of the frames performed in the first sample frame order.
Type:
Grant
Filed:
September 21, 2016
Date of Patent:
April 23, 2019
Assignee:
GoPro, Inc.
Inventors:
Jonathan Wills, Daniel Tse, Desmond Chik
Abstract: A method is provided for delivering targeted advertisements into a QAM or IP stream with accurate synchronization. The method includes synchronizing the internal content of an IP stream delivering video with an advertisement (ad) stream by providing content information in the IP stream and the network stream, the content information including positional information and/or referential information, wherein the positional information is inserted at key locations identified with a presentation time stamp (PTS) value, and wherein the referential information refers to a position in the IP stream or the ad stream, the referential information including either a PTS value or a frame count.
Abstract: Systems and methods for creating and distributing professional-quality pictorial souvenirs giving the illusion that guests of a facility were imaged at other locations, including making initial arrangements with guests, showing selections of background scene images at video displays and flashing chroma key images interspersed with frames of the scene images, taking key guest images in sync with the flashed key images, extracting guest image content from the key guest images and merging it into selected scene images, showing preview merged images for guest selection, and providing souvenir portfolios that include merged images to guests or designees after making financial arrangements, including for payment to third parties for copyright content included in the souvenirs and with advertisers for promotional material included in the souvenirs.
Abstract: The content reproduction method includes receiving a select signal for selecting one or more pieces of content; and reproducing the selected pieces of content along with one or more pieces of content that were generated or reproduced together with the selected pieces of content within a given temporal range.
Abstract: Techniques and devices for creating a Forward-Reverse Loop output video and other output video variations. A pipeline may include obtaining input video and determining a start frame within the input video and a frame length parameter based on a temporal discontinuity minimization. The selected start frame and the frame length parameter may provide a reversal point within the Forward-Reverse Loop output video. The Forward-Reverse Loop output video may include a forward segment that begins at the start frame and ends at the reversal point, and a reverse segment that starts after the reversal point and plays back one or more frames in the forward segment in a reverse order. The pipeline for generating the Forward-Reverse Loop output video may be part of a shared resource architecture that generates other types of output video variations, such as AutoLoop output videos and Long Exposure output videos.
Type:
Grant
Filed:
August 16, 2017
Date of Patent:
January 8, 2019
Assignee:
Apple Inc.
Inventors:
Arwen V. Bradley, Jason Klivington, Rudolph van der Merwe, Douglas P. Mitchell, Amir Hoffnung, Behkish J. Manzari, Charles A. Mezak, Matan Stauber, Ran Margolin, Etienne Guerard, Piotr Stanczyk
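As an illustration of the forward-plus-reverse output structure described in this abstract (not of the patented discontinuity-minimization step), the following Python sketch assumes the start frame and frame length are already chosen; the reversal-point and first frames are not duplicated, so consecutive cycles tile seamlessly:

```python
def forward_reverse_loop(frames, start, length):
    """One cycle of a Forward-Reverse ("boomerang") loop.

    `start`/`length` stand in for the start frame and frame length
    parameter the abstract says are found by temporal-discontinuity
    minimization; here they are simply given.
    """
    forward = frames[start:start + length]  # plays up to the reversal point
    reverse = forward[-2:0:-1]              # plays back down, endpoints excluded
    return forward + reverse
```

Repeating the returned cycle back to back yields continuous forward-reverse playback with no stuttered frame at either turnaround.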
Abstract: Disclosed are a system and a method for managing a set of videos originating from a camera setup having a plurality of cameras. The system and method provide a mesh of graphical elements superposed on an active video at a display of a user device. The graphical elements are arranged on a virtual surface representing positions of the cameras in a co-ordinate system. The active video originates from at least one camera, which is associated with the graphical element located in the middle portion of the display. In addition, the present disclosure enables correlating the relative position, recording direction, and order of the multiple cameras to provide multiple viewing positions, for example, on a user interface.
Type:
Grant
Filed:
January 19, 2016
Date of Patent:
January 8, 2019
Assignee:
Oy Vulcan Vision Corporation
Inventors:
Jussi Hyttinen, Mikko Välimäki, Hannu Eronen, Asko Roine
Abstract: Methods and apparatuses are disclosed for processing a plurality of captured image frames. An example method may include receiving a first image frame and a second image frame from a camera sensor. The example method may also include determining a first portion of the second image frame with a temporal difference from a corresponding first portion of the first image frame, wherein a second portion of the second image frame is without a temporal difference from a corresponding second portion of the first image frame. The example method may also include processing the second image frame, including processing the first portion of the second image frame, and preventing processing the second portion of the second image frame.
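The skip-unchanged-regions idea in the abstract above can be sketched in a few lines of Python. Frames are modeled as lists of pixel rows, and the row-level granularity, the summed-absolute-difference test, and the threshold are all assumptions standing in for whatever criterion a real implementation uses:

```python
def process_changed_rows(prev, curr, process, threshold=0):
    """Process only the rows of `curr` that differ from `prev`.

    A row whose summed absolute difference from the previous frame
    exceeds `threshold` is treated as temporally different and is
    processed; unchanged rows are copied through untouched, saving
    the processing work for them.
    """
    out = []
    for prev_row, curr_row in zip(prev, curr):
        diff = sum(abs(a - b) for a, b in zip(prev_row, curr_row))
        out.append(process(curr_row) if diff > threshold else list(curr_row))
    return out
```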
Abstract: Enterprise communication display systems enable life-like images for videoconferencing and entertainment productions. Life-like images appear in a 3D environment where imaged people are visible through specially configured see-through displays. Imaged people can be viewed amongst a reflected foreground. Methods for enterprise-wide deployments for corporate, healthcare, education, and government communications, including hotel properties and a property management system, are shown. Direct projection see-through screen configurations are created that eliminate unwanted secondary images in the room, conceal exposed projector lenses, reduce lens flare, make multi-use room installations practical, image conferees among a room environment, enable touch screen interactivity, and utilize extreme short throw projectors to reduce the cost and bulk of common throw projectors. Further, a multi-format VR/AR production system is disclosed enabling several formats to be created simultaneously.
Abstract: Various exemplary embodiments related to an electronic apparatus and a method for taking a photograph in the electronic apparatus are disclosed, and according to an exemplary embodiment, the electronic apparatus may include a display that displays a screen; a depth sensor that outputs a first image signal and depth information; an image sensor that outputs a second image signal; and a control unit that controls to display a preview screen on the display using the first image signal, obtain both depth information of a photographing moment and an image of the photographing moment using the second image signal in response to a request of photographing, and store the image and the depth information. Also, other various exemplary embodiments may be possible.
Abstract: A wide-angle camera emulating a PTZ camera via image data processing is used to generate a panoramic image of multiple regions for ease of viewing. A client can specify multiple regions for extraction from the panoramic image to stream to a separate server for further image processing and analysis.
Abstract: A subject (10), such as a billboard, has a filtering film (15) to absorb electromagnetic radiation specifically in a first wavelength band. A detector (60) provides a first detector signal (61a) relating to the first wavelength band and a second detector signal (61b) relating to another, different, second wavelength band, respectively. Suitably, the subject (10) appears with high intensity in one band and with low intensity in the other. A content replacement unit (40) produces a mask signal (43) by identifying regions of contrast between the first and second detector signals (61a, 61b) as target areas (75). A content substitution unit (47) selectively replaces the target areas (75) with alternate image content (42) to generate modified video images (72). The system is useful, for example, to generate multiple live television broadcasts each having differing billboard advertisements.
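A toy Python version of the masking logic in the abstract above, with pixels flattened to 1-D lists and a fixed contrast threshold; both simplifications are assumptions, since the actual system works on full video images:

```python
def replace_billboard(band1, band2, frame, alternate, contrast=0.5):
    """Substitute alternate content where the two detector bands disagree.

    `band1`/`band2` are per-pixel intensities from the two wavelength
    bands. Where the filtering film makes one band much darker than
    the other, the pixel is masked as a target area and replaced with
    the alternate image content; elsewhere the original pixel is kept.
    """
    out = []
    for b1, b2, orig, alt in zip(band1, band2, frame, alternate):
        is_target = abs(b1 - b2) > contrast
        out.append(alt if is_target else orig)
    return out
```

Running this once per broadcast feed, each with different alternate content, gives the multiple-broadcast use case the abstract mentions.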
Abstract: Concepts and technologies are described herein for objectizing and animating images. In accordance with the concepts and technologies disclosed herein, a presentation program is configured to import an image, to analyze the image and/or data describing the image, and to identify entities within the image. The presentation program creates objects corresponding to the identified entities, and the program presents the identified entities and/or the created objects via a user interface. The presentation program also can be configured to present one or more user interfaces via which a user selects entities and/or objects and specifies or controls animations of the selected entities or objects.
Abstract: An image reading apparatus includes an imaging unit, an image data analysis unit, and an image combination unit. The imaging unit images the document image multiple times from mutually differing angles to generate a plurality of data images each representing the document image. The image data analysis unit matches the plurality of data images on a per-region basis in each of the plurality of images of the document represented by the plurality of data images, obtains per-region brightnesses for the plurality of data images, and compares the obtained brightnesses among the plurality of data images to select, from among the plurality of data images, each data image whose region is comparatively brighter. The image combination unit uses the data images selected on the per-region basis as comparatively bright to generate a composite data image.
Abstract: A broadcast control apparatus for visual data includes a touch screen (32, 34) display panel operable to receive and display visual data simultaneously in real time from a plurality of visual sources. It also includes a touch screen graphical panel for the retrieval of control functions from a control function register. The visual data from at least one of the visual sources is selectable for use by finger pressure on the associated portion of the touch screen (32, 34) display panel and the selected data is modifiable in accordance with the retrieved control function.
Type:
Grant
Filed:
June 17, 2015
Date of Patent:
July 3, 2018
Assignee:
GRASS VALLEY CANADA
Inventors:
Mark Stoneham, David Griggs, David Sabine, Matthew Caves, Graham Broadbridge, Michael Reznik, Colin Grealy, Craig Morrison, Thomas Barnett, Christopher McMillan
Abstract: Parallel video effects, mix trees, and related methods are disclosed. Video data inputs are mixed in parallel according to a mix parameter signal associated with one of the video data inputs. A resultant parallel mixed video data output is further mixed with a further video data input according to a composite mix parameter signal, which corresponds to a product of mix parameter signals that are based on mix parameter signals respectively associated with multiple video data inputs. The mix parameter signals could be alpha signals, in which case the composite mix parameter signal could correspond to a product of complementary alpha signals that are complements of the alpha signals. Composite mix parameter signals and mix results could be truncated based on a number of levels in a multi-level mix tree and an error or error tolerance. Rounding could be applied to a final mix output or an intermediate mix result.
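The composite-parameter identity the abstract above relies on is standard alpha-compositing algebra: restructuring a two-level serial mix into a parallel one makes the third input's weight the product of the complementary alphas. A small Python check, assuming scalar pixel values for brevity:

```python
def serial_mix(v1, a1, v2, a2, v3):
    """Two-level serial mix: v2 over v3, then v1 over that result."""
    lower = a2 * v2 + (1 - a2) * v3
    return a1 * v1 + (1 - a1) * lower

def parallel_mix(v1, a1, v2, a2, v3):
    """Equivalent parallel form: v1 and v2 are mixed first, then the
    third input is weighted by the composite mix parameter, the
    product (1 - a1) * (1 - a2) of the complementary alphas."""
    partial = a1 * v1 + (1 - a1) * a2 * v2
    composite = (1 - a1) * (1 - a2)
    return partial + composite * v3
```

The parallel form lets the two mixes run concurrently instead of waiting on the lower level, which is the point of the mix-tree restructuring.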
Abstract: In current systems, augmented reality graphics are generated at a central broadcast facility or studio, where they are combined with the video that is transmitted to subscribers. By contrast, in the described system, the studio does not generate the graphics but transmits video together with real-time metadata to the end-user set-top device. The end-user device generates the augmented reality graphics, using the metadata to determine positional and other parameters for displaying the graphics. Shifting the generation of augmented reality graphics to the consumer level facilitates end-user customization and individualized targeting of information by a broadcaster or advertiser.
Type:
Grant
Filed:
September 1, 2016
Date of Patent:
May 8, 2018
Assignee:
Avid Technology, Inc.
Inventors:
Andrzej Wojdala, Piotr A. Borys, Ofir Benovici, Tomer Sela
Abstract: In a method of setting an OSD function according to a related technique, it is not possible to select among various superimposing techniques. In a case where a monitoring camera supports a plurality of superimposing techniques, a user must select one of the superimposing techniques through a troublesome operation. In view of the above, an image pickup apparatus is provided that includes a reception unit configured to receive an acquisition request for information associated with an image superimposing method of the image pickup apparatus, and a transmission unit configured to transmit the information associated with the image superimposing method of the image pickup apparatus when the acquisition request is received by the reception unit.
Abstract: Systems and methods for creating and distributing professional-quality pictorial souvenirs giving the illusion that guests of a facility were imaged at other locations, including making initial arrangements with guests, showing selections of background scene images at video displays and flashing chroma key images interspersed with frames of the scene images, taking key guest images in sync with the flashed key images, extracting guest image content from the key guest images and merging it into selected scene images, showing preview merged images for guest selection, and providing souvenir portfolios that include merged images to guests or designees after making financial arrangements, including for payment to third parties for copyright content included in the souvenirs and with advertisers for promotional material included in the souvenirs.
Abstract: The invention is a system for generating a dynamic three-dimensional model of a space, comprising a camera module (100) comprising an optical sensor adapted for recording image information of the space and a depth sensor adapted for recording depth information of the space, and a modelling module (300) adapted for generating a dynamic three-dimensional model of the space on the basis of the image information and the depth information. In the system according to the invention, the image information from the optical sensor and the depth information from the depth sensor are recorded at a plurality of discrete times. The system according to the invention comprises a synchronisation signal generating module that synchronises the discrete times associated with the image information with the discrete times associated with the depth information.
Type:
Grant
Filed:
July 29, 2013
Date of Patent:
April 17, 2018
Assignee:
Zinemath Zrt.
Inventors:
Norbert Komenczi, Balazs Oroszi, Gergely Balazs Soos
Abstract: Techniques and devices for creating an AutoLoop output video by adding synthetic camera motion to the AutoLoop output video. The AutoLoop output video is created from a set of frames. After generating the AutoLoop output video based on a plurality of loop parameters and at least a portion of the frames, synthetic camera motion is combined with the AutoLoop output video. The synthetic camera loop is based on a subset of the input frames and exhibits some amount of camera motion for that subset. Once the synthetic camera loop is generated, the synthetic camera loop and the video loop are combined to enhance the AutoLoop output video.
Type:
Grant
Filed:
September 23, 2016
Date of Patent:
April 3, 2018
Assignee:
Apple Inc.
Inventors:
Arwen V. Bradley, Samuel G. Noble, Rudolph van der Merwe, Jason Klivington, Douglas P. Mitchell, Duncan Robert Kerr
Abstract: Systems and methods are disclosed for providing composite content, such as video content, by compositing. A template content may be received and may have one or more color blocks. One or more drop-in content sets may each have one or more drop-in content items that correspond to the one or more color blocks of the template content. The one or more color blocks of the template content may be replaced with the corresponding drop-in content from one of the one or more drop-in content sets to generate a composite content. Furthermore, the composite content may have demographic, geographic, and/or behavioral parameters associated with it to enable targeting the composite content, such as in the form of advertisements and/or product or service recommendations, to one or more users.
Type:
Grant
Filed:
March 20, 2014
Date of Patent:
February 13, 2018
Assignee:
AMAZON TECHNOLOGIES, INC.
Inventors:
Simon Lloyd Spencer, Brian Fergus Burns, Reginald Jassal, Martin Christopher Hare Robertson, Alistair Francis Smith, David Neil Turner, Guy Adam Taylor
Abstract: An image processing device acquires first image data representing a first image and second image data representing a second image. The first image shows a part of a target object and the second image shows another part of the target object. The first image includes a first edge. The image processing device generates a plurality of first pixels so as to be arranged outside the first image and along the first edge by using a plurality of pixels that are arranged inside the first image and along the first edge. The image processing device determines a relative position between the first image and the second image by using the plurality of first pixels. The image processing device generates arranged image data representing an arranged image in which the first image and the second image are arranged according to the relative position so that the arranged image shows the target object.
Abstract: Provided is a voltage regulator which is not affected by a variation in output impedance of a reference voltage circuit, that is, which is configured to output voltage with a small change due to temperature. Two reference voltages respectively having positive and negative temperature coefficients are added together through transconductance amplifiers having large input impedances, respectively, and the resultant is amplified.
Abstract: A modular data center build method and system including prefabricated data center modules comprised of a plurality of racks, a plurality of rack-mounted computer systems, a door, electrical systems, cooling systems, power connections, water connections, video systems, biometric access system and a fire safety system. A steel beam structure may be employed to secure multiple vertical levels of a plurality of data center modules. The described modular data center build method and system with prefabricated data center modules may be employed to quickly deploy a data center in a repeatable sustainable manner, drastically reducing the build deployment time of a data center from design to fully operational.
Abstract: A cinemagraph is generated that includes one or more video loops. A cinemagraph generator receives an input video, and semantically segments the frames to identify regions that correspond to semantic objects and the semantic object depicted in each identified region. Input time intervals are then computed for the pixels of the frames of the input video. An input time interval for a particular pixel includes a per-pixel loop period and a per-pixel start time of a loop at the particular pixel. In addition, the input time interval of a pixel is based, in part, on one or more semantic terms which keep pixels associated with the same semantic object in the same video loop. A cinemagraph is then created using the input time intervals computed for the pixels of the frames of the input video.
Type:
Grant
Filed:
September 14, 2016
Date of Patent:
October 3, 2017
Assignee:
MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors:
Sing Bing Kang, Neel Suresh Joshi, Hugues Hoppe, Tae-Hyun Oh, Baoyuan Wang
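The per-pixel time interval in the cinemagraph abstract above reduces to a simple lookup rule; the helper name and list-of-frames representation below are hypothetical, and the semantic-segmentation step is omitted:

```python
def cinemagraph_pixel(frames, x, start, period, t):
    """Value of pixel x in output frame t of a looping cinemagraph.

    Each pixel x has its own loop start time and loop period into the
    input video; the output at time t replays input frame
    start + (t - start) mod period. A static (frozen) pixel is
    modeled with period 1. `frames` is a list of frames, each a list
    of pixel values.
    """
    return frames[start + (t - start) % period][x]
```

Pixels belonging to the same semantic object would share `start` and `period`, which is what the semantic terms in the abstract enforce.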
Abstract: An image processing apparatus performs: determining a reference region that is a partial region of a first image; calculating, for at least one of a plurality of candidate regions, a degree of similarity between the reference region and each of the candidate regions; identifying a corresponding region from among the plurality of candidate regions based on the at least one degree of similarity; and generating combined image data by using the first image data and the second image data. The combined image data represents a combined image in which the first image is combined with the second image by overlapping the reference region with the identified corresponding region.
Abstract: Use of separate range tone mapping for combined images can help minimize loss of image information in scenes that have drastically different luminance values, i.e., scenes that have both bright and shadowed regions. Separate range tone mapping is particularly useful for combined images, such as those from spherical camera systems, which may have a higher probability of including luminance variability. The resulting increased bit depth of separate range tone mapping can make the transition between different images that make up a combined image more subtle. Each of a plurality of images that make up a combined image can use a different tone map that is optimized for the particular image data of the image. Multiple tone maps that are applied to overlapping regions of the plurality of images can subsequently be combined to expand the bit depth of the overlapping regions.
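A deliberately simplified Python illustration of the separate-range idea in the abstract above: each constituent image gets its own tone map fitted to its own luminance range, and the maps are blended where the images overlap. Linear stretching and a 50/50 blend are assumptions standing in for whatever curves a real pipeline would use:

```python
def per_image_tone_map(image):
    """Linear tone map fitted to one image's own luminance range.

    Each image in a combined (e.g. spherical) capture gets a map
    stretching its own min..max to 0..1, so a dark image and a bright
    image each keep local contrast instead of sharing one global map.
    """
    lo, hi = min(image), max(image)
    span = hi - lo or 1  # avoid division by zero for flat images
    return [(v - lo) / span for v in image]

def blend_overlap(mapped_a, mapped_b):
    """Average the two separately tone-mapped overlap regions so the
    seam between the constituent images transitions gradually."""
    return [(a + b) / 2 for a, b in zip(mapped_a, mapped_b)]
```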
Abstract: A method, apparatus, and computer program product are described for improving a lens distortion curve that roughly approximates the distortion caused by a camera lens used to capture an event onto video. The present invention selects a generic lens distortion curve that roughly approximates the distortion caused by the camera lens while capturing the event onto the video. The video, as well as information from the generic lens distortion curve, is used to generate a camera model, which is used to integrate virtual insertions into the video. If the camera model is sufficiently accurate to present a realistic appearance of the virtual insertions to the remote viewer, it is then used to integrate further virtual insertions into the video. If the camera model is not sufficiently accurate, an iterative process is employed to refine it.
Abstract: A lighting device includes: a communication unit that communicates with a wearable device with at least one camera and receives at least one image captured by the camera; a light emitting unit including one or more light emitting elements; and a controller that detects a readable medium containing a plurality of characters or visual content containing at least one color from the image and controls the light emitting unit based on the readable medium or the visual content.
Abstract: A display method is provided that reduces the probability of communication error without causing significant deterioration of picture quality. The method includes specifying, as a specified light emission period, a light emission period in which light emission is performed for greater than or equal to a time required for transmitting a block included in a visible light communication signal, out of one or more light emission periods in which light emission is performed for displaying an image included in a video signal. The method also includes transmitting the block of the visible light communication signal by luminance changing in the specified light emission period.
Type:
Grant
Filed:
December 27, 2013
Date of Patent:
May 9, 2017
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors:
Mitsuaki Oshima, Koji Nakanishi, Hideki Aoyama, Koji Aoto, Akira Shiokawa, Toshiyuki Maeda, Akihiro Ueki, Takashi Suzuki
Abstract: A method and apparatus for improving quality of an enlarged image are provided. The apparatus includes first and second image input units for outputting first and second images which are obtained by capturing the same subject at different positions spaced apart by a predetermined gap, a first image processor for converting a resolution of the first image to a preview resolution, a display for displaying the first image from the first image processor, a second image processor for, when an area to be enlarged in the displayed first image is selected, cropping an area corresponding to the selected area from the second image, and a controller for controlling the display to display the cropped area on the first image in an overlaying manner. Consequently, a user may view a high-magnification image cropped from a high-definition image and an original image together.
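The core geometric step in this abstract, mapping an area selected on the preview-resolution first image to the corresponding area of the higher-resolution second image before cropping, can be sketched as below. The simple proportional mapping and the function names are assumptions; the patent's apparatus would also account for the parallax between the two spaced-apart image input units.

```python
# Hedged sketch: scale a selection from the preview image onto the
# high-definition second image, then crop that region for overlay.

def map_region(sel, preview_size, full_size):
    """Scale a (x, y, w, h) selection from preview to full resolution."""
    sx = full_size[0] / preview_size[0]
    sy = full_size[1] / preview_size[1]
    x, y, w, h = sel
    return (int(x * sx), int(y * sy), int(w * sx), int(h * sy))

def crop(image, region):
    """Crop region=(x, y, w, h) from an image stored as rows of pixels."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

# A selection on a 640x480 preview mapped onto a 2560x1920 capture
region = map_region((100, 50, 80, 60), (640, 480), (2560, 1920))
print(region)  # → (400, 200, 320, 240)
```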
Type:
Grant
Filed:
February 27, 2013
Date of Patent:
April 11, 2017
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Jin-Hee Na, Min-Chul Kim, Jae-Sik Sohn, Young-Kwon Yoon
Abstract: A method of providing an image to be displayed includes: providing captured scene data representing one or more images of a real scene and illumination data representing real illumination impinging on the real scene; providing a virtual reality image of a theoretical object by modeling the theoretical object using the illumination data to define the illumination impinging on it; and providing a combined image that includes elements of the real scene, based on the captured scene data, together with the virtual reality image.
Abstract: A system and method for automatically repositioning virtual and physical elements in a scene. The system and method being configured to receive a video frame, receive data, including position data, describing a first element to be imaged in the video frame, receive data, including position data, describing a second element to be imaged in the video frame, assign a dynamic status to the first element and automatically reposition at least the first element to create a modified video frame.
Abstract: Use of separate range tone mapping for combined images can help minimize loss of image information in scenes that have drastically different luminance values, i.e., scenes that have both bright and shadowed regions. Separate range tone mapping is particularly useful for combined images, such as those from spherical camera systems, which may have a higher probability of including luminance variability. The resulting increased bit depth of separate range tone mapping can make the transition between different images that make up a combined image more subtle. Each of a plurality of images that make up a combined image can use a different tone map that is optimized for the particular image data of the image. Multiple tone maps that are applied to overlapping regions of the plurality of images can subsequently be combined to expand the bit depth of the overlapping regions.
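A minimal sketch of the idea, not the patented algorithm: each source image gets its own tone map fitted to that image's luminance range, and in the overlapping region the two maps are blended so the transition across the seam stays subtle. The linear stretch, the clamping, and all names here are invented for illustration.

```python
# Toy separate-range tone mapping: per-image tone maps, blended
# across the overlap between two images of a combined (stitched) image.

def fit_tone_map(pixels):
    """Linear tone map stretching this image's luminance range to [0, 1]."""
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1.0
    return lambda v: min(1.0, max(0.0, (v - lo) / span))

def blend_overlap(value, map_a, map_b, t):
    """In the overlap, mix both tone maps; t runs 0 -> 1 across the seam."""
    return (1.0 - t) * map_a(value) + t * map_b(value)

bright = [0.6, 0.8, 1.0]   # luminance samples from a bright region
shadow = [0.0, 0.1, 0.2]   # luminance samples from a shadowed region
tm_bright = fit_tone_map(bright)
tm_shadow = fit_tone_map(shadow)

# A pixel of luminance 0.2 at the middle of the overlap (t = 0.5)
mixed = blend_overlap(0.2, tm_shadow, tm_bright, 0.5)
print(round(mixed, 3))  # → 0.5
```

Because each map spends its full output range on its own image's luminance span, dark and bright regions both keep detail, which is the bit-depth benefit the abstract describes.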
Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes receiving a current frame, the current frame including one or more macroblocks, analyzing the current frame using a first set of image characteristics to determine if logo detection can be performed on the current frame, and performing the logo detection on the current frame if the current frame satisfies the first set of image characteristics to determine presence of a logo macroblock among the one or more macroblocks.
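The gating structure described above, first checking a frame against a set of image characteristics and only then running per-macroblock logo detection, can be sketched as follows. The specific characteristics, thresholds, and the per-macroblock rule are invented placeholders; the abstract does not disclose them.

```python
# Hypothetical sketch of the gating step: logo detection runs on a
# frame only if the frame satisfies a first set of image characteristics.

def satisfies_characteristics(frame_stats, checks):
    """frame_stats: measured values; checks: allowed (lo, hi) ranges."""
    return all(lo <= frame_stats.get(name, lo - 1) <= hi
               for name, (lo, hi) in checks.items())

def detect_logo(frame_stats, macroblocks, checks):
    """Gate a (placeholder) per-macroblock logo test on the frame checks."""
    if not satisfies_characteristics(frame_stats, checks):
        return []  # frame fails the first set of characteristics: skip
    # placeholder rule: logos tend to be static, low-variance blocks
    return [i for i, mb in enumerate(macroblocks)
            if mb["motion"] == 0 and mb["variance"] < 5.0]

checks = {"mean_luma": (16, 235), "scene_change": (0, 0)}
stats = {"mean_luma": 120, "scene_change": 0}
blocks = [{"motion": 0, "variance": 2.0}, {"motion": 3, "variance": 1.0}]
print(detect_logo(stats, blocks, checks))  # → [0]
```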
Abstract: Disclosed are touch detection systems and methods. In one embodiment, a display panel enabling touch detection comprises a touch pane configured to allow a light including a detection light, a back inner surface, and a display generation module located between the touch pane and the back inner surface. The display generation module is configured to produce a visual display at the touch pane. The display generation module is further configured to pass the detection light received from the touch pane. The display panel also comprises a detection light receiver situated at the back inner surface configured to detect the detection light received through the display generation module for enabling a detection of touches to the touch pane of the display panel.
Abstract: A video display apparatus includes video output units, a display, and a control apparatus. Each video output unit outputs a signal representing a first video that is captured or received; when a predetermined signal is input from outside, the unit instead outputs, only for a predetermined period, a signal representing a second video having a fixed image. The display displays a video based on the signal output by one of the video output units. The control apparatus outputs the predetermined signal to the video output units and then switches which unit's signal is output to the display, performing the switch while the units supplying the display before and after the switch are both outputting their second videos.
Abstract: It is often desirable to register a first image to a second image, such as to form a panoramic image. The image registration technique discussed herein forms first and second gradients of the first and second images, respectively, then aligns phase vectors of the first and second gradients by estimating the parameters of a projective (homographic) coordinate transformation that can map the first gradient to the second gradient. The estimated parameters can be used to map the first image to the second image. In some examples, each gradient pixel includes a complex number, such as a unit vector, having a normalized amplitude and a phase vector that indicates the direction of greatest change, at that pixel, for the respective image. Aligning the image gradient phase vectors, rather than image intensity values, can align images produced under different lighting conditions, and/or produced in different wavelength regions of the electromagnetic spectrum.
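The representation this abstract describes, each gradient pixel as a unit complex number whose phase points along the direction of greatest change, can be illustrated in simplified form. The sketch below shows only the phase-vector construction and an alignment score between two gradient fields; the full technique goes further and estimates the parameters of a projective transform. Function names are invented.

```python
# Simplified sketch: unit-amplitude complex gradients, and a score
# showing that phase vectors agree even when lighting differs.

def phase_gradient(img):
    """Unit-amplitude complex gradient of a 2-D list of intensities."""
    h, w = len(img), len(img[0])
    out = [[0j] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal forward difference
            gy = img[y + 1][x] - img[y][x]   # vertical forward difference
            g = complex(gx, gy)
            out[y][x] = g / abs(g) if g else 0j  # keep phase, drop magnitude
    return out

def alignment_score(ga, gb):
    """Mean real part of conjugate products; 1.0 means phases agree."""
    num, den = 0.0, 0
    for row_a, row_b in zip(ga, gb):
        for a, b in zip(row_a, row_b):
            if a and b:
                num += (a * b.conjugate()).real
                den += 1
    return num / den if den else 0.0

ramp = [[x + 2 * y for x in range(4)] for y in range(4)]
bright_ramp = [[10 * v + 7 for v in row] for row in ramp]  # relit scene
score = alignment_score(phase_gradient(ramp), phase_gradient(bright_ramp))
print(round(score, 6))  # → 1.0: identical phases despite intensity change
```

This is exactly the property the abstract relies on: gain and offset changes in intensity rescale the gradient but leave its phase untouched, so images taken under different lighting or in different wavelength bands can still be aligned.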
Abstract: A VC (video conferencing) device is provided for a VC system that comprises first and second electronic devices, each including an input module. The VC device comprises a communication module and a processing module that are electrically connected to each other. The communication module communicates with the first and second electronic devices: when it receives a background image from the first electronic device, it broadcasts the background image to the second electronic device, and it receives writing images from both the first and the second electronic device. When a writing image is received, the processing module superimposes it on the background image to produce a synthesized image. The communication module broadcasts the synthesized image so that it is displayed on the first and second electronic devices simultaneously.
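The superimposing step, combining a received writing image with the stored background image into a synthesized image, can be sketched with a toy compositing rule. Treating one pixel value as "blank" (transparent) is an assumption made here; the abstract does not say how transparency is represented.

```python
# Toy sketch of the superimposing step: non-blank pixels of the
# writing image overwrite the background to form the synthesized image.

def synthesize(background, writing, blank=0):
    """Overlay writing onto background; `blank` marks transparent pixels."""
    return [[w if w != blank else b
             for b, w in zip(bg_row, wr_row)]
            for bg_row, wr_row in zip(background, writing)]

bg = [[9, 9, 9], [9, 9, 9]]   # broadcast background image
wr = [[0, 5, 0], [0, 0, 5]]   # writing image; 0 = nothing drawn
print(synthesize(bg, wr))  # → [[9, 5, 9], [9, 9, 5]]
```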
Abstract: A method and apparatus for selecting information from a video source to be displayed on at least a first common display screen in a collaborative workspace having a switching device. The method comprises the steps of: providing a selectable control interface that includes at least one indicator capable of indicating at least first and second different states; associating a video source with the switching device so that video information from the video source is presented to the switching device; when the video source is associated with the switching device, causing the at least one indicator to indicate the first state; when the selectable control interface is selected, causing the at least one indicator to indicate the second state; and providing the video information from the video source to the common display screen via the switching device.
Type:
Grant
Filed:
June 2, 2014
Date of Patent:
October 11, 2016
Assignee:
Steelcase Inc.
Inventors:
Lewis Epstein, Brett Kincaid, Hyun Yoo, Suzanne Stage, Lukas Scherrer, Larry Cheng