Patents by Inventor Eugen Wige

Eugen Wige has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10805615
    Abstract: A technique for synchronizing video receivers with a video stream already in progress includes caching a key frame in a transport protocol component of a video communication system and providing the key frame on demand to any receiver attempting to join the stream and/or to rejoin the stream after an error, such as a dropped packet. Once a receiver obtains the key frame, the receiver requests that the source of the video stream issue a new sync-point frame, where the sync-point frame depends on the key frame but on no other frame for its content. The receiver may then proceed to display rendered video beginning with the sync-point frame.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: October 13, 2020
    Assignee: LogMeIn, Inc.
    Inventors: Steffen Schulze, Eugen Wige
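
The join/resync flow summarized in the abstract of 10805615 can be pictured with a short sketch. This is not the patented implementation; the class names, the single cached key frame, and the sequence-number bookkeeping are illustrative assumptions.

```python
# Minimal sketch of the join/resync flow described above: a transport
# component caches the latest key frame, a late-joining receiver fetches it
# and then asks the source for a sync-point frame that references only that
# key frame. All class and method names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Frame:
    seq: int
    kind: str                    # "key", "sync_point", or "delta"
    ref_seq: int | None = None   # frame this one depends on, if any

class TransportComponent:
    """Relays frames and keeps a copy of the most recent key frame."""
    def __init__(self):
        self.cached_key: Frame | None = None

    def relay(self, frame: Frame) -> Frame:
        if frame.kind == "key":
            self.cached_key = frame      # cache for late joiners
        return frame

class Source:
    def __init__(self):
        self.seq = 0

    def next_seq(self) -> int:
        self.seq += 1
        return self.seq

    def issue_sync_point(self, key: Frame) -> Frame:
        # A sync-point frame depends on the cached key frame and nothing else.
        return Frame(seq=self.next_seq(), kind="sync_point", ref_seq=key.seq)

class Receiver:
    def __init__(self):
        self.have: dict[int, Frame] = {}

    def join(self, transport: TransportComponent, source: Source) -> Frame:
        key = transport.cached_key           # step 1: get the cached key frame
        assert key is not None, "no key frame cached yet"
        self.have[key.seq] = key
        sync = source.issue_sync_point(key)  # step 2: request a sync-point frame
        self.have[sync.seq] = sync
        return sync                          # rendering can begin with this frame

if __name__ == "__main__":
    src, transport = Source(), TransportComponent()
    transport.relay(Frame(seq=src.next_seq(), kind="key"))             # ongoing stream
    transport.relay(Frame(seq=src.next_seq(), kind="delta", ref_seq=1))
    late = Receiver()
    first_displayable = late.join(transport, src)
    print("late receiver starts rendering at frame", first_displayable.seq)
```
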
  • Patent number: 10326815
    Abstract: Techniques are provided for a source computer to generate an encoded video stream having layered sub-streams with differing bitrates while allowing a streaming server to intelligently distribute the appropriate sub-streams to recipients based on their available bandwidth. This may be accomplished by having the source computer generate and send metadata along with the encoded stream to allow the streaming server to detect which data packets belong to each sub-stream. The streaming server is then able to selectively send consistent video sub-streams at appropriate bitrates to each recipient.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: June 18, 2019
    Assignee: LogMeIn, Inc.
    Inventors: Robert Chalmers, Sascha Kuemmel, Eugen Wige, Paul Elsner, Steffen Schulze
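
As a rough illustration of the layered-streaming idea in 10326815, the sketch below tags packets with a sub-stream (layer) identifier and lets a server pick layers per recipient from that metadata alone. The layer numbering, cumulative bitrates, and selection rule are assumptions, not details from the patent.

```python
# Hedged sketch: the source tags each packet with the sub-stream (layer) it
# belongs to, and the server uses only that metadata to decide which layers
# to forward to each recipient based on available bandwidth.
from dataclasses import dataclass

# Assumed cumulative bitrate (kbit/s) needed to decode up to each layer.
LAYER_BITRATE_KBPS = {0: 300, 1: 800, 2: 2000}   # 0 = base layer

@dataclass
class Packet:
    payload: bytes
    layer: int          # metadata added by the source encoder

def layers_for_bandwidth(available_kbps: int) -> set[int]:
    """Pick the largest consistent set of layers {0..n} that fits the budget."""
    chosen = set()
    for layer in sorted(LAYER_BITRATE_KBPS):
        if LAYER_BITRATE_KBPS[layer] <= available_kbps:
            chosen.add(layer)
        else:
            break          # layers are cumulative, so stop at the first miss
    return chosen

def forward(packets: list[Packet], available_kbps: int) -> list[Packet]:
    """Server-side selection: keep only packets of the chosen sub-streams."""
    allowed = layers_for_bandwidth(available_kbps)
    return [p for p in packets if p.layer in allowed]

if __name__ == "__main__":
    stream = [Packet(b"I0", 0), Packet(b"P1", 1), Packet(b"P2", 2), Packet(b"P0", 0)]
    for bw in (250, 1000, 5000):
        kept = forward(stream, bw)
        print(f"{bw} kbit/s -> layers {sorted({p.layer for p in kept})}")
```
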
  • Patent number: 10250874
    Abstract: In a method for coding a sequence of digital images, a prediction error between predicted values and the original values of pixels is processed for generating the coded sequence of digital images. A preset prediction mode is an intra-prediction mode based on pixels of a single image: for a region of pixels with reconstructed values in the single image and for a template of an image area, a first patch of pixels in the region that surround a first pixel to be predicted, defined by the template, is compared with several second patches, each surrounding a second pixel in the region, using a similarity measure. A predicted value of the first pixel is then determined from the values of the one or more second pixels whose patches have the highest similarity, according to the similarity measure, among all second pixels in the region.
    Type: Grant
    Filed: July 26, 2013
    Date of Patent: April 2, 2019
    Assignee: Siemens Aktiengesellschaft
    Inventors: Peter Amon, Andreas Hutter, André Kaup, Eugen Wige
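
The template-matching intra prediction summarized above can be sketched as follows; the L-shaped template, the SAD similarity measure, and the tiny search region are illustrative assumptions rather than the claimed method.

```python
# Illustrative sketch: the patch around the pixel to be predicted (the
# "first patch") is compared against patches around candidate pixels in the
# already-reconstructed region, and the best-matching candidate's value
# becomes the prediction.
TEMPLATE = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # causal (already decoded) neighbors

def patch(img, y, x):
    """Values of the template pixels around (y, x); None if any lie outside."""
    vals = []
    for dy, dx in TEMPLATE:
        yy, xx = y + dy, x + dx
        if 0 <= yy < len(img) and 0 <= xx < len(img[0]):
            vals.append(img[yy][xx])
        else:
            return None
    return vals

def sad(a, b):
    """Sum of absolute differences between two patches (the similarity measure)."""
    return sum(abs(u - v) for u, v in zip(a, b))

def predict_pixel(recon, y, x, candidates):
    """Predict recon[y][x] from the best-matching second patch in the region."""
    first_patch = patch(recon, y, x)
    best_val, best_cost = 128, float("inf")        # fallback: mid-gray
    if first_patch is None:
        return best_val
    for cy, cx in candidates:
        second_patch = patch(recon, cy, cx)
        if second_patch is None:
            continue
        cost = sad(first_patch, second_patch)
        if cost < best_cost:
            best_cost, best_val = cost, recon[cy][cx]
    return best_val

if __name__ == "__main__":
    # Reconstructed pixels (rows 0-1 and the start of row 2) around a vertical
    # edge; predict the not-yet-decoded pixel at (2, 2), shown as 0.
    recon = [
        [10, 10, 200, 200],
        [10, 10, 200, 200],
        [10, 10,   0,   0],
    ]
    candidates = [(yy, xx) for yy in range(2) for xx in range(4)] + [(2, 0), (2, 1)]
    print("predicted value:", predict_pixel(recon, 2, 2, candidates))
```
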
  • Patent number: 10091511
    Abstract: A technique for encoding a video signal includes generating a representative value for each block of one or more video frames by applying a predetermined function to the pixels of the respective block. To perform a block matching operation for a current block, the technique applies the predetermined function to the current block and interrogates representative values of blocks at specified locations in a spatial and/or temporal vicinity of the current block to find a matching block whose representative value matches the one generated for the current block.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: October 2, 2018
    Assignee: GetGo, Inc.
    Inventors: Eugen Wige, Steffen Schulze, Sascha Kuemmel
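
A minimal sketch of the representative-value block matching described in 10091511: a cheap checksum stands in for the patent's predetermined function, and candidates are verified pixel by pixel only when their stored representative values match. The checksum, block size, and candidate locations are assumptions.

```python
# Sketch: a cheap per-block representative value lets most candidates be
# rejected without comparing pixels; only matching values trigger a full check.
BLOCK = 4   # assumed block size in pixels

def representative(block):
    """Cheap stand-in for the predetermined function: a simple checksum."""
    return sum(sum(row) for row in block) & 0xFFFF

def get_block(frame, y, x):
    return [row[x:x + BLOCK] for row in frame[y:y + BLOCK]]

def find_match(frame, cur_y, cur_x, candidates):
    """Return the first candidate whose representative value matches the
    current block's and whose pixels are identical on verification."""
    current = get_block(frame, cur_y, cur_x)
    target = representative(current)
    for cy, cx in candidates:
        cand = get_block(frame, cy, cx)
        if representative(cand) != target:
            continue                     # cheap rejection
        if cand == current:              # full verification
            return (cy, cx)
    return None

if __name__ == "__main__":
    # 8x8 frame whose top-left and bottom-right 4x4 blocks are identical.
    tile = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    frame = [[0] * 8 for _ in range(8)]
    for y in range(4):
        frame[y][0:4] = tile[y]
        frame[y + 4][4:8] = tile[y]
    print("match for block at (4, 4):", find_match(frame, 4, 4, [(0, 0), (0, 4), (4, 0)]))
```
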
  • Publication number: 20180176279
    Abstract: Techniques are provided for a source computer to generate an encoded video stream having layered sub-streams with differing bitrates while allowing a streaming server to intelligently distribute the appropriate sub-streams to recipients based on their available bandwidth. This may be accomplished by having the source computer generate and send metadata along with the encoded stream to allow the streaming server to detect which data packets belong to each sub-stream. The streaming server is then able to selectively send consistent video sub-streams at appropriate bitrates to each recipient.
    Type: Application
    Filed: December 20, 2016
    Publication date: June 21, 2018
    Inventors: Robert Chalmers, Sascha Kuemmel, Eugen Wige, Paul Elsner, Steffen Schulze
  • Publication number: 20180167631
    Abstract: A technique for synchronizing video receivers with a video stream already in progress includes caching a key frame in a transport protocol component of a video communication system and providing the key frame on demand to any receiver attempting to join the stream and/or to rejoin the stream after an error, such as a dropped packet. Once a receiver obtains the key frame, the receiver requests that the source of the video stream issue a new sync-point frame, where the sync-point frame depends on the key frame but on no other frame for its content. The receiver may then proceed to display rendered video beginning with the sync-point frame.
    Type: Application
    Filed: December 14, 2016
    Publication date: June 14, 2018
    Inventors: Steffen Schulze, Eugen Wige
  • Patent number: 9837100
    Abstract: Techniques of conducting an online meeting involve outputting ambient sound to a participant of an online meeting. Along these lines, in an online meeting during which a participant wears headphones, the participant's computer receives microphone input that contains both speech from the participant and ambient sound that the participant may wish to hear. In response to receiving the microphone input, the participant's computer separates low-volume sounds from high-volume sounds. However, instead of suppressing this low-volume sound from the microphone input, the participant's computer renders this low-volume sound. In most cases, this low-volume sound represents ambient sound generated in the vicinity of the meeting participant. The participant's computer then mixes the low-volume sound with speech received from other conference participants to form output in such a way that the participant may distinguish this sound from the received speech.
    Type: Grant
    Filed: May 5, 2015
    Date of Patent: December 5, 2017
    Assignee: GetGo, Inc.
    Inventors: Eugen Wige, Klaus Reindl
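
The ambient-sound pass-through in 9837100 can be sketched as a per-frame level split: quiet microphone frames are treated as ambient sound and mixed into the headphone output alongside remote speech. The frame size, RMS threshold, and gain below are illustrative assumptions.

```python
# Sketch: split the local mic signal by level into "own speech" (loud) and
# "ambient" (quiet); render the ambient part locally, mixed with remote speech.
FRAME = 160                 # assumed samples per frame (10 ms at 16 kHz)
AMBIENT_THRESHOLD = 0.05    # assumed RMS level separating quiet from loud frames
AMBIENT_GAIN = 0.7          # assumed rendering gain for ambient sound

def rms(frame):
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def split_mic(mic_frame):
    """Classify a mic frame as ambient (low volume) or own speech (high volume)."""
    if rms(mic_frame) < AMBIENT_THRESHOLD:
        return mic_frame, [0.0] * len(mic_frame)     # ambient, no own speech
    return [0.0] * len(mic_frame), mic_frame         # own speech, no ambient

def headphone_output(mic_frame, remote_frame):
    """Mix locally rendered ambient sound with remote participants' speech."""
    ambient, _speech_to_send = split_mic(mic_frame)
    return [AMBIENT_GAIN * a + r for a, r in zip(ambient, remote_frame)]

if __name__ == "__main__":
    quiet_mic = [0.01] * FRAME           # e.g. a doorbell in the background
    loud_mic = [0.5] * FRAME             # the participant speaking
    remote = [0.2] * FRAME               # far-end speech
    print("quiet frame -> output sample:", headphone_output(quiet_mic, remote)[0])
    print("loud frame  -> output sample:", headphone_output(loud_mic, remote)[0])
```
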
  • Publication number: 20160329063
    Abstract: Techniques of conducting an online meeting involve outputting ambient sound to a participant of an online meeting. Along these lines, in an online meeting during which a participant wears headphones, the participant's computer receives microphone input that contains both speech from the participant and ambient sound that the participant may wish to hear. In response to receiving the microphone input, the participant's computer separates low-volume sounds from high-volume sounds. However, instead of suppressing this low-volume sound from the microphone input, the participant's computer renders this low-volume sound. In most cases, this low-volume sound represents ambient sound generated in the vicinity of the meeting participant. The participant's computer then mixes the low-volume sound with speech received from other conference participants to form output in such a way that the participant may distinguish this sound from the received speech.
    Type: Application
    Filed: May 5, 2015
    Publication date: November 10, 2016
    Inventors: Eugen Wige, Klaus Reindl
  • Publication number: 20160212420
    Abstract: A method is provided for coding a sequence of digital images (I), wherein the method uses a number of prediction modes for predicting values of pixels (P1) in the images (I) based on reconstructed values of pixels in image areas processed previously, and where a prediction error (PE) between predicted values and the original values of pixels (P1) is processed for generating the coded sequence of digital images (CI). A preset prediction mode is an intra-prediction mode based on pixels of a single image (I) and comprises two steps: comparing a first patch of pixels surrounding a pixel to be predicted with several second patches of reconstructed pixels, and determining the predicted value from the second pixels whose patches are most similar.
    Type: Application
    Filed: July 26, 2013
    Publication date: July 21, 2016
    Inventors: Peter Amon, Andreas Hutter, André Kaup, Eugen Wige
  • Publication number: 20160198158
    Abstract: A technique for encoding a video signal includes generating a representative value for each block of one or more video frames by applying a predetermined function to the pixels of the respective block. To perform a block matching operation for a current block, the technique applies the predetermined function to the current block and interrogates representative values of blocks at specified locations in a spatial and/or temporal vicinity of the current block to find a matching block whose representative value matches the one generated for the current block.
    Type: Application
    Filed: January 5, 2015
    Publication date: July 7, 2016
    Inventors: Eugen Wige, Steffen Schulze, Sascha Kuemmel
  • Publication number: 20150334417
    Abstract: A method is provided for coding a sequence of digital images, wherein the method uses a number of prediction modes for predicting values of pixels in the images based on reconstructed values of pixels in image areas processed previously, where a prediction error between predicted values and the original values of pixels is processed for generating the coded sequence of digital images.
    Type: Application
    Filed: December 18, 2012
    Publication date: November 19, 2015
    Inventors: Peter Amon, Andreas Hutter, André Kaup, Eugen Wige
  • Patent number: 8837850
    Abstract: A prediction error (e_q[x,y]) is added to a predicted frame (f̂[x,y]) or a predicted block to obtain a decoded frame (g_q[x,y]) or a decoded block, which is further used in a prediction loop by an encoder or sent to the output of a decoder. The reference frame (g_q[x,y]) or the reference block includes a useful signal part and a noise signal part. The reference frame or reference block is passed through a dedicated noise-reducing filter to reduce or eliminate its noise signal part.
    Type: Grant
    Filed: January 12, 2011
    Date of Patent: September 16, 2014
    Assignee: Siemens Aktiengesellschaft
    Inventors: Peter Amon, Andreas Hutter, André Kaup, Eugen Wige
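
To show where the reference-frame filtering in 8837850 sits in the coding loop, the sketch below adds the prediction error to the predicted frame and then denoises the result before it is stored as a reference. The 3x3 box filter is only an illustrative stand-in for the dedicated noise-reducing filter.

```python
# Sketch of the loop placement: decoded frame = predicted frame + prediction
# error, then a noise-reducing filter is applied before the frame is reused
# as a reference. The box filter below is an assumed, simple substitute.
def add_prediction_error(predicted, error):
    """g_q = f_hat + e_q, element-wise."""
    return [[p + e for p, e in zip(prow, erow)]
            for prow, erow in zip(predicted, error)]

def box_filter_3x3(frame):
    """Simple noise-reducing filter: average each pixel with its neighbors."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [frame[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def next_reference(predicted, error):
    """Decoded frame, denoised before being stored as a reference."""
    decoded = add_prediction_error(predicted, error)
    return box_filter_3x3(decoded)

if __name__ == "__main__":
    predicted = [[100] * 4 for _ in range(4)]
    error = [[2, -1, 0, 1], [0, 3, -2, 0], [1, 0, 0, -1], [-2, 1, 2, 0]]
    ref = next_reference(predicted, error)
    print("denoised reference row 0:", [round(v, 1) for v in ref[0]])
```
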
  • Publication number: 20120288213
    Abstract: A prediction error (e_q[x,y]) is added to a predicted frame (f̂[x,y]) or a predicted block to obtain a decoded frame (g_q[x,y]) or a decoded block, which is further used in a prediction loop by an encoder or sent to the output of a decoder. The reference frame (g_q[x,y]) or the reference block includes a useful signal part and a noise signal part. The reference frame or reference block is passed through a dedicated noise-reducing filter to reduce or eliminate its noise signal part.
    Type: Application
    Filed: January 12, 2011
    Publication date: November 15, 2012
    Inventors: Peter Amon, Andreas Hutter, André Kaup, Eugen Wige