Identifying portions of a media stream
Aspects relate to tagging portions of streaming media such that one or more actions can be taken on the tagged portions. An action can be to remove a section of the streaming media. Another action can be to retain a portion of the streaming media, regardless of whether or not other portions are retained. Another action can be to replace content with different content. The tagging can be facilitated by the use of a lightweight embedded watermark. In another example, the tagging can be facilitated through the use of watermark types.
This disclosure relates to the identification of one or more portions of a media stream.
BACKGROUND

The Internet and the development and continuing enhancement of media-enabled portable computing devices have dramatically altered the processes for generating and consuming media content. For example, using a media-capable device with an Internet connection, users can consume media content almost anywhere and at almost any time. The convenience and accessibility of media content (e.g., on demand) through the Internet has resulted in the rapid growth of Internet media consumption.
Streaming is a common method of media delivery across the Internet. Streaming media can be continuously received and presented to an end-user while the media is being delivered by a streaming provider. Streaming allows media that includes large amounts of data to be displayed on a client device even if the entire media file has not yet been transmitted and/or received at the client device.
In an example, streaming media that is originally broadcast on television might be rebroadcast, such as on a video sharing website, and therefore can be readily available on a client device. In some cases, the original broadcast might occur in a first country and the rebroadcast might occur in a second country. Due to the nature of the broadcasts, there might be content in the original broadcast that should be excluded from the rebroadcast. In one example, advertisements presented in the first country might not be authorized for redistribution in the second country. Thus, such advertisements have to be removed before the media is rebroadcast. Identifying and removing or altering the content can be difficult and time consuming. Further, identifying the portions of the media stream that contain the content can be computationally heavy, which can strain devices with limited processing capabilities.
SUMMARY

The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is intended to neither identify key or critical elements of the disclosure nor delineate any scope of particular embodiments of the disclosure, or any scope of the claims. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
In accordance with one or more embodiments and corresponding disclosure, various non-limiting aspects are described in connection with the identification of content in one or more media streams. The content identification can be based on the identification of one or more portions of each media stream that contain the content. The portion(s) can be identified such that further processing of the content can be performed. The further processing can include exclusion of the identified portion(s) in a rebroadcast, specific inclusion of the identified portion(s) in a rebroadcast, alteration of one or more portions (e.g., replacing a set of content with a different set of content), and so forth.
An aspect relates to a system that can include a memory and a processor. The memory can store computer executable components. The processor can execute the computer executable components stored in the memory. The computer executable components can include a reception component that can receive a media stream that includes a plurality of segments. At least one segment of the plurality of segments can comprise a watermark embedded in the media stream. The computer executable components can also include a detection component that can distinguish the at least one segment based on the watermark and a processing component that can selectively process the at least one segment as a function of the watermark.
Another aspect relates to a method that can include using a processor to execute computer executable components stored in a memory. The method can include accepting an incoming media stream and detecting a presence of a watermark embedded in at least one portion of the incoming media stream. The method can also include selectively performing a function on the at least one portion of the incoming media stream based on an identification of the watermark.
A further aspect relates to a method that can include using a processor to execute computer executable components stored in a memory. The method can include accepting an incoming media stream and a supplementary stream. The method can also include detecting a signal in the supplementary stream. The signal can identify one or more portions of the incoming media stream. Further, the method can include selectively performing further processing on the one or more portions as a result of the detecting.
Still another aspect relates to a device that can include a memory that can store computer executable components and a processor that can execute the computer executable components stored in the memory. The device can include a reception component that can receive a media stream comprising a first portion comprising a first set of content and a second portion comprising a second set of content. The first portion can comprise a watermark embedded in the media stream. The device can also include a detection component that can distinguish the first portion from the second portion based on the watermark. Further, the device can include a processing component that can selectively process the first portion as a function of the watermark.
The following description and the annexed drawings set forth certain illustrative aspects of the disclosure. These aspects are indicative, however, of but a few of the various ways in which the principles of the disclosure may be employed. Other advantages and novel features of the disclosure will become apparent from the following detailed description of the disclosure when considered in conjunction with the drawings.
Various non-limiting implementations are further described with reference to the accompanying drawings.
Various embodiments or features of the subject disclosure are described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject disclosure. It may be evident, however, that the disclosed subject matter can be practiced without these specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures and components are shown in block diagram form in order to facilitate describing the subject disclosure.
It is to be appreciated that in accordance with one or more implementations described in this disclosure, users can opt-in or opt-out of providing personal information, demographic information, location information, proprietary information, sensitive information, or the like in connection with data gathering aspects. Moreover, one or more implementations described herein can provide for anonymizing collected, received, or transmitted data.
By way of introduction, the subject matter disclosed herein relates to a media stream comprising multiple sets of content. An identifier, such as a lightweight watermark or other means of identifying specific content (e.g., based on identification of portions of the media stream that contain the content), is utilized in connection with performing one or more actions with respect to the specific content. For example, an incoming media stream might have a first set of content (e.g., designated by a first portion of the media stream) that should be discarded (e.g., not included in further processing) and a second set of content (e.g., designated by a second portion of the media stream) that should undergo further processing.
For example, the incoming media stream can be a live stream that is to be rebroadcast. However, the incoming media stream contains advertisements or other content (e.g., explicit scenes, violent scenes, and so forth) that should be excluded from processing (e.g., not included in the rebroadcast). Thus, various aspects relate to automatic filtering of content that should be excluded from further processing. In another example, the automatic filtering identifies content (e.g., distinguished from other content by location in the media stream) that should be included in the later processing. In an implementation, an identifier, such as a watermark, is embedded in the incoming stream and is used to identify the content (e.g., distinguished by the portion of the media stream where the content exists) that should be excluded, included, and/or another action performed (e.g., the content or indicated portion(s) of the media stream is replaced with different content).
In an implementation where content to be excluded is identified, one or more portions of the incoming media stream that contain the identifier (e.g., watermark) are detected and removed from the remaining portions of the incoming media stream, which are further processed (e.g., only process content that was not tagged with the identifier). In an implementation where content to be included is identified (e.g., based on its location in the media stream or based on other distinguishing characteristics), one or more portions of the incoming media stream that contain the identifier are included in the later processing (e.g., only process content that is tagged with the identifier).
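By way of non-limiting illustration, the sketch below shows how such include/exclude filtering might be expressed in code. The Segment structure, the WATERMARK_INCLUDE/WATERMARK_EXCLUDE labels, and the filter_segments helper are hypothetical names introduced here for illustration only and are not defined by this disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical labels for the action signaled by an identifier (e.g., a watermark).
WATERMARK_INCLUDE = "include"
WATERMARK_EXCLUDE = "exclude"

@dataclass
class Segment:
    start_s: float                    # segment start within the stream, in seconds
    end_s: float                      # segment end within the stream, in seconds
    payload: bytes                    # encoded media data for the segment
    watermark: Optional[str] = None   # None when no identifier was detected

def filter_segments(segments: List[Segment], mode: str = "drop_tagged") -> List[Segment]:
    """Return the segments that survive into further processing.

    mode == "drop_tagged":      exclude segments tagged for exclusion (e.g., ads).
    mode == "keep_tagged_only": process only segments tagged for inclusion.
    """
    if mode == "drop_tagged":
        return [s for s in segments if s.watermark != WATERMARK_EXCLUDE]
    if mode == "keep_tagged_only":
        return [s for s in segments if s.watermark == WATERMARK_INCLUDE]
    raise ValueError(f"unknown mode: {mode}")
```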
In a related implementation, the media content is received in a first media stream and information related to the content to be included or excluded is received in a separate metadata stream. In an example, the information broadcast in the separate or supplemental stream can signal the location within the media stream where the content is located, whether the content should be kept, discarded, or whether another action should be performed (e.g., replace content with a different set of content).
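One plausible shape for a record carried in such a supplementary metadata stream is sketched below, assuming a JSON encoding. The field names and values are illustrative assumptions rather than a format specified by this disclosure.

```python
import json

# Hypothetical record carried in the separate metadata stream. Each record
# points at a span of the main media stream and signals the action to take.
signal = {
    "start_ms": 125000,            # where the tagged content begins
    "end_ms": 155000,              # where the tagged content ends
    "keep": False,                 # Boolean: keep or discard the span
    "action": "replace",           # optional enumerated action
    "replacement_id": "promo-42",  # content to splice in when the action is "replace"
}

metadata_line = json.dumps(signal)  # one line of the supplemental stream
```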
One non-limiting implementation relates to a system that can include a memory and a processor. The memory can store computer executable components. The processor can execute the computer executable components stored in the memory. The computer executable components can include a reception component that can receive a media stream comprising a plurality of segments. At least one segment of the plurality of segments can be associated with an identifier. The computer executable components can also include a detection component that can distinguish the at least one segment based on the identifier. Further, the computer executable components can include a processing component that can selectively process the at least one segment as a function of the identifier.
In an implementation, the identifier can include an instruction related to inclusion or exclusion of the at least one segment. Further to this implementation, the processing component can selectively process the at least one segment as a result of the instruction.
In an example, the processing component can exclude the at least one segment and can process other segments of the plurality of segments. In another example, the processing component can process the at least one segment and can ignore other segments of the plurality of segments. The at least one segment can comprise a small duration relative to an overall stream duration.
According to an implementation, the detection component can distinguish a second segment based on a second identifier. Further to this implementation, the at least one segment and the second segment can be non-contiguous segments of the media stream. The system can further include a merge component that can stitch together the at least one segment and the second segment to create a continuous segment. Further to this implementation, the processing component can process the continuous segment.
The system, according to another example, can also include a replacement component that can replace the at least one segment with a third segment. In another example, the identifier can be a watermark embedded in the media stream. In a further example, the at least one segment can be an advertisement.
According to another example, the reception component can receive a supplemental stream that can include the identifier at about the same time as the reception component receives the media stream.
Another non-limiting implementation relates to a method that can include using a processor to execute computer executable components stored in a memory. The method can include accepting an incoming media stream and detecting a presence of a watermark embedded in at least one portion of the incoming media stream. The method can also include selectively performing a function on the at least one portion of the incoming media stream based on an identification of the watermark.
In an implementation, the method can include decoding an instruction in the watermark. Selectively performing the function on the at least one portion can be determined based on the decoding.
In another implementation, selectively performing the function on the at least one portion can include excluding the at least one portion and processing other portions of the incoming media stream. In still another implementation, selectively performing the function on the at least one portion can include processing the at least one portion and ignoring other portions of the incoming media stream.
The method, according to another implementation, can include recognizing a second portion based on a second embedded watermark. Further to this implementation, the at least one portion and the second portion can be non-contiguous portions of the incoming media stream. The method can further include removing portions of the incoming media stream between the at least one portion and the second portion and merging the at least one portion and the second portion.
A further non-limiting implementation relates to a method that can include using a processor to execute computer executable components stored in a memory. The method can include accepting an incoming media stream and a supplementary stream and detecting a signal in the supplementary stream that identifies one or more portions of the incoming media stream. The method can also include selectively performing further processing on the one or more portions as a result of the detecting.
The method, according to an implementation, can include ascertaining the one or more portions are to be included in a rebroadcast. The method can also include processing the one or more portions for the rebroadcast and ignoring other portions of the incoming media stream that are not identified.
The method, according to another implementation, can include ascertaining the one or more portions are to be excluded from a rebroadcast. The method can further include ignoring the one or more portions and processing other portions of the incoming media stream that are not identified for the rebroadcast.
Still another non-limiting implementation relates to a device that can include a memory that can store computer executable components and a processor that can execute the computer executable components stored in the memory. Further, the device can include a reception component that can receive a media stream comprising a first portion comprising a first set of content and a second portion comprising a second set of content. The first portion can be associated with an identifier. The device can also include a detection component that can distinguish the first portion from the second portion based on the identifier. Further, the device can include a processing component that can selectively process the first portion as a function of the identifier.
In an implementation, the identifier can include an instruction to exclude the first portion and the processing component can exclude the first portion and can process the second portion. In another implementation, the identifier can include an instruction to include the first portion and the processing component can process the first portion and can ignore the second portion.
According to a further implementation, the detection component can distinguish a third portion comprising a third set of content based on a second identifier. The first portion and the third portion can be non-contiguous segments of the media stream. The device can further include a merge component that can stitch together the first portion and the third portion to create a continuous segment. The processing component can process the continuous segment.
Referring initially to
Various embodiments of the systems, apparatuses, and/or processes explained in this disclosure can constitute machine-executable components embodied within one or more machines, such as, for example, embodied in one or more computer readable mediums (or media) associated with one or more machines. Such component(s), when executed by the one or more machines (e.g., computer(s), computing device(s), virtual machine(s), and so on) can cause the machine(s) to perform the operations described.
System 100 can be included, at least partially, on a device 102. The device 102 can be, for example, a server, a mobile phone, a desktop computer, a tablet computer, a laptop computer, a gaming device, or another type of communication device. The device 102 can include a memory 104 that stores computer executable components and instructions. The device 102 can also include a processor 106 that executes the computer executable components stored in the memory 104. It should be noted that although one or more computer executable components may be described herein and illustrated as components separate from memory 104, in accordance with various embodiments, the one or more computer executable components could be stored in the memory 104.
In an embodiment, the device 102 includes a reception component 108 that can receive at least one media stream 110. The media can be streamed from a media source 112, which can include but is not limited to, a content server. Video streamed from the media source 112 can include video data (e.g., frames, stacks of image data, and so forth) and/or audio data. The media source 112 can employ any of a plurality of techniques for streaming video. For example, in one implementation, the media source 112 provides a first stream for video data (e.g., stacks of image data, frames, and so forth) and a second stream for audio data. Separate streams for video data and audio data can be combined (e.g., interleaved, multiplexed, and so forth), for consumption at the device 102.
The reception component 108 can facilitate processing of the media stream. For example, the reception component 108 can adapt, translate, or in some other manner, convert data provided by the media source 112 based on one or more sets of streaming criteria. In another example, the reception component 108 can be implemented as an application, or part of an application, on the device 102. For example, a reception component can be implemented as part of a browser application installed on the device 102.
In one implementation, the reception component 108 obtains, requests, or in some other manner receives a first stream for video data (e.g., video stream) associated with video, and a second stream for audio data (e.g., audio stream) associated with audio from the media source 112.
The at least one media stream 110 can comprise a plurality of segments, two of which are labeled as a first segment 114 and a second segment 116, each of which can be of a small duration (commonly expressed in terms of time) relative to an overall stream duration. According to an implementation, a segment can be an advertisement. At least one segment (e.g., the first segment 114) can be associated with an identifier 118, which can be associated with the segment before the initial broadcast occurs. For example, the identifier can be an invisible signal or an invisible pattern. In one example, the identifier can be a logo (e.g., an Olympic logo). In another example, the identifier can be a signal that has a unique property that can be detected by executing a program on the device. In an example, the identifier can be a watermark that is embedded in the media stream. However, the disclosed aspects are not limited to a watermark and another type of identifier or signal can be utilized. In another implementation, which will be described in further detail below, the identifiers are not embedded in the media stream but are streamed by the media source in a supplementary stream.
In another example, the identifier(s) can be a minor (or small) object that is placed in a scene of a video for a set number of frames. The minor object can serve as a marker. Thus, images (e.g., frames) without the minor object can be associated with a certain set of information. Thus, automatic detection of some scenes can be performed based on detection of the minor object. For example, if a short version of a video is desired, only the scenes that have the minor object are included and the other scenes are excluded from the rebroadcast.
The media source 112 may have equipment that allows for quick construction of the broadcast stream, wherein such equipment is not available at the device 102. Thus, according to an implementation, the identifiers are lightweight. The lightweight watermark technology can have certain properties, such as being robust to broadcasting over the air and to transcoding and/or resizing. Another property can be that the lightweight watermark can be discovered in an input stream within a fairly small segment of time (e.g., around two seconds of content, around one second of content, around three seconds of content, and so forth). Such quick discovery can help to ensure that the portion(s) that should be acted upon have not already been consumed before the discovery occurs (by which time it would be too late for the actions to take effect).
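A minimal sketch of such bounded-latency discovery is shown below, assuming a frame-based scan with a roughly two-second window. The detect_watermark probe is a hypothetical stand-in for an actual lightweight-watermark detector, not the disclosed implementation.

```python
from collections import deque

WINDOW_SECONDS = 2.0  # target discovery latency for the lightweight watermark

def detect_watermark(frames) -> bool:
    """Stand-in for the actual lightweight-watermark detector.

    A real detector would look for the embedded signal or pattern across the
    buffered frames; here only the integration point is illustrated.
    """
    return any(getattr(frame, "has_mark", False) for frame in frames)

def scan_stream(frame_iter, fps: int = 30):
    """Scan incoming frames and yield the approximate time a watermark is found."""
    window = deque(maxlen=int(WINDOW_SECONDS * fps))
    for index, frame in enumerate(frame_iter):
        window.append(frame)
        if len(window) == window.maxlen and detect_watermark(window):
            yield index / fps  # discovery within roughly WINDOW_SECONDS of content
            window.clear()
```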
In an example, in the case of an embedded watermark, robustness to malicious transformation might not be required of the embedded watermark. Therefore, protections against malicious interception of the media stream might not be built into the embedded watermark. Thus, the embedded watermark (or other identifier or signal) can have good temporal precision.
The device 102 also includes a detection component 120 that can be communicatively coupled to the reception component 108 (as well as other components). The detection component 120 can distinguish the first segment 114 based on the identifier 118, at about the same time as the reception component 108 is ingesting the stream. Further, according to an implementation, detection component 120 can detect at least the second segment 116, based on a second identifier 122 associated with the second segment 116.
Also included in the device 102 is a processing component 124 communicatively coupled to the reception component 108 and the detection component 120 (as well as other components). The processing component 124 can selectively process the at least one segment (e.g., first segment 114) as a function of the identifier 118. In an implementation, the processing component 124 can exclude at least one segment from the rebroadcast while processing other segments of the media stream, which are included in the rebroadcast.
In another implementation, the processing component 124 can process at least one segment (e.g., include the segment in the rebroadcast) while ignoring other segments of the media stream. For example, the other segments (or a portion thereof) might be excluded from the rebroadcast stream. In another example, the other segments (or a portion thereof) might be simply passed through and not considered by the processing component 124 (e.g., included in the rebroadcast). In another example, the processing component 124 can replace content and/or can append content to the media stream for rebroadcast.
For example, the identifiers 118, 122 can include an instruction related to inclusion or exclusion of the at least one segment (e.g., first segment 114). Based on the instruction, the processing component 124 can selectively process the at least one segment (e.g., first segment 114).
In an example, there can be content that is captured by the media source 112 (e.g., an initial broadcaster) intended for broadcast television. For example, the captured content can be a live event such as the Olympics, another type of sporting event, political debates, and so forth. The captured content can be streamed to the device 102, which can rebroadcast the content. In one example, the content can be rebroadcast through a video sharing website. In another example, the content can be rebroadcast to receivers located in a different country or different countries.
The media stream(s) can include various content that should be removed before rebroadcast or further processing, specifically included in a rebroadcast, or replaced with other content prior to rebroadcast of the media stream (or a representation of the media stream). Thus, the processing component 124 can operate as a filter to selectively exclude, include, and/or replace one or more portions or segments of the broadcast stream. To facilitate the inclusion and/or exclusion of certain portions of the media stream(s) 110, processing component 124 can include an exclusion component 202 and an inclusion component 204, both communicatively coupled to the processing component 124 and/or other system components.
For example, included in the media stream 110 can be advertisements that are not authorized to be rebroadcast over a video sharing website or over a different source. In another example, included in the media stream 110 can be objectionable content (e.g., adult scenes, scenes of extreme violence, and so forth). In the case of an audio stream, the objectionable content can be explicit language, for example. Therefore, such content (e.g., the advertisements or other events) can be excluded from the rebroadcast by exclusion component 202. To be robust to small detection errors, holes in the detection that are shorter than a small number of seconds (e.g., 5 seconds) can be covered.
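A minimal sketch of covering such detection holes is shown below, assuming detections are represented as (start, end) intervals in seconds. The function name and the 5-second threshold are illustrative assumptions only.

```python
def cover_detection_holes(intervals, max_hole_s: float = 5.0):
    """Bridge short gaps between detected (start_s, end_s) intervals.

    intervals: list of (start, end) tuples, in seconds, sorted by start time,
    marking where the identifier was detected. Gaps shorter than max_hole_s
    are treated as detection errors and covered.
    """
    if not intervals:
        return []
    covered = [intervals[0]]
    for start, end in intervals[1:]:
        prev_start, prev_end = covered[-1]
        if start - prev_end <= max_hole_s:
            covered[-1] = (prev_start, max(prev_end, end))  # bridge the hole
        else:
            covered.append((start, end))
    return covered

# A 3-second hole between detections is covered: [(10.0, 70.0)]
print(cover_detection_holes([(10.0, 40.0), (43.0, 70.0)]))
```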
In another example, an identification of the media source 112, authorship of the content, or other features (e.g., an Olympics logo) might need to be included in the rebroadcast stream by the inclusion component 204, regardless of the processing of other content. For example, if a live video of the Olympics is being streamed by the media source 112, frames or segments of the stream that include the Olympic logo might need to be included in the rebroadcast. However, frames or segments that do not include the logo can be removed prior to the rebroadcast, such as to shorten a duration of the rebroadcast to fit within an allotted time slot (e.g., removed due to time constraints). For example, the segments without the logo (or other identifier) can be included and/or excluded at the discretion of the entity that is rebroadcasting the media stream.
As illustrated, the first segment 114 and the second segment 116 are non-contiguous segments of the media stream 110. For example, a third segment 206 and a fourth segment 208 can be located between the first segment 114 and the second segment 116. Also included in the illustrated example media stream 110 are a fifth segment 210 and a sixth segment 212. Although the disclosed aspects are illustrated and described with respect to a media stream having six segments, the disclosed aspects are not limited to six segments. Instead, any number of segments can be included in a media stream.
In one example, the identifiers 118, 122 provide instructions indicating that first segment 114 and second segment 116 are to be included in a rebroadcast of the media stream 110. Due to various considerations, other segments of the media stream 110 are to be removed. For example, if the rebroadcast has a temporal constraint, which is shorter than the length of the media stream 110, a decision might be made (e.g., by an entity in control of the device) to remove one or more segments. In another example, one or more segments between the first segment 114 and the second segment 116 might include indicators that instruct removal of those segments. Therefore, processing component 124 might remove third segment 206 and/or fourth segment 208. At about the same time as the third segment 206 and the fourth segment 208 are removed, a merge component 214 (communicatively coupled to other system components) stitches together first segment 114 and second segment 116 such that there is no delay (or a very small delay) between the end of the first segment 114 and the beginning of the second segment 116 (e.g., due to the removal of intervening segments). For example, the merge component 214 can create a continuous segment by seamlessly stitching together the first segment 114 and the second segment 116.
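The re-timing performed by such a merge step might look roughly like the following sketch, which reuses the hypothetical Segment structure from the earlier example. It illustrates the stitching idea under those assumptions, not the disclosed implementation.

```python
def stitch(kept_segments):
    """Re-time kept segments so they play back-to-back with no gap.

    Assumes the hypothetical Segment structure sketched earlier; removing
    intervening segments leaves timing gaps, which are closed here so there
    is no perceptible delay between the remaining segments.
    """
    stitched = []
    cursor = 0.0
    for seg in kept_segments:
        duration = seg.end_s - seg.start_s
        stitched.append(Segment(start_s=cursor, end_s=cursor + duration,
                                payload=seg.payload, watermark=seg.watermark))
        cursor += duration
    return stitched
```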
In an implementation, the device 102 can include an output component 216 (communicatively coupled to other system components) that can rebroadcast a representation of the media stream (e.g., a modified version of the media stream) to one or more client devices 218. A first representation of the media stream, referred to as a rebroadcast stream 220, is illustrated, wherein the first segment 114 and the second segment 116 are included in the rebroadcast while other segments are removed from the rebroadcast. It should be noted that other configurations of the rebroadcast are possible with the disclosed aspects. For example, one or more other segments (or all segments) can be associated with an indicator, wherein one or more of the other segments are selectively included in the rebroadcast stream 220 by the inclusion component 204 as a function of each of the respective indicators. Although the indicators are illustrated as included in the rebroadcast, according to an aspect, the indicators can be removed prior to the rebroadcast.
In another example, the identifiers 118, 122 provide instructions that first segment 114 and second segment 116 are to be excluded from a rebroadcast of the media stream 110. In this case, reception component 108 receives the media stream 110 and detection component 120 distinguishes the first segment 114 and the second segment 116 from the other segments (e.g., third segment 206, fourth segment 208, and sixth segment 212). Processing component 124 removes the first segment 114 and the second segment 116 as a result of the instructions provided in the respective identifiers 118, 122. The merge component 214 can stitch together the third segment 206 and the fifth segment 210. Further, the merge component 214 can merge fourth segment 208 and sixth segment 212, since the second segment 116 was removed.
For example purposes, another rebroadcast stream 222 is illustrated, wherein both first segment 114 and second segment 116 are excluded from the rebroadcast. It should be noted that other configurations of the rebroadcast are possible with the disclosed aspects. For example, one or more other segments (or all segments) can be associated with an indicator, wherein one or more of the other segments are selectively excluded from the rebroadcast by the exclusion component 202 as a function of each of the respective indicators.
In yet another example, the first identifier 118 can provide instructions that the first segment 114 is to be included in the rebroadcast. Further, the second identifier 122 can provide instructions that the second segment 116 is to be excluded from the rebroadcast in this example. Based on these instructions, inclusion component 204 retains the first segment 114 and exclusion component 202 removes the second segment 116 from the rebroadcast. Merge component 214 merges third segment 206 and sixth segment 212 to mitigate the effects of the removal of the second segment 116.
In one implementation, everything in a media stream might be included in a rebroadcast except for the advertisements. The TV station (or another media source 112) can execute a program that can embed a watermark, for example, in the video before streaming the video to many receivers (one of which can be device 102). Thus, device 102 can make the slight change (e.g., removal of the ads) to the video and can rebroadcast the video. For example, detection component 120 can determine that the watermark starts at an approximate location and ends at a second approximate location, wherein processing component 124 selectively removes the content between the start and the end of the watermark.
In another implementation, the media source 112 executes a program that processes the video and encodes the featured content (e.g., a movie). Thus, content without the watermark is selectively removed by the processing component.
In a further implementation, the identifiers can be provided based on targeted information, such as certain demographics where adult content can be masked from viewing by certain individuals. Thus, portions of the video that should be removed are identified by the watermark. In another implementation, instead of removing a video portion, the video portion is included, however, an associated audio portion is muted (e.g., in the case of explicit language). Thus, any content can be filtered by the disclosed aspects. Further, the disclosed aspects can apply to a region (e.g., based on location detection or an expected location where the rebroadcast will occur), a demographic, or other criteria. In addition, there might be cases where a receiving station has the option to apply one or more alterations for the rebroadcast.
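The mute-the-audio variant mentioned above might be sketched as follows, assuming uncompressed PCM samples and hypothetical flagged intervals; a production system would operate on the actual audio representation used by the stream.

```python
def mute_audio(samples, sample_rate: int, flagged_intervals):
    """Zero out audio samples that fall inside flagged (start_s, end_s) intervals.

    samples: mutable sequence of PCM samples for the portion being kept.
    The corresponding video frames are left untouched, so the picture is
    retained while the flagged audio (e.g., explicit language) is silenced.
    """
    for start_s, end_s in flagged_intervals:
        lo = int(start_s * sample_rate)
        hi = min(int(end_s * sample_rate), len(samples))
        for i in range(lo, hi):
            samples[i] = 0
    return samples
```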
In order to identify which portions of the media stream can be (or cannot be) rebroadcast, the media source 112 can track the content to be included, excluded, replaced, and so forth, during the broadcasting. For example, the location of the content (e.g., start location and end location) can be tracked. In another example, a start time (e.g., timestamp) and an end time for the content can be documented. The tracking of the content (e.g., distinguished by location, timestamp, and so forth) can be recorded in a supplementary stream 404, which can be a separate metadata stream according to an aspect. The media source 112 can broadcast the one or more media streams 402 and, at substantially the same time, broadcast the supplementary stream 404.
In an example, the content (e.g., location, timestamp, or other distinguishing feature) can be tracked in the supplementary stream 404 utilizing a Boolean (or a different enumerated type), which can signal whether the content should be kept or discarded. For example, a custom field can be included, which can be an enumeration of potential actions. In one example, each watermark type can have different characteristics and different actions to be taken based upon watermark detection. In one example, there can be five types of watermarks that are pre-embedded in (or known by) the device 102. For example, in a header of a video there can be a “watermark type”, where “0” means no watermark, “1” indicates a first action should be taken (e.g., inclusion), “2” indicates a second action should be taken (e.g., exclusion), “3” indicates a third action should be taken (e.g., replacement), and so forth.
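The enumerated watermark-type dispatch described in this example might be sketched as follows. The enum values mirror the 0/1/2/3 mapping above, while the class and function names are assumptions made here for illustration.

```python
from enum import IntEnum

class WatermarkType(IntEnum):
    NONE = 0     # no watermark present
    INCLUDE = 1  # keep this segment in the rebroadcast
    EXCLUDE = 2  # drop this segment from the rebroadcast
    REPLACE = 3  # swap this segment for different content

def act_on_segment(segment, watermark_type: int, replacement=None):
    """Dispatch on the pre-agreed watermark type carried in the video header."""
    wt = WatermarkType(watermark_type)
    if wt is WatermarkType.EXCLUDE:
        return None            # caller stitches the neighboring segments together
    if wt is WatermarkType.REPLACE:
        return replacement     # e.g., a locally substituted advertisement
    return segment             # NONE and INCLUDE both pass through
```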
The reception component 108 can receive the one or more media streams 402 and the supplemental stream(s) 404 and detection component 120 can associate the signals in the supplemental stream(s) 404 with one or more portions of the media stream(s) 402. As a function of the signals, the exclusion component 202 can selectively remove at least one portion and, if necessary, merge component 214 can stitch together other portions of the media stream.
Additionally or alternatively, the inclusion component 204 can selectively include at least one portion in a rebroadcast media stream 406, which can be received by one or more client devices 218. In another example, replacement component 302 can selectively replace various portions of the media stream 402 with different content.
Method 500 starts, at 502, when an incoming media stream (or multiple incoming media streams) is accepted (e.g., using a reception component). For example, an incoming media stream can be a video, which can be a video of a live event or can include prerecorded content. In another example, a first incoming media stream can include video content and a second incoming media stream can include audio content.
At 504, presence of an indicator is detected (e.g., using a detection component). For example, the indicator can be a watermark embedded in at least one portion of the incoming media stream. The embedded watermark can be a lightweight watermark that does not attempt to prevent interception of the media stream (e.g., has good temporal precision). This can allow the media stream to be received and processed by various devices that might not have the processing capabilities to process a heavy watermark. However, according to other aspects, the watermark is one that does attempt to prevent interception of the media stream. A lightweight watermark can have little, if any, negative effects on the quality of an image and/or the audio.
A function is selectively performed on the at least one portion of the incoming media stream, at 506 (e.g., using a processing component). The function performed can be based, in part, on the detection of the indicator and interpretation of instructions or other information included in the indicator or represented by the indicator.
In an example, a stream can include television (TV) guide information. The TV guide stream can be parsed to distinguish an advertisement from the other content. An identifier can indicate that the advertisement should be excluded and that the feature program should be included, for example.
For example, the tag can provide an indication that the at least one portion should not be included in the rebroadcast. Thus, at 608, the at least one portion is excluded from the rebroadcast (e.g., using an exclusion component). At 610, the other portions of the incoming media stream are processed (e.g., using an inclusion component or a processing component). The other portions that are processed can be output, at 612 (e.g., using an output component) as a rebroadcast of at least a portion of the incoming media stream (e.g., in this example the rebroadcast includes the other portions).
In another example, the tag can provide an indication that the at least one portion should be included in the rebroadcast. Thus, at 614, the at least one portion is processed (e.g., using an inclusion component or a processing component). At 616, the other portions (or at least a subset thereof) of the incoming media stream are ignored (e.g., using an exclusion component). One or more media streams that include at least the one portion are rebroadcast at 612 (e.g., using an output component).
A function is selectively performed on at least one portion of the incoming media stream based on the identifier, at 706 (e.g., using a processing component). For example, the identifier might indicate that the portion should be excluded from a rebroadcast (e.g., using an exclusion component). In another example, the identifier might indicate that the portion should be specifically (or exclusively) included in the rebroadcast (e.g., using an inclusion component).
In an implementation, at 708, a second portion of the incoming media stream is recognized based on a second identifier (e.g., using a detection component). In an aspect, the one portion of the media stream and the second portion can be contiguous portions. However, according to some implementations, the portions are non-contiguous portions. Further to this implementation, method 700 can continue, at 710, where portions of the incoming media stream between the one portion and the second portion are removed (e.g., using an exclusion component). The one portion and the second portion can be stitched together, at 712 (e.g., using a merge component). The stitching allows the second portion to be perceived after the one portion has been perceived, such that there is no (or very little) perceptible delay between the two portions.
According to an implementation, the further processing (at 806) includes ascertaining, at 808, that the one or more portions are to be included in a rebroadcast (e.g., using a detection component). For example, the signal can include instructions that specifically indicate that the one or more portions are to be included in the rebroadcast. At 810, the one or more portions are processed for the rebroadcast (e.g., using an inclusion component). Other portions of the incoming media stream, which are not identified, are ignored, at 812 (e.g., using an exclusion component). For example, the portions that are not identified can be removed from the media stream for the rebroadcast. The one or more identified portions can be merged such that there is no delay, or only a slight delay, between the portions (e.g., where other portions have been removed).
In another implementation, the further processing (at 806) can include ascertaining, at 814, that the one or more portions are to be excluded from a rebroadcast (e.g., using a detection component). For example, the signal can include instructions that specifically indicate that the one or more portions are to be excluded from the rebroadcast. At 816, the one or more portions are ignored (e.g., using an exclusion component). In an example, ignoring the one or more portions can include removing the one or more portions from the media stream. The other portions of the incoming media stream that are not identified are processed, at 818, (e.g., using an inclusion component) for the rebroadcast.
With reference to
The system bus 908 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI).
The system memory 906 includes volatile memory 910 and non-volatile memory 912. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 902, such as during start-up, is stored in non-volatile memory 912. In addition, according to an embodiment, codec 905 may include at least one of an encoder or decoder, wherein the at least one of an encoder or decoder may consist of hardware, a combination of hardware and software, or software. Although codec 905 is depicted as a separate component, codec 905 may be contained within non-volatile memory 912. By way of illustration, and not limitation, non-volatile memory 912 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory 910 includes random access memory (RAM), which acts as external cache memory. According to various embodiments, the volatile memory may store write operation retry logic (not shown).
Computer 902 may also include removable/non-removable, volatile/non-volatile computer storage media.
It is to be appreciated that
A user enters commands or information into the computer 902 through input device(s) 928 (e.g., a user interface). Input devices 928 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 904 through the system bus 908 via interface port(s) 930. Interface port(s) 930 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 936 use some of the same type of ports as input device(s) 928. Thus, for example, a USB port may be used to provide input to computer 902, and to output information from computer 902 to an output device 936. Output adapter 934 is provided to illustrate that there are some output devices 936 such as monitors, speakers, and printers, among other output devices 936, which require special adapters. The output adapters 934 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 936 and the system bus 908. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 938.
Computer 902 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 938 (e.g., a family of devices). The remote computer(s) 938 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, a smart phone, a tablet, or other network node, and can include many of the elements described relative to computer 902. For purposes of brevity, only a memory storage device 940 is illustrated with remote computer(s) 938. Remote computer(s) 938 is logically connected to computer 902 through a network interface 942 and then connected via communication connection(s) 944. Network interface 942 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN) and cellular networks. LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks such as Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
Communication connection(s) 944 refers to the hardware/software employed to connect the network interface 942 to the system bus 908. While communication connection 944 is shown for illustrative clarity inside computer 902, it can also be external to computer 902. The hardware/software necessary for connection to the network interface 942 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and wired and wireless Ethernet cards, hubs, and routers.
Referring now to
Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1002 include or are operatively connected to one or more client data store(s) 1008 that can be employed to store information local to the client(s) 1002 (e.g., associated contextual information). Similarly, the server(s) 1004 operatively include or are operatively connected to one or more server data store(s) 1010 that can be employed to store information local to the servers 1004.
The illustrated aspects of the disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
Moreover, it is to be appreciated that various components described in this description can include electrical circuit(s) that can include components and circuitry elements of suitable value in order to implement the embodiments of the subject disclosure. Furthermore, it can be appreciated that many of the various components can be implemented on one or more integrated circuit (IC) chips. For example, in one embodiment, a set of components can be implemented in a single IC chip. In other embodiments, one or more of respective components are fabricated or implemented on separate IC chips.
What has been described above includes examples of various embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the one or more aspects, but it is to be appreciated that many further combinations and permutations of the various aspects are possible. Accordingly, the subject disclosure is intended to embrace all such alterations, modifications, and variations. Moreover, the above description of illustrated embodiments of the subject disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described in this disclosure for illustrative purposes, various modifications are possible that are considered within the scope of such embodiments and examples, as those skilled in the relevant art can recognize.
In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the disclosed illustrated exemplary aspects of the disclosed subject matter. In this regard, it will also be recognized that the aspects include a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
The aforementioned systems/circuits/modules have been described with respect to interaction between several components/blocks. It can be appreciated that such systems/circuits and components/blocks can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described in this disclosure may also interact with one or more other components not specifically described in this disclosure but known by those of skill in the art. Although the components described herein are primarily described in connection with performing respective acts or functionalities, it is to be understood that in a non-active state these components can be configured to perform such acts or functionalities.
In addition, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
As used in this application, the terms “component”, “module”, “system”, or the like are generally intended to refer to a computer-related entity, either hardware (e.g., a circuit), a combination of hardware and software, software, or an entity related to an operational machine with one or more specific functionalities. For example, a component may be, but is not limited to being, a process running on a processor (e.g., digital signal processor), a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Further, a “device” can come in the form of specially designed hardware; generalized hardware made specialized by the execution of software thereon that enables the hardware to perform specific functions; software stored on a computer readable storage medium; software transmitted on a computer readable transmission medium; or a combination thereof.
Moreover, the words “example” or “exemplary” are used in this disclosure to mean serving as an example, instance, or illustration. Any aspect or design described in this disclosure as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
Reference throughout this specification to “one implementation,” or “an implementation,” or “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the implementation or embodiment is included in at least one implementation or one embodiment. Thus, the appearances of the phrase “in one implementation,” or “in an implementation,” or “in one embodiment,” or “in an embodiment” in various places throughout this specification can, but are not necessarily, referring to the same implementation or embodiment, depending on the circumstances. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations or embodiments.
Computing devices typically include a variety of media, which can include computer-readable storage media and/or communications media, in which these two terms are used in this description differently from one another as follows. Computer-readable storage media can be any available storage media that can be accessed by the computer, is typically of a non-transitory nature, and can include both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data. Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information. Computer-readable storage media can be accessed by one or more local or remote computing devices, for example, via access requests, queries, or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium.
On the other hand, communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal that can be transitory such as a modulated data signal, for example, a carrier wave or other transport mechanism, and includes any information delivery or transport media. The term “modulated data signal” or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
In addition, while a particular feature of the disclosed aspects may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
Claims
1. A system, comprising:
- a processor; and
- a memory communicatively coupled to the processor, the memory having stored therein computer-executable instructions, comprising:
- a reception component configured to:
- receive a media stream comprising a plurality of temporal segments, at least one of the temporal segments of the media stream including logo information; and
- receive a supplemental stream, at substantially the same time as receiving the media stream, comprising at least one logo indicator, wherein each logo indicator identifies a temporal segment of the plurality of temporal segments and specifies an action to take on the temporal segment; and
- a processing component that selectively processes respective temporal segments in a rebroadcast of the media stream as a function of the at least one logo indicator.
2. The system of claim 1, wherein the action comprises at least one of include the temporal segment in a rebroadcast of the media stream or exclude the temporal segment from the rebroadcast of the media stream and merge a temporal segment immediately preceding the temporal segment with another temporal segment immediately following the temporal segment.
3. The system of claim 2, wherein the processing component includes the temporal segment in the rebroadcast of the media stream in response to the indicator specifying to include the temporal segment in the rebroadcast of the media stream.
4. The system of claim 2, wherein the processing component excludes the temporal segment from the rebroadcast of the media stream and merges the temporal segment immediately preceding the temporal segment with the other temporal segment immediately following the temporal segment in response to the indicator specifying to exclude the temporal segment from the rebroadcast of the media stream and merge the temporal segment immediately preceding the temporal segment with the other temporal segment immediately following the temporal segment.
5. The system of claim 1, wherein the action comprises at least one of include the temporal segment in the rebroadcast of the media stream, exclude the temporal segment from the rebroadcast of the media stream and merge a temporal segment immediately preceding the temporal segment with another temporal segment immediately following the temporal segment, or replace the temporal segment with a different temporal segment.
6. The system of claim 1, wherein the temporal segment comprises explicit language.
7. The system of claim 5, wherein the processing component further comprises a replacement component that replaces the temporal segment with the different temporal segment in response to the indicator specifying to replace the temporal segment with the different temporal segment.
8. The system of claim 1, wherein the temporal segment is an advertisement.
9. The system of claim 1, wherein the temporal segment comprises adult content.
10. A method, comprising:
- receiving a media stream comprising a plurality of temporal segments, at least one of the temporal segments including logo information;
- receiving a supplemental stream, concurrent to receiving the media stream, comprising at least one logo indicator, wherein each logo indicator identifies a temporal segment of the plurality of temporal segments and specifies an action to take on the temporal segment; and
- selectively performing actions on respective temporal segments for a rebroadcast of the media stream based on the at least one logo indicator.
11. The method of claim 10, wherein the action comprises at least one of include the temporal segment in a rebroadcast of the media stream or exclude the temporal segment from the rebroadcast of the media stream and merge a temporal segment immediately preceding the temporal segment with another temporal segment immediately following the temporal segment.
12. The method of claim 11, wherein the selectively performing the action on the temporal segment comprises including the temporal segment in the rebroadcast of the media stream in response to the indicator specifying to include the temporal segment in the rebroadcast of the media.
13. The method of claim 11, wherein the selectively performing the action on the temporal segment comprises excluding the temporal segment from the rebroadcast of the media stream and merging a temporal segment immediately preceding the temporal segment with the other temporal segment immediately following the temporal segment in response to the indicator specifying to exclude the temporal segment from the rebroadcast of the media stream and merge the temporal segment immediately preceding the temporal segment with the other temporal segment immediately following the temporal segment.
14. The method of claim 10, wherein the action comprises at least one of include the temporal segment in the rebroadcast of the media stream, exclude the temporal segment from the rebroadcast of the media stream and merge a temporal segment immediately preceding the temporal segment with another temporal segment immediately following the temporal segment, or replace the temporal segment with a different temporal segment.
15. A non-transitory computer-readable medium having instructions stored thereon that, in response to execution, cause a system including a processor to perform operations comprising:
- receiving a media stream comprising a plurality of temporal segments, at least one of the temporal segments of the media stream including logo information;
- receiving a supplementary stream, substantially simultaneous to receiving the media stream, comprising at least one logo indicator, wherein each logo indicator identifies a temporal segment of the plurality of temporal segments and specifies an action to take on the temporal segment; and
- selectively performing actions on respective temporal segments for a rebroadcast of the media stream based upon the at least one logo indicator.
16. The non-transitory computer-readable medium of claim 15, wherein the action comprises at least one of include the temporal segment in a rebroadcast of the media stream or exclude the temporal segment from the rebroadcast of the media stream and merge a temporal segment immediately preceding the temporal segment with another temporal segment immediately following the temporal segment.
17. The non-transitory computer-readable medium of claim 16, wherein the selectively performing the action on the temporal segment comprises including the temporal segment in the rebroadcast of the media stream in response to the indicator specifying to include the temporal segment in the rebroadcast of the media.
18. A system, comprising:
- a processor; and
- a memory coupled to the processor, the memory having stored therein computer-executable instructions comprising:
- means for receiving a media stream comprising a plurality of temporal segments, at least one of the temporal segments including logo information;
- means for receiving a supplemental stream, at substantially the same time as receiving the media stream, comprising at least one logo indicator, wherein each logo indicator identifies a temporal segment of the plurality of temporal segments and specifies an action to take on the temporal segment; and
- means for selectively performing actions on respective temporal segments for a rebroadcast of the media stream as a function of the logo indicator.
19. The system of claim 18, wherein the action comprises at least one of include the temporal segment in a rebroadcast of the media stream or exclude the temporal segment from the rebroadcast of the media stream and merge a temporal segment immediately preceding the temporal segment with another temporal segment immediately following the temporal segment.
20. The system of claim 19, wherein the means for selectively performing the actions comprises means for including the temporal segment in the rebroadcast of the media stream in response to the indicator specifying to include the temporal segment in the rebroadcast of the media.
21. The system of claim 19, wherein the means for selectively performing the actions comprises means for excluding the temporal segment from the rebroadcast of the media stream and merging the temporal segment immediately preceding the temporal segment with the other temporal segment immediately following the temporal segment in response to the indicator specifying to exclude the temporal segment from the rebroadcast of the media stream and merge the temporal segment immediately preceding the temporal segment with the other temporal segment immediately following the temporal segment.
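For readers parsing the claim language, the following is a minimal, non-normative sketch of the arrangement recited in claims 1, 2, and 5 above: a media stream of temporal segments, a supplemental stream of logo indicators, and a processing step that assembles the rebroadcast by including, excluding (and merging the neighboring segments), or replacing the identified segments. All identifiers in the sketch are assumptions introduced for illustration; the claims do not prescribe any particular data structures or programming interface.

```python
# Illustrative sketch only. Class and function names (TemporalSegment,
# LogoIndicator, Action, build_rebroadcast) are assumptions made for this
# example and do not appear in the specification or claims.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Iterable, List, Optional

class Action(Enum):
    INCLUDE = auto()            # include the segment in the rebroadcast
    EXCLUDE_AND_MERGE = auto()  # exclude the segment; its neighbors become adjacent
    REPLACE = auto()            # replace the segment with different content

@dataclass
class TemporalSegment:
    segment_id: str
    content: bytes
    has_logo: bool = False      # at least one segment carries logo information

@dataclass
class LogoIndicator:
    segment_id: str                                # identifies one temporal segment
    action: Action                                 # action to take on that segment
    replacement: Optional[TemporalSegment] = None  # only used when action is REPLACE

def build_rebroadcast(media_stream: Iterable[TemporalSegment],
                      indicators: Iterable[LogoIndicator]) -> List[TemporalSegment]:
    """Selectively process temporal segments as a function of the logo indicators."""
    by_segment = {ind.segment_id: ind for ind in indicators}
    output: List[TemporalSegment] = []
    for segment in media_stream:
        ind = by_segment.get(segment.segment_id)
        if ind is None or ind.action is Action.INCLUDE:
            output.append(segment)            # keep the segment
        elif ind.action is Action.EXCLUDE_AND_MERGE:
            continue                          # skip it; the preceding and following segments now abut
        elif ind.action is Action.REPLACE and ind.replacement is not None:
            output.append(ind.replacement)    # substitute different content
    return output
```

A concrete pipeline would receive the two streams substantially simultaneously and would splice the preceding and following segments at the container level when a segment is excluded; the list handling above is only intended to make the claimed include, exclude-and-merge, and replace actions concrete.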
Patent Citations:
- 6182218 | January 30, 2001 | Saito
- 8086491 | December 27, 2011 | Matz et al.
- 8144923 | March 27, 2012 | Zhao et al.
- 20060136980 | June 22, 2006 | Fulcher et al.
- 20070250901 | October 25, 2007 | McIntire et al.
- 20080155590 | June 26, 2008 | Soukup et al.
- 20100169503 | July 1, 2010 | Kollmansberger et al.
- 20110115977 | May 19, 2011 | Simpson et al.
- 20110200300 | August 18, 2011 | Barton et al.
- 20110231565 | September 22, 2011 | Gelter et al.
- 20110262103 | October 27, 2011 | Ramachandran et al.
- 20120042247 | February 16, 2012 | Harper et al.
- 20120116883 | May 10, 2012 | Asam et al.
- 20120117221 | May 10, 2012 | Katpelly et al.
- 20120131219 | May 24, 2012 | Brannon, Jr.
- 20130208187 | August 15, 2013 | Bhogal et al.
- 20140006635 | January 2, 2014 | Braness et al.
Other Publications:
- “Audio watermark detection,” Wikipedia, http://en.wikipedia.org/wiki/Audio_watermark_detection, Last accessed Feb. 22, 2012, 2 pages.
- “Digital watermarking,” Wikipedia, http://en.wikipedia.org/wiki/Digital_watermarking, Last accessed Feb. 22, 2012, 6 pages.
Type: Grant
Filed: Aug 15, 2012
Date of Patent: Feb 23, 2016
Assignee: Google Inc. (Mountain View, CA)
Inventors: Gheorghe Postelnicu (Zurich), Sai Suman Cherukuwada (Adliswil)
Primary Examiner: Brian J Gillis
Assistant Examiner: Steve Lin
Application Number: 13/585,966
International Classification: H04L 29/06 (20060101);