Including Teletext Decoder Or Display Patents (Class 348/468)
  • Patent number: 11551013
    Abstract: Technologies are provided for automated quality assessment of translations. In some embodiments, quality of a translation can be assessed by generating a machine-learning (ML) model that classifies the translation as pertaining to one of three quality categories. A first quality category can include, for example, translations that are deemed satisfactory. A second quality category can include, for example, translations that are deemed subject to editing prior to being deemed satisfactory. A third quality category can include, for example, translations that are deemed unsatisfactory. The generated ML model can then be applied to the translation and a corresponding sentence in a source language in order to classify the translation as pertaining to one of the three categories.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: January 10, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Prabhakar Gupta, Anil Kumar Nelakanti
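The three-category flow in the abstract above can be sketched as follows. This is a minimal illustration only: the patent describes a trained ML model, whereas here a hypothetical length-ratio heuristic stands in for the model so the control flow stays visible; the category names and thresholds are assumptions, not from the patent.

```python
CATEGORIES = ("satisfactory", "needs_editing", "unsatisfactory")

def classify_translation(source: str, translation: str) -> str:
    """Map a (source sentence, translation) pair to one of three
    quality categories, mimicking the classifier's output space."""
    if not translation.strip():
        return "unsatisfactory"
    # Hypothetical feature: ratio of translation length to source length.
    # A real system would feed learned features into a trained model here.
    ratio = len(translation.split()) / max(len(source.split()), 1)
    if 0.5 <= ratio <= 2.0:
        return "satisfactory"
    if 0.25 <= ratio <= 4.0:
        return "needs_editing"
    return "unsatisfactory"

print(classify_translation("the cat sat on the mat",
                           "le chat est assis sur le tapis"))
```

The point of the sketch is the three-way output contract: downstream tooling can route "needs_editing" translations to human post-editors while passing "satisfactory" ones through untouched.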
  • Patent number: 11551722
    Abstract: Systems and processes are provided for interactive reassignment of character names in an audio video program including a tuner configured for receiving and demodulating a video signal to extract the audio video program, a user input operative to receive a user request to substitute an original character name within the audio video program with an alternative character name, a memory configured to buffer the audio video program to generate a delayed audio video program, a processor configured to detect the original character name within the audio video program and to replace the original character name with the alternative character name within the delayed audio video program to generate a modified audio video program, and a loudspeaker configured to reproduce the alternative character name in response to the modified audio video program.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 10, 2023
    Inventor: Sriram Prakash
  • Patent number: 11431942
    Abstract: The disclosed method includes accessing video content encoded at a specified frame rate, and determining a refresh rate for an electronic display on which the video content is to be presented. The method next includes specifying a time interval for the video content over which frame rate conversion is to be applied to synchronize the video content frame rate with the electronic display refresh rate. The method also includes presenting the video content on the electronic display where the playback speed is adjusted for a first part of the interval. At this adjusted speed, the interval is played back using original video frames and multiple frame duplications. The presenting also adjusts playback speed of a second part of the interval. At the adjusted speed, the interval is played back using the original frames and a different number of frame duplications. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: August 30, 2022
    Assignee: Netflix, Inc.
    Inventors: Weiguo Zheng, Rex Yik Chun Ching
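The frame-duplication idea in the abstract above — playing original frames with varying numbers of duplications to match a display refresh rate — can be illustrated with a cadence calculation. This is a generic sketch of rate conversion by frame repetition (the classic pulldown pattern), not the patent's specific interval-splitting method.

```python
def duplication_pattern(content_fps: int, display_hz: int, frames: int):
    """Return how many display refreshes each source frame occupies
    when showing content_fps video on a display_hz screen."""
    pattern, shown = [], 0
    for i in range(1, frames + 1):
        # Cumulative target keeps long-run timing exact even though
        # individual frames get unequal numbers of duplications.
        target = (i * display_hz) // content_fps
        pattern.append(target - shown)
        shown = target
    return pattern

# 24 fps on a 60 Hz display: frames alternate 2 and 3 refreshes,
# averaging the required 2.5 refreshes per frame.
print(duplication_pattern(24, 60, 4))  # → [2, 3, 2, 3]
```

Over any whole second the pattern sums to exactly the display rate, which is why alternating duplication counts can synchronize the two rates without dropping content frames.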
  • Patent number: 11418849
    Abstract: Systems and methods are described herein for inserting emoticons within a media asset based on an audio portion of the media asset. Each audio portion of a media asset is associated with a respective part of speech, and an emotion corresponding to the audio portion for the media asset is determined. A corresponding emoticon is identified based on the determined emotion in the audio portion and presented in the subtitles.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: August 16, 2022
    Assignee: Rovi Guides, Inc.
    Inventors: Ankur Anil Aher, Charishma Chundi
  • Patent number: 11412291
    Abstract: Disclosed is an electronic device capable of acquiring a second signal obtained by converting a time characteristic of a first signal received through a microphone based on a value defined corresponding to a voice, acquiring information on a surrounding environment based on a frequency characteristic of the acquired second signal, and adjusting an audio characteristic of content based on the acquired information on the surrounding environment.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: August 9, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaemyung Hur, Kyoungshin Jin
  • Patent number: 11367282
    Abstract: A subtitle extraction method includes decoding a video to obtain video frames; performing adjacency operation in a subtitle arrangement direction on pixels in the video frames to obtain adjacency regions in the video frames; and determining certain video frames including a same subtitle based on the adjacency regions, and subtitle regions in the certain video frames including the same subtitle based on distribution positions of the adjacency regions in the video frames including the same subtitle. The method also includes constructing a component tree for at least two channels of the subtitle regions and using the constructed component tree to extract a contrasting extremal region corresponding to each channel; performing color enhancement processing on the contrasting extremal regions of the at least two channels to form a color-enhanced contrasting extremal region; and extracting the subtitle by merging the color-enhanced contrasting extremal regions of at least two channels.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: June 21, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Xingxing Wang
  • Patent number: 11356721
    Abstract: Content is automatically removed from closed-caption data embedded in a video signal. A closed-caption compliance system decodes a first portion of a first video frame that includes a closed-caption data packet. The system extracts a first character string that includes at least a portion of the closed-caption data packet. The system determines whether the first character string is suitable for distribution to viewers. If the first character string is suitable for distribution to viewers, then the system encodes at least a first portion of the first character string as a second closed-caption data packet to include in a second video frame. Otherwise, the system modifies the first character string to generate a second character string that is suitable for distribution to viewers; and encodes at least a first portion of the second character string as a third closed-caption data packet to include in the second video frame.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: June 7, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael Strein, William McLaughlin
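The decode/check/re-encode pipeline in the abstract above can be sketched in a few lines. The blocklist, the suitability test, and the replacement token are all hypothetical stand-ins; the patent does not specify how suitability is determined.

```python
BLOCKLIST = {"badword"}  # hypothetical set of unsuitable terms

def is_suitable(text: str) -> bool:
    """Decide whether an extracted caption string may be distributed."""
    return not any(word.lower() in BLOCKLIST for word in text.split())

def sanitize(text: str) -> str:
    """Generate a second character string suitable for distribution."""
    return " ".join("***" if w.lower() in BLOCKLIST else w
                    for w in text.split())

def process_caption(text: str) -> str:
    """Return the string to re-encode into the outgoing video frame:
    the original if suitable, otherwise the modified version."""
    return text if is_suitable(text) else sanitize(text)

print(process_caption("hello badword world"))  # → hello *** world
```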
  • Patent number: 11347792
    Abstract: A video abstract generation method is provided. The method includes obtaining a target searching condition; searching a video database for structured image data meeting the target searching condition, the structured image data being stored in the video database in a structured data format; and performing video synthesis on the structured image data meeting the target searching condition, to generate a video abstract.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: May 31, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hao Zhang, Xiang Qi Huang, Yong Jun Chen
  • Patent number: 11336902
    Abstract: The disclosed computer-implemented method may include receiving, from a client device, a video and data about at least one specialized construct applied to the video. The method may also include detecting, based on the data about the specialized construct, a region of interest to apply the specialized construct to the video. Additionally, the method may include reapplying the specialized construct to the video at the region of interest. Furthermore, the method may include encoding the video by prioritizing bit rate allocation for the region of interest containing the specialized construct. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: May 17, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Shankar Lakshmi Regunathan, Haixiong Wang
  • Patent number: 11172266
    Abstract: Embodiments are directed towards the analysis of the audiovisual content to adjust the timing, duration, or positioning of closed captioning so that the closed captioning more closely aligns with the scene being presented. Content that includes video, audio, and closed captioning is obtained, and the audio is converted to text. A duration and timing for the closed captioning is determined based on a comparison between the closed captioning and the audio text. Scene context is determined for the content based on analysis of the video and the audio, such as by employing trained artificial neural networks. A display position of the closed captioning is determined based on the scene context. The duration and timing of the closed captioning are modified based on the scene context. The video and closed captioning are provided to a content receiver for presentation to a user based on the display position, duration, and timing.

    Type: Grant
    Filed: November 4, 2019
    Date of Patent: November 9, 2021
    Assignee: SLING MEDIA, L.L.C.
    Inventor: Mudit Mathur
  • Patent number: 11151193
    Abstract: An apparatus, method, system and computer-readable medium are provided for generating one or more descriptors that may potentially be associated with content, such as video or a segment of video. In some embodiments, a teaser for the content may be identified based on contextual similarity between words and/or phrases in the segment and one or more other segments, such as a previous segment. In some embodiments, an optical character recognition (OCR) technique may be applied to the content, such as banners or graphics associated with the content in order to generate or identify OCR'd text or characters. The text/characters may serve as a candidate descriptor(s). In some embodiments, one or more strings of characters or words may be compared with (pre-assigned) tags associated with the content, and if it is determined that the one or more strings or words match the tags within a threshold, the one or more strings or words may serve as a candidate descriptor(s).
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: October 19, 2021
    Assignee: Comcast Cable Communications, LLC
    Inventors: Geetu Ambwani, Robert Rubinoff
  • Patent number: 11134317
    Abstract: Systems and devices for live captioning of events are disclosed. The system may receive event calendar data and a first plurality of caption files and preselect a first caption file based on the event calendar data. The system may then access an audiovisual recorder of a user device, and receive a first feedback from the recorder. The system may then determine whether the first caption file matches the first feedback. When there is a match, the system may determine a first synchronization between the caption file and the feedback. When there is no match, the system may determine if there is a match with a second caption file of the first plurality of caption files and determine a second synchronization. When the second caption file does not match, the system may receive at least a third caption file over a mobile network and determine a third synchronization for display.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 28, 2021
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Galen Rafferty, Austin Walters, Jeremy Edward Goodsitt, Vincent Pham, Mark Watson, Reza Farivar, Anh Truong
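The fallback chain above (preselected file, then the rest of the first batch, then a file fetched over the network) can be sketched directly. The matching test here is a hypothetical token-overlap check against the recorded audio feedback; the patent does not specify the matching criterion.

```python
def matches(caption_text: str, feedback_text: str) -> bool:
    """Hypothetical match test: enough caption words appear in the
    text recovered from the device's audiovisual feedback."""
    caption_words = set(caption_text.lower().split())
    feedback_words = set(feedback_text.lower().split())
    if not caption_words or not feedback_words:
        return False
    return len(caption_words & feedback_words) / len(caption_words) >= 0.5

def select_caption(preselected, others, fetch_remote, feedback):
    """Try the preselected caption file, then the rest of the first
    plurality; fall back to fetching over the mobile network."""
    for candidate in [preselected, *others]:
        if matches(candidate, feedback):
            return candidate
    return fetch_remote()  # at least a third caption file, fetched remotely

print(select_caption("opening remarks", ["budget review"],
                     lambda: "remote captions", "opening remarks today"))
```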
  • Patent number: 11100096
    Abstract: A method includes receiving, at an analysis server from a user device, a keyword associated with content of interest. The method includes retrieving, at the analysis server from a database, searchable tag data for first searchable tags in the database. Each searchable tag of the searchable tags corresponds to a segment of stored media content. The stored media content is associated with the user device. The first searchable tags pertain to the keyword. The searchable tag data includes an initial relevancy score and a corresponding aging factor for each first searchable tag of the first searchable tags. The method also includes generating, at the analysis server, a list of media content segments based on the searchable tag data and sending the list from the analysis server to the user device. The list is ordered based on the initial relevancy scores modified by the corresponding aging factors.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: August 24, 2021
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Stephen A. Rys, Dale W. Malik, Nadia Morris
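The ordering rule in the abstract above — initial relevancy scores modified by corresponding aging factors — reduces to a weighted sort. The record layout and field names below are illustrative assumptions, not from the patent.

```python
def rank_segments(tags):
    """Order media content segments by initial relevancy score
    multiplied by the tag's aging factor, highest first."""
    return [t["segment"]
            for t in sorted(tags,
                            key=lambda t: t["score"] * t["aging"],
                            reverse=True)]

tags = [
    {"segment": "clip_a", "score": 0.9, "aging": 0.5},   # effective 0.45
    {"segment": "clip_b", "score": 0.6, "aging": 1.0},   # effective 0.60
    {"segment": "clip_c", "score": 0.8, "aging": 0.7},   # effective 0.56
]
print(rank_segments(tags))  # → ['clip_b', 'clip_c', 'clip_a']
```

Note how the aging factor lets a recent, moderately relevant segment (clip_b) outrank an older segment with a higher initial score (clip_a).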
  • Patent number: 11019119
    Abstract: Techniques for a web-based live broadcast in a network community are described herein. The disclosed techniques include a plurality of hosts each configured to capture content data using a HTML5 browser and transmit the content data via the HTML5 browser to a gateway server based on a WebRTC protocol; a gateway server configured to receive the content data from each of the plurality of hosts, convert the content data into streaming media data in a predetermined format, and transmit the streaming media data to a content distributor based on RTMP; and a plurality of clients each configured to receive the streaming media data from the content distributor based on HTTP, convert the predetermined format of the streaming media data into a format corresponding to each client and displayable on a corresponding client, and display the streaming media data.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: May 25, 2021
    Assignee: Shanghai Bilibili Technology Co., Ltd.
    Inventor: Jun Jiang
  • Patent number: 11010028
    Abstract: A method for applying an always-on interface includes: displaying a first always-on interface, wherein the first always-on interface is an interface displayed when a terminal is in the always-on display state, and the first always-on interface includes a target control which is used to call out a target function item; receiving a first selection operation on the target control; and displaying an intermediate state interface according to the first selection operation, and the intermediate state interface is an interface displayed when the terminal is in the half always-on display state.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: May 18, 2021
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventor: Haixu Lu
  • Patent number: 10999643
    Abstract: Methods and display devices are provided for switching subtitles that are displayed on a screen. Switching subtitles includes storing, from a first cache and into a second cache, a second subtitle(s) synchronized with a first subtitle, where the first cache stores multi-language subtitles obtained by decoding a video file. The first subtitle is displayed in synchronization with video data in the video file, while the first subtitle and each of the second subtitle(s) have a same start time and a same end time, even while corresponding to different languages. In response to receiving a subtitle switching instruction that includes information associated with a first target language while the first subtitle is being displayed, one of the second subtitle(s) corresponding to the first target language from the second cache is read as a third subtitle, and the third subtitle is displayed.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: May 4, 2021
    Assignee: HISENSE VISUAL TECHNOLOGY CO., LTD.
    Inventors: Yungang Wang, Yongbang Wei
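The two-cache switch described above hinges on one invariant: every language variant of the current cue shares the same start and end time, so switching languages swaps only the displayed text. The cache layout below is an assumption made for illustration.

```python
second_cache = {
    # Second subtitles synchronized with the currently displayed first
    # subtitle: same start time, same end time, different languages.
    "fr": {"start": 10.0, "end": 12.5, "text": "Bonjour"},
    "de": {"start": 10.0, "end": 12.5, "text": "Hallo"},
}

def switch_subtitle(current, target_language):
    """Read the cue for the target language from the second cache as
    the third subtitle; timing must match the current subtitle."""
    third = second_cache[target_language]
    assert (third["start"], third["end"]) == (current["start"], current["end"])
    return third

current = {"start": 10.0, "end": 12.5, "text": "Hello"}
print(switch_subtitle(current, "fr")["text"])  # → Bonjour
```

Because the timing is identical by construction, the switch needs no re-synchronization against the video stream.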
  • Patent number: 10997953
    Abstract: Provided herein is technology for displaying, repositioning, and/or formatting graphics on a display. The technology includes receiving a graphics stream in a first playout format that includes a first display resolution and first display layout. The technology also includes determining a second playout format that includes a second display resolution and a second display layout. The technology further determines an area of importance within the first display layout based on the first display layout, second display resolution, and second display layout. A preferred position within the second display layout is determined so that the preferred position is a location in the second display layout that is in a relatively similar location as the area of importance in the first display layout. The first playout format is converted into the second playout format using the area of importance and preferred position. Finally, the graphics stream is displayed in the second playout format.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: May 4, 2021
    Assignee: VIACOM INTERNATIONAL INC.
    Inventors: Gregg William Riedel, Jeff Hess, Scott Danahy
  • Patent number: 10992990
    Abstract: Methods and apparatuses for presenting menus to DVR users and users of other media playback devices are described. After a DVR (or other media device) has finished playing a recorded television program (or other content), or in response to other specified events, the DVR presents a screen which comprises a menu. In addition to or in alternative to “save” and “delete” options, the menu comprises one or more options. Each of these other options may correspond to a separate item. For example, a user's selection of such an option may cause the DVR to display or play certain content on the user's television set. Additionally, or alternatively, these other options, when selected by a user, may cause the DVR to display a user interface through which the user can actually interact with content, such as an item that was featured or referenced in the television program that the user was just watching.
    Type: Grant
    Filed: April 25, 2016
    Date of Patent: April 27, 2021
    Assignee: TIVO SOLUTIONS INC.
    Inventors: James Barton, Paul Stevens, David Sandford, Robin Hayes, Margret Schmidt, Bruce Klein
  • Patent number: 10986314
    Abstract: An OSD information generation camera according to an embodiment of the present disclosure includes: an image recorder configured to receive an image signal and generate an image; a controller configured to extract basic information for generating OSD information from the image; an OSD information generator configured to generate the OSD information based on the basic information extracted by the controller; and a communicator configured to individually transmit data of the OSD information and the image to an external destination.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: April 20, 2021
    Assignee: HANWHA TECHWIN CO., LTD.
    Inventors: Jung Min Shim, Sung Bong Cho
  • Patent number: 10924785
    Abstract: To allow subtitle bitmap data to be favorably superimposed onto video data on the reception side. A video stream having progressive video data is generated. A subtitle stream having progressive subtitle bitmap data is generated. A container including the video stream and the subtitle stream, in a predetermined format is transmitted. For example, the progressive subtitle bitmap data divided into top-field subtitle bitmap data and bottom-field subtitle bitmap data or not divided is present in the subtitle stream.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: February 16, 2021
    Assignee: SONY CORPORATION
    Inventor: Ikuo Tsukagoshi
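The division of progressive subtitle bitmap data into top-field and bottom-field data mentioned above corresponds to the standard interlacing split: alternate rows go to each field. The row representation below is a toy stand-in for real bitmap data.

```python
def split_fields(bitmap_rows):
    """Split progressive bitmap rows into (top_field, bottom_field)
    by taking alternate rows, as for interlaced delivery."""
    top = bitmap_rows[0::2]     # even-numbered rows
    bottom = bitmap_rows[1::2]  # odd-numbered rows
    return top, bottom

rows = ["row0", "row1", "row2", "row3"]
top, bottom = split_fields(rows)
print(top, bottom)  # → ['row0', 'row2'] ['row1', 'row3']
```

Interleaving the two fields back together recovers the original progressive bitmap, which is why the abstract can treat "divided" and "not divided" transport as equivalent on the reception side.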
  • Patent number: 10891489
    Abstract: Disclosed are a method, a system, and a non-transitory computer readable medium for identifying captions in captioned video. A method includes receiving audio and video content from a caption device where the video content includes captioned text, extracting frames of video from the received video content where the frames of video include captioned text, recognizing text from the captioned text in the extracted frames of video, and generating a descriptive textual file including timing information for the recognized text and timing information for the captioned text.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: January 12, 2021
    Assignee: NEDELCO, INCORPORATED
    Inventors: Seth Merrill Marks, Jeffery F. Knighton, Charles Scott Toller
  • Patent number: 10893336
    Abstract: Methods, systems, and computer readable media can be operable to facilitate the generation and output of customized caption data, the caption data being customized for a specific client device. Caption data associated with requested content can be edited at a customer premise equipment device according to caption settings associated with the requesting client device. Caption settings associated with the requesting client device can be determined based upon user-input or caption settings previously used for the requesting client device.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: January 12, 2021
    Assignee: ARRIS ENTERPRISES LLC
    Inventors: Lakshmi Arunkumar, Krishna Prasad Panje
  • Patent number: 10878721
    Abstract: A hearing user's device for communicating with a hearing impaired assisted user using an assisted user's device that includes a speaker and a display screen for broadcasting a hearing user's voice signal and presenting captioned text associated with the hearing user's voice signal to the assisted user, respectively, the hearing user's device comprising a microphone for receiving the hearing user's voice signal as spoken by the hearing user, a display screen and a processor linked to the microphone and the display screen, the processor transmitting the hearing user's voice signal to the assisted user's device, the processor presenting a quality indication of the captioned text presented to the assisted user via the assisted user's device for consideration by the hearing user.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: December 29, 2020
    Assignee: ULTRATEC, INC.
    Inventors: Robert M Engelke, Kevin R Colwell, Christopher Engelke
  • Patent number: 10795932
    Abstract: Disclosed is a method and apparatus for generating a title and a keyframe of a video. According to an embodiment of the present disclosure, the method includes: selecting a main subtitle by analyzing subtitles of the video; selecting the keyframe corresponding to the main subtitle; extracting content information of the keyframe by analyzing the keyframe; generating the title of the video using metadata of the video, the main subtitle, and the content information of the keyframe; and outputting the title and the keyframe of the video.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: October 6, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeong Woo Son, Sun Joong Kim, Won Joo Park, Sang Yun Lee
  • Patent number: 10791448
    Abstract: The disclosed system provides a facility to enable an emergency messaging session on a locked mobile device. In response to receiving a request on a locked mobile device to initiate an emergency messaging session, the system displays an interface that may enable a user of the mobile device to select a type of emergency and/or enter a customized message to be transmitted to an emergency services provider. The displayed interface additionally may include options to automatically or manually send specified types of information to an emergency responder, such as a physical location of the mobile device or medical, personal contact, or residence information of the user of the mobile device. While an emergency messaging session is in progress, the disclosed system may continually send updated information to the selected emergency provider, such as a location of the mobile device or biometric information associated with a user of the mobile device.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: September 29, 2020
    Assignee: T-Mobile USA, Inc.
    Inventor: Vishesh Raj Suxena
  • Patent number: 10764631
    Abstract: A method of providing a synchronized secondary audio track via a mobile device. The method includes: receiving, at a mobile device, a request from a user to receive a secondary audio track that corresponds with a primary audio track of an audio-visual (AV) program which is presented to the user; receiving the secondary audio track at the mobile device; receiving at the mobile device a playback-control cue; and in response to receiving the playback-control cue, outputting audio data of the secondary audio track so that the audio data is synchronized with the primary audio track.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: September 1, 2020
    Assignee: DISH Network L.L.C.
    Inventor: Kevin Yao
  • Patent number: 10740442
    Abstract: A system, method and various software tools enable a video hosting website to automatically identify unlicensed audio content in video files uploaded by users, and initiate a process by which the user can replace the unlicensed content with licensed audio content. An audio replacement tool is provided that enables the user to permanently mute the original, unlicensed audio content of a video file, or select a licensed audio file from a collection of licensed audio, and insert the selected file in place of the original audio. Where a video file includes unlicensed audio, the video hosting website provides access to video files to a client device, along with an indication to the client device to mute the audio during playback of the video.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: August 11, 2020
    Assignee: GOOGLE LLC
    Inventors: Franck Chastagnol, Vijay Karunamurthy, Matthew Liu, Christopher Maxcy
  • Patent number: 10681343
    Abstract: Concepts and technologies disclosed herein are directed to closed caption corruption detection and reporting. In accordance with one aspect disclosed herein, a system can ingest a digital video channel bitstream. The system can locate a digital closed caption flag in the digital video channel bitstream. The digital closed caption flag can indicate that a digital closed caption content packet is present within the digital video channel bitstream. The system can determine that at least a portion of the digital closed caption content packet cannot be rendered to display closed caption content associated with at least the portion of the digital closed caption content packet. The system can instantiate an alert based on the determination that at least the portion of the digital closed caption content packet cannot be rendered to display the closed caption content associated with at least the portion of the digital closed caption content packet.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: June 9, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Todd Jones
  • Patent number: 10657406
    Abstract: Devices, computer-readable media, and methods for exporting text captured from a video program presented on a display screen using optical character recognition to an alternate destination are disclosed. For example, a processor may present a video program via a display screen, receive, from a control device associated with the display screen, a request to capture text from the video program, and identify text from a frame of the video program via optical character recognition. When the text from the frame is identified via optical character recognition, the processor may present, via the display screen, the text in a selectable format, receive, from the control device, a selection of at least a portion of the text that is presented in the selectable format, and send the at least the portion of the text to an alternate destination in accordance with the selection of the at least the portion of the text.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: May 19, 2020
    Assignee: The DIRECTV Group, Inc.
    Inventor: Hiten Engineer
  • Patent number: 10645460
    Abstract: In one embodiment, a method includes retrieving, from one or more data stores, a script including multiple text strings, where the script is associated with a user of a social-networking system. The method also includes capturing an incoming media stream including audio data corresponding to vocal expression by the user, where the media stream is transmitted to the social-networking system for broadcast and identifying, using a speech recognition process, one or more words in the vocal expression corresponding to a text string of the script. The method also includes providing the corresponding text string for display in conjunction with a subsequent text string of the script.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: May 5, 2020
    Assignee: Facebook, Inc.
    Inventor: Debashish Paul
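The script-following behavior above — matching recognized words from the speaker's audio to a script text string, then surfacing that string together with the subsequent one — can be sketched as below. The containment test is a simplification; real speech recognition alignment is fuzzier.

```python
def follow_script(script, recognized_words):
    """Find the script string whose words all appear in the recognized
    speech, and return it with the subsequent string for display."""
    spoken = {w.lower() for w in recognized_words}
    for i, line in enumerate(script):
        if set(line.lower().split()) <= spoken:
            nxt = script[i + 1] if i + 1 < len(script) else None
            return line, nxt
    return None, None

script = ["welcome everyone", "today we launch", "thank you"]
print(follow_script(script, ["Welcome", "everyone"]))
# → ('welcome everyone', 'today we launch')
```

Displaying the matched string alongside the next one gives the broadcaster a teleprompter-style cue that tracks their actual speech rather than a fixed scroll rate.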
  • Patent number: 10631064
    Abstract: Systems and methods are provided herein for adapting, when multiple users are consuming a media asset on a primary device, the size of subtitles presented on the primary device upon determining that a user located closer to the primary device (i.e., first user) than a user farthest from the primary device (i.e., second user) is discontent with the size of the subtitles. The media guidance application may determine that there is a secondary device, associated with and in the vicinity of the second user, that is suitable for displaying subtitles. The media guidance application may, upon determining that the second user is currently not using the secondary device, present subtitles for the second user on the secondary device. The media guidance application may then adjust the size of the subtitles presented with the media asset on the primary device to a size more suited for the first user.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: April 21, 2020
    Assignee: Rovi Guides, Inc.
    Inventor: Arun Sreedhara, Sr.
  • Patent number: 10628524
    Abstract: An information input method and device are provided, which are related to the technical field of input methods. The method comprises: acquiring candidates corresponding to an encoded string that is inputted, and determining a target query string from the candidates; searching the target query string, and obtaining a corresponding search result; extracting corresponding target content from the search result; and when an information-ending interface of a first chat client is detected to be triggered, using a first template provided by the first chat client to reconstruct the target content into first template information recognizable by the first chat client, and sending the first template information to a second chat client.
    Type: Grant
    Filed: August 18, 2015
    Date of Patent: April 21, 2020
    Assignee: BEIJING SOGOU TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Yujing Qiao, Bin Chen, Dong Wang, Hao Yu, Kuo Zhang
  • Patent number: 10567834
    Abstract: The various implementations described herein describe using an audio stream to identify metadata associated with a currently playing television program. In one aspect, a method is performed at a computing device having processors and memory storing programs to be executed by the processors. The computing device obtains audio description data of a video stream for a media program. The audio description data comprises a synchronized audio narrative describing what is happening visually in the media program. The computing device identifies a set of information items including one or more words in the audio description data. The computing device transmits the words to a server. After the transmitting, the computing device obtains from the server information of content files related to the words. In response to obtaining the information, the computing device causes the information to be displayed on an electronic device that is distinct from the computing device.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: February 18, 2020
    Assignee: GOOGLE LLC
    Inventors: Steven Keith Hines, Timbo Drayson
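    The keyword-identification step described above can be sketched as a simple filter over the audio-description narrative. The stop-word list and length threshold are illustrative assumptions; the patent does not specify how information items are selected.

    ```python
    def extract_information_items(audio_description,
                                  stopwords=frozenset({'the', 'a', 'is', 'in', 'on'})):
        """Identify information items (keywords) in an audio-description
        narrative by dropping short words and common stop words."""
        words = (w.strip('.,!?').lower() for w in audio_description.split())
        return [w for w in words if len(w) > 3 and w not in stopwords]
    ```

    The resulting words would then be transmitted to the server, which returns information about related content files.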
  • Patent number: 10542360
    Abstract: An audio processing device has an information extractor and a signal processor. The information extractor extracts identification information from a first audio signal in a first frequency band, which includes an audio component of a sound for reproduction and an audio component carrying the identification information of that sound. The signal processor generates a second audio signal that includes the extracted identification information and that lies in a second frequency band higher than the first, with a sound represented by the second audio signal being emitted from a sound emission device.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: January 21, 2020
    Assignee: YAMAHA CORPORATION
    Inventors: Shota Moriguchi, Yuki Seto
  • Patent number: 10523558
    Abstract: A packet-based video network includes: plural packetized video data nodes; a packet switch configured to switch from one of video packet routes to another of video packet routes; and a video synchronizer configured to synchronize the video frame periods of at least nodes acting as packetized video data sources; wherein: each node acting as a packetized video data source is configured to launch onto the network packetized video data such that, for at least video frame periods adjacent to a switching operation: the node launches onto the network packetized video data required for decoding that frame during a predetermined active video data portion of the video frame period, and the node does not launch onto the network packetized video data required for decoding that frame during a predetermined remaining portion of the video frame period; and the switching operation is implemented during the predetermined remaining portion.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: December 31, 2019
    Assignee: Sony Corporation
    Inventors: David Berry, Jian-Rong Chen, Stephen Olday
  • Patent number: 10511880
    Abstract: Disclosed is a computer-implemented method of triggering an instance of companion software to perform an expected action related to a piece of media content during a delivery of that media content by a media device to a content consuming user, the method comprising: the instance of the companion software receiving a synchronization signal transmitted when, in delivering the media content, the media device reaches a reference point in the media content, wherein the synchronization signal conveys a time instant of that reference point; measuring a current elapsed time from the time instant of the reference point; accessing computer storage holding an association of the expected action with a time instant of a trigger point in the media content; and triggering the expected action when the current elapsed time substantially matches the time instant of the trigger point.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: December 17, 2019
    Assignee: PIKSEL, INC.
    Inventors: Philip Shaw, Miles Weaver
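    The timing comparison in the abstract above reduces to a small predicate. The tolerance value is an assumption standing in for the patent's "substantially matches"; the function names are hypothetical.

    ```python
    def should_trigger(reference_instant, trigger_instant, elapsed, tolerance=0.1):
        """Trigger the expected action when the elapsed time since the sync
        signal's reference point substantially matches the trigger point's
        offset from that reference point."""
        return abs(elapsed - (trigger_instant - reference_instant)) <= tolerance
    ```

    With a reference point at 10 s into the content and a trigger point at 25 s, the action fires once roughly 15 s have elapsed since the synchronization signal was received.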
  • Patent number: 10497086
    Abstract: Methods of expressing animation in a data stream are disclosed. In one embodiment, a method of expressing animation in a data stream includes defining animation states in the data stream with each state having at least one property such that properties are animated as a group. The animation states that are defined in the data stream may be expressed as an extension of a styling sheet language. The data stream may include web content and the defined animation states.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: December 3, 2019
    Assignee: Apple Inc.
    Inventors: Peter Graffagnino, Dave Hyatt, Richard Blanchard, Kevin Calhoun, Giles Drieu, Maciej Stachowiak, Don Melton, Darin Adler
  • Patent number: 10499170
    Abstract: An audio processing device has an information extractor and a signal processor. The information extractor extracts identification information from a first audio signal in a first frequency band, which includes an audio component of a sound for reproduction and an audio component carrying the identification information of that sound. The signal processor generates a second audio signal that includes the extracted identification information and that lies in a second frequency band higher than the first, with a sound represented by the second audio signal being emitted from a sound emission device.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: December 3, 2019
    Assignee: YAMAHA CORPORATION
    Inventors: Shota Moriguchi, Yuki Seto
  • Patent number: 10475170
    Abstract: Systems, methods, and computer readable media to improve the operation of a display system are disclosed. Techniques disclosed herein selectively darken a region of an image so that when text or other information is rendered into that region, the contrast between the text or other information and the underlying image in that area is sufficient to ensure the text or other information is visible and readable. In one embodiment, a region into which information is to be rendered may be combined or blended with tone mapped values of those same pixels in accordance with a given function, where the function gives more weight to the tone mapped pixel values the closer those pixels are to the midline of the region and more weight to untone-mapped image pixel values the further those pixels are from the midline of the region.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: November 12, 2019
    Assignee: Apple Inc.
    Inventors: Ian C. Hendry, John C. Gnaegy
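    The midline-weighted blend described above can be sketched per pixel. The patent only requires giving more weight to tone-mapped values nearer the midline; the linear falloff here is an assumption, and the function names are hypothetical.

    ```python
    def tone_map_weight(y, top, bottom):
        """Weight for the tone-mapped pixel value: 1.0 at the region's
        midline, falling linearly to 0.0 at the region's edges
        (linear falloff is an illustrative assumption)."""
        midline = (top + bottom) / 2.0
        half_height = (bottom - top) / 2.0
        return max(0.0, 1.0 - abs(y - midline) / half_height)

    def blend_pixel(image_value, tone_mapped_value, y, top, bottom):
        """Blend the untone-mapped image value with its tone-mapped value
        according to the pixel's distance from the region midline."""
        w = tone_map_weight(y, top, bottom)
        return w * tone_mapped_value + (1.0 - w) * image_value
    ```

    A pixel on the midline takes the (darkened) tone-mapped value entirely, so rendered text stays readable there, while pixels at the region's edges keep the original image value.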
  • Patent number: 10467916
    Abstract: A method of, and system for, assisting interaction between a user and at least one other human, which includes receiving (202) action data describing at least one action performed by at least one human. The action data is decoded (204) to generate action-meaning data and the action-meaning data is used (206) to generate (208) user response data relating to how a user should respond to the at least one action.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: November 5, 2019
    Inventor: Jonathan Edward Bishop
  • Patent number: 10469729
    Abstract: The present technology allows for easily checking whether a moving part is in focus. A video signal at a first frame rate is acquired from a captured video signal at a second frame rate N times higher than the first frame rate. N is an integer larger than or equal to two. Each frame of the captured video signal at the second frame rate is filtered by horizontal and vertical high-pass filter processes so that an edge signal is detected. An edge signal corresponding to each frame at the first frame rate is generated in accordance with the edge signal of each frame. Synthesizing the generated edge signal onto the video signal at the first frame rate provides a video signal for a viewfinder display.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: November 5, 2019
    Assignee: SONY CORPORATION
    Inventor: Koji Kamiya
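    The edge-detect-and-synthesize idea above can be sketched in one dimension. A first-difference filter stands in for the patent's horizontal/vertical high-pass filters (an assumption; the actual filters are not specified), and the result is overlaid on the viewfinder line.

    ```python
    def detect_edges(line):
        """First-difference high-pass filter along one scan line; large
        values mark sharp (in-focus) transitions."""
        return [0] + [abs(b - a) for a, b in zip(line, line[1:])]

    def synthesize_peaking(line, edges, gain=1.0):
        """Overlay the edge signal on the viewfinder line, clamping each
        sample to the 8-bit range."""
        return [min(255, int(v + gain * e)) for v, e in zip(line, edges)]
    ```

    In-focus regions produce strong edge values, so they show up brightened in the viewfinder display, which is the focus-check aid the abstract describes.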
  • Patent number: 10455286
    Abstract: Video content is protected using a digital rights management (DRM) mechanism, the video content having been previously encrypted and compressed for distribution, and also including metadata such as closed captioning data, which might be encrypted or clear. The video content is obtained by a system of a computing device, the metadata is extracted from the video content and provided to a video decoder, and the video content is provided to a secure DRM component. The secure DRM component decrypts the video content and provides the decrypted video content to a secure decoder component of a video decoder. As part of the decryption, the secure DRM component drops the metadata that was included in the obtained video content. However, the video decoder receives the extracted metadata in a non-protected environment and thus is able to provide the extracted metadata and the decoded video content to a content playback application.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: October 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yongjun Wu, Balachandar Sivakumar, Shyam Sadhwani
  • Patent number: 10419818
    Abstract: Aspects of the subject disclosure may include, for example, generating narrative descriptions corresponding to visual features, visual events, and interactions therebetween for media content, where the narrative descriptions are associated with time stamps of the media content, and presenting the media content and an audio reproduction of the narrative descriptions, wherein the audio reproduction is synchronized to video of the media content according to the time stamps. Other embodiments are disclosed.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: September 17, 2019
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Raghuraman Gopalan, Lee Begeja, David Crawford Gibbon, Zhu Liu, Amy Ruth Reibman, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
  • Patent number: 10372742
    Abstract: Disclosed is an apparatus and method for tagging a topic to content. The apparatus may include an unstructured data-based topic generator configured to generate a topic model including an unstructured data-based topic based on content and unstructured data, a viewer group analyzer configured to analyze a characteristic of a viewer group including a viewer of the content based on a social network of the viewer and viewing situation information of the viewer, a multifaceted topic generator configured to generate a multifaceted topic based on the topic model and the characteristic of the viewer group, a content divider configured to divide the content into a plurality of scenes, and a tagger configured to tag the multifaceted topic to the scenes.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: August 6, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeong Woo Son, Sun Joong Kim, Won Joo Park, Sang Yun Lee, Won Ryu, Sang Kwon Kim, Seung Hee Kim, Woo Sug Jung
  • Patent number: 10367913
    Abstract: Methods, systems and devices for characterizing user viewing behavior are described. A method of tracking user viewing includes presenting media content to a user via a media device, the media content including closed-caption information, and determining that a user has initiated, at a first time, a user interface event (e.g., a “trick-mode” event) that modifies a playback rate of the media content. An offset time is determined, corresponding to a difference between the first time and a second time corresponding to an occurrence of a unique string within the closed-caption information. Viewing behavior of the user is then characterized based on the media content, the offset time relative to the unique string, and an event type associated with the user interface event.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: July 30, 2019
    Assignee: DISH Technologies L.L.C.
    Inventor: Steven Michael Casagrande
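    The offset computation in the abstract above can be sketched directly. Representing the closed-caption stream as (timestamp, text) pairs is an illustrative assumption; the patent does not fix a data format.

    ```python
    def trick_mode_offset(event_time, captions, unique_string):
        """Offset between a trick-mode event and the occurrence of a unique
        string in the closed-caption information; `captions` is a list of
        (timestamp, text) pairs (hypothetical representation)."""
        for timestamp, text in captions:
            if unique_string in text:
                return event_time - timestamp
        return None
    ```

    If a viewer hits rewind 7.5 seconds after a distinctive caption appears, that offset, together with the event type, feeds the viewing-behavior characterization.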
  • Patent number: 10362468
    Abstract: Embodiments of the invention provide apparatuses, systems and methods for distributing public information. For example, some embodiments of the invention provide methods for determining an appropriate set of addresses to which to distribute an alert. One such exemplary method comprises maintaining a directory of alert gateways. The directory can comprise a plurality of directory entries, and each directory entry can be associated with a particular alert gateway. Each directory entry can also comprise at least one gateway characteristic associated with that alert gateway. In some cases, a gateway characteristic can include information to enable the alert distribution device to determine whether a given alert should be transmitted to the alert gateway.
    Type: Grant
    Filed: June 12, 2013
    Date of Patent: July 23, 2019
    Assignee: CenturyLink Intellectual Property LLC
    Inventors: Bruce A. Phillips, Steven M. Casey
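    The directory lookup described above can be sketched as a filter over gateway entries. The entry schema (an `'areas'` characteristic per gateway) is a hypothetical example of the gateway characteristics the abstract mentions.

    ```python
    def gateways_for_alert(directory, alert_area):
        """Select the alert gateways whose directory-entry characteristics
        indicate they serve the alert's target area (entry schema is an
        illustrative assumption)."""
        return sorted(gateway for gateway, characteristics in directory.items()
                      if alert_area in characteristics.get('areas', ()))
    ```

    The alert distribution device would consult such a directory to decide which gateways should receive a given alert.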
  • Patent number: 10356451
    Abstract: A broadcast signal transmission method according to one embodiment of the present invention can comprise the steps of: generating service data of a broadcast service, wherein the service data includes a service component included in the broadcast service; generating service signaling information for signaling the broadcast service; and transmitting a broadcast signal including the service data and the service signaling information.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: July 16, 2019
    Assignee: LG ELECTRONICS INC.
    Inventors: Sejin Oh, Jongyeul Suh, Soojin Hwang
  • Patent number: 10331661
    Abstract: A method includes identifying, at a computing device, multiple segments of video content based on a context sensitive term. Each segment of the multiple segments is associated with captioning data of the video content. The method also includes determining, at the computing device, first contextual information of a first segment of the multiple segments based on a set of factors. The method further includes comparing the first contextual information to particular contextual information that corresponds to content of interest. The method further includes in response to a determination that the first contextual information matches the particular contextual information, storing a first searchable tag associated with the first segment.
    Type: Grant
    Filed: October 23, 2013
    Date of Patent: June 25, 2019
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Stephen A. Rys, Dale W. Malik, Nadia Morris
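    The match-and-tag step in the abstract above can be sketched as a set comparison. Modeling contextual information as a set of factors, and the tag format itself, are illustrative assumptions.

    ```python
    def tag_matching_segments(segments, interest_context):
        """Store a searchable tag for each video segment whose contextual
        information contains every factor of the content of interest
        (dict/set schema is a hypothetical representation)."""
        tags = {}
        for segment in segments:
            if interest_context <= segment['context']:
                tags[segment['id']] = 'interest:' + '+'.join(sorted(interest_context))
        return tags
    ```

    Segments whose context matches the content of interest receive a searchable tag; non-matching segments are left untagged.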
  • Patent number: 10334325
    Abstract: In one aspect, an example method involves accessing data representing a program schedule of a media program, wherein the program schedule comprises first text. The method also includes selecting second text from among the first text. The method further includes transmitting, via a communication network, an instruction configured to cause the selected second text to be added to an electronic dictionary of a closed-captioning generator.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: June 25, 2019
    Assignee: TRIBUNE BROADCASTING COMPANY, LLC
    Inventor: Hank J. Hundemer
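    The select-and-add step above can be sketched simply. The selection rule, capitalized terms of a minimum length, is an illustrative assumption; the patent only requires selecting second text from the schedule's first text.

    ```python
    def update_caption_dictionary(schedule_text, dictionary):
        """Select candidate terms (here: capitalized words of length >= 3)
        from program-schedule text and add them to the closed-captioning
        generator's electronic dictionary (selection rule is an assumption)."""
        for word in schedule_text.split():
            token = word.strip('.,:;!?')
            if len(token) >= 3 and token[:1].isupper():
                dictionary.add(token)
        return dictionary
    ```

    Seeding the captioning dictionary with proper nouns from the schedule helps the generator spell show titles and host names correctly during live captioning.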
  • Patent number: 10313741
    Abstract: A transmitting apparatus including circuitry configured to generate caption data corresponding to content data and having elements defined in Extensible Markup Language (XML), and output the content data and the generated caption data to a reproducing device.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: June 4, 2019
    Assignee: SONY CORPORATION
    Inventor: Kouichi Uchimura