Including Teletext Decoder Or Display Patents (Class 348/468)
-
Publication number: 20090073314
Abstract: When generating animation content as summary content for a digital broadcast program, the timing for switching of animation images for display is controlled appropriately. Subtitle character string extraction means for extracting a subtitle character string from subtitle data contained in digital broadcast signals, still image extraction means for extracting one still image corresponding to the subtitle character string, and summary content generation means for generating summary content to display the extracted subtitle character strings together with the corresponding extracted still images, are provided; the summary content generation means decides the timing for switching display of the plurality of subtitle character strings and still images comprised by the summary content, based on the subtitle character strings.
Type: Application
Filed: September 16, 2008
Publication date: March 19, 2009
Applicant: KDDI CORPORATION
Inventors: Toshiaki Uemukai, Kazunori Matsumoto, Fumiaki Sugaya
-
Publication number: 20090073315
Abstract: Characters represented within a frame of a television presentation are identified. A pattern formed by a subset of the characters is identified if the pattern is indicative of an addressing datum. A provision is made for a selection of characters that form the pattern indicative of the addressing datum. In one embodiment, a web page is displayed upon a selection of characters that form a pattern indicative of a uniform resource locator for the web page.
Type: Application
Filed: November 21, 2008
Publication date: March 19, 2009
Applicant: JLB VENTURES LLC
Inventor: Dan Kikinis
-
Publication number: 20090073313
Abstract: A method for controlling a television equipped with a camera sensor is performed as follows. A background image is captured by the camera sensor. After entering a control mode, functional blocks are displayed on the television, and a user image is captured by the camera sensor and shown on the television. A control command is based on the user image, the functional block and the background image, and the television is controlled by the control command. Accordingly, the control functions of the traditional remote controller can be replaced by analysis and comparison of the user image and the background image, and therefore, it is more convenient for the user if the remote controller is absent.
Type: Application
Filed: September 14, 2007
Publication date: March 19, 2009
Applicant: HIMAX TECHNOLOGIES LIMITED
Inventor: Kai Min Liu
-
Patent number: 7502072
Abstract: For changing the setting of caption output format, a TV receiver sequentially outputs to a monitor: a font setting image having options of two parameters, font type and size, in a matrix; a color setting image having options of foreground and background colors in a matrix; an edge setting image having options of edge type and color in a matrix; and an opacity setting image having options of foreground and background opacities in a matrix. The receiver allows the user to select the options of each pair of parameters based on each setting image, and further changes the setting image to be output to the monitor depending on whether “Transparent” is selected as the foreground and/or background color. The TV receiver excludes from each setting image any combination of two options in which one causes loss of discriminability of the other, making selection of such a combination impossible. The user can quickly complete the intended setting without redundant procedures.
Type: Grant
Filed: July 25, 2005
Date of Patent: March 10, 2009
Assignee: Funai Electric Co., Ltd.
Inventors: Takehiro Onomatsu, Toshihiro Takagi
-
Publication number: 20090051811
Abstract: A digital broadcasting system and a data processing method are disclosed. The method includes receiving a broadcast signal in which mobile service data and main service data are multiplexed; extracting transmission parameter signaling information and fast-information-channel signaling information from a data group contained in the received mobile service data; parsing first program table information, which describes virtual channel information and a service of an ensemble acting as a virtual channel group of the received mobile service data, using the fast-information-channel signaling information; parsing second program table information including a data chunk acting as data-broadcasting contents of the mobile service data; and providing a data broadcasting service using the data broadcasting contents of the parsed second program table information.
Type: Application
Filed: August 22, 2008
Publication date: February 26, 2009
Inventors: Hui Sang Yoo, In Hwan Choi, Chul Soo Lee, Jae Hyung Song, Min Sung Kwak
-
Publication number: 20090040377
Abstract: A video processing apparatus performs a subtitle detection process for each frame in a video signal, wherein a two-step edge determining unit performs primary determination of a plurality of small blocks according to a first determination standard associated with edges, and performs a secondary determination of a plurality of large blocks according to a second determination standard associated with the presence of small blocks for which the first determination was satisfied.
Type: Application
Filed: June 19, 2006
Publication date: February 12, 2009
Applicant: Pioneer Corporation
Inventors: Makoto Kurahashi, Takeshi Nakamura, Hajime Miyasato
-
Publication number: 20090040378
Abstract: An information display apparatus includes a display device configured to display a video, a speech detection unit configured to detect a playback state of a playback speech, a closed caption display unit configured to generate character information associated with the playback speech and display it on the display device together with the video, and a closed caption display unit configured to carry out a changing control for changing according to the detected playback state a display state of the character information that is displayed on the display device by the closed caption display unit.
Type: Application
Filed: October 3, 2008
Publication date: February 12, 2009
Inventors: Kohei Momosaki, Kazuhiko Abe, Yasuyuki Masai, Makoto Yajima, Koichi Yamamoto, Munehiko Sasajima
-
Patent number: 7487527
Abstract: An interactive television program guide is provided. The interactive television program guide provides a user with the opportunity to select a language for playing television programming and displaying program guide text. Television program audio in the desired language may be obtained from a SAP or digital audio track and played in the selected language. Television related information in the desired language may be obtained from a digital track. If television program audio or related information is not provided in the selected language, the program guide may use a default language. The program guide may coordinate program guide display screen text with languages available for television programs when the programs are broadcast to users.
Type: Grant
Filed: February 13, 2006
Date of Patent: February 3, 2009
Assignee: United Video Properties, Inc.
Inventors: Michael D Ellis, W. Benjamin Herrington, Steven C Williamson, Kevin B Easterbrook, Joshua A Rosenthol, David M Rudnick
-
Patent number: 7487096
Abstract: A method for automatically enabling closed captioning in video conferencing when a heavy accent is detected from a current speaker is provided. Language background and/or ethnicity information is received as a user preference. An acceptable accent level is determined according to the user preference. An audio signal of a speaker speaking in a language is received. A pronunciation of the speaker in the audio signal is compared with standard pronunciation for the language. An accent level of the speaker is determined, and the accent level of the speaker is compared to the acceptable accent level. If the comparison determines that the accent level of the speaker does not comply with the acceptable accent level, closed captioning is enabled for the audio signal. If the comparison determines that the accent level of the speaker complies with the acceptable accent level, closed captioning is not enabled for the audio signal.
Type: Grant
Filed: February 20, 2008
Date of Patent: February 3, 2009
Assignee: International Business Machines Corporation
Inventors: Susan M. Cox, Janani Janakiraman, Fang Lu
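The gating logic this abstract describes is easy to express. As an illustrative sketch only (the function names and the mean-deviation metric below are hypothetical stand-ins, not the patent's actual accent model), an accent level can be modeled as the average deviation of a speaker's per-phoneme pronunciation scores from a standard-pronunciation model, with captioning enabled when it exceeds the viewer's acceptable level:

```python
def accent_level(speaker_scores, standard_scores):
    """Mean absolute deviation of the speaker's per-phoneme scores
    from the standard pronunciation model (a stand-in metric)."""
    return sum(abs(a - b) for a, b in zip(speaker_scores, standard_scores)) / len(speaker_scores)

def should_enable_captions(speaker_scores, standard_scores, acceptable_level):
    """Enable closed captioning only when the measured accent level
    exceeds the viewer's acceptable threshold."""
    return accent_level(speaker_scores, standard_scores) > acceptable_level
```
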
-
Patent number: 7477320
Abstract: A method and system for decoding and storing encoded control data delivered via the horizontal overscan area of a video signal. An interactive device such as a toy performs behavior defined by control data that can be encoded into a video signal. The toy is equipped with a decoder for extracting data from the horizontal overscan portion of the video signal, and a non-volatile memory that permits the control data to be stored for use after the video signal is no longer being received. The control data are delivered as a series of words that include genus codes and sequence codes. Genus codes identify the specific toy to which the word is directed, as more than one toy may receive the video signal. Error grading is used to minimize the effect of signal deterioration, so the toy will replace previously received words if newer words are of higher quality.
Type: Grant
Filed: May 11, 2005
Date of Patent: January 13, 2009
Assignee: Buresift Data Ltd. LLC
Inventors: Craig S. Ranta, Jeffrey M. Alexander, Harjit Singh
-
Publication number: 20090009661
Abstract: There is provided a captioned still picture contents producing technique capable of decoding broadcast caption content provided as closed captions and synthesizing it with a still picture obtained from a TV video to automatically produce new captioned still picture contents. In a captioned still picture contents producing system, a captioned video signal generating apparatus generates a captioned video signal, and a still picture contents producing apparatus produces captioned still picture contents from the captioned video signal. The captioned video signal generating apparatus receives the original video signal and caption signal, and generates a control signal based on whether or not the caption is a real time caption. Then, the captioned video signal generating apparatus synthesizes the caption signal and the video signal and inserts the control signal at a predetermined position to generate a captioned video signal.
Type: Application
Filed: July 22, 2005
Publication date: January 8, 2009
Inventors: Shizuo Murakami, Kouichi Okawara, Takashi Tanaka, Hideo Watanabe
-
Patent number: 7466803
Abstract: A network method for using a network telephone voice-mail service, by which a caller may leave a voice-message that includes the identification of an attachment, which may include, as examples only, audio, video, text, programs, spreadsheets and graphic attachments. A video, text, spreadsheet or graphic attachment may be converted to an audible attachment to the voice-mail at the caller's or the voice-mail subscriber's request. Such entries may be made, after receiving an automated prompt for leaving an attachment identifier or conversion request, audibly or by using a keypad entry. A network method is also provided for using a network telephone voice-mail service, by which the voice-mail service may detect an attachment to a voice-mail message and provide access to the attachment to the voice-mail message.
Type: Grant
Filed: March 19, 2007
Date of Patent: December 16, 2008
Assignee: AT&T Corp.
Inventors: Frederick Murray Burg, John F. Lucas, Vivian A. Pressley-Harris
-
Publication number: 20080303943
Abstract: In a device for generating a digest of a television broadcast program containing subtitle information, a character number calculation section calculates, based on the subtitle information, a character number of a subtitle displayed in each of segments provided at regular intervals. A digest scene specifying section compares the calculated character number with a threshold and specifies, as one or more digest scenes of the television broadcast program, one or more segments in which the calculated character number is larger than the threshold.
Type: Application
Filed: April 21, 2008
Publication date: December 11, 2008
Inventor: Tatsuhiko NUMOTO
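The selection rule in this abstract — flag a segment as a digest scene when its subtitle character count exceeds a threshold — can be sketched in a few lines (illustrative only; the function and parameter names are hypothetical, not from the patent):

```python
def digest_scenes(segment_subtitles, threshold):
    """Return indices of segments whose subtitle character count
    exceeds the threshold; these are treated as digest scenes."""
    return [i for i, text in enumerate(segment_subtitles)
            if len(text) > threshold]
```
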
-
Publication number: 20080303944
Abstract: A television receiver 100 receives, with a tuner 11, a television broadcast signal to which text data indicating emergency information is added, and displays an image on the basis of the received television broadcast signal. In the television receiver 100, a TS decoder 12 separates the text data from the received television broadcast signal, and temporarily stores the text data into a service information processor 15. Then, the separated text data is multiplexed with image data by a multiplexing processor 16 and outputted to a display module 30. In accordance with a text data display switching program 302, when channels are switched, a controller 21 issues an instruction to the multiplexing processor 16 not to display alarm text in image data that is displayed after the channels are switched.
Type: Application
Filed: May 31, 2008
Publication date: December 11, 2008
Applicant: Funai Electric Co., Ltd.
Inventor: Hiroaki Shibahara
-
Publication number: 20080303945
Abstract: A storage medium includes text-based subtitle data including style information for use with an apparatus and a method of playing back the storage medium. The storage medium includes moving image data, and subtitle data for providing a subtitle for the moving image data. The subtitle data is recorded based on a text to be separated from the moving image data and includes information used to select or change an output style of the subtitle. Accordingly, the subtitle can be output using style information selected by a user, and a style in which a subtitle is output can be changed.
Type: Application
Filed: August 18, 2008
Publication date: December 11, 2008
Applicant: Samsung Electronics Co., Ltd.
Inventors: Man-seok KANG, Kil-soo JUNG
-
Publication number: 20080303942
Abstract: Caption boxes which are embedded in video content can be located and the text within the caption boxes decoded. Real time processing is enhanced by locating caption box regions in the compressed video domain and performing pixel based processing operations within the region of the video frame in which a caption box is located. The caption boxes are further refined by identifying word regions within the caption boxes and then applying character and word recognition processing to the identified word regions. Domain based models are used to improve text recognition results. The extracted caption box text can be used to detect events of interest in the video content and a semantic model applied to extract a segment of video of the event of interest.
Type: Application
Filed: December 19, 2007
Publication date: December 11, 2008
Inventors: Shih-Fu Chang, Dongqing Zhang
-
Patent number: 7463308
Abstract: A data slicer circuit is disclosed which comprises a control circuit to output a digital signal that increases or decreases by a constant value difference depending on the level of an input signal when the input signal is sampled at a given frequency; a conversion circuit to convert the digital signal to an analog signal; and a comparison circuit to compare the video signal with the analog signal, the comparison circuit outputting the result of the comparison as the input signal to the control circuit, wherein the analog signal corresponding to the result of the comparison of the comparison circuit is used as a slice level for separating the data from the video signal.
Type: Grant
Filed: September 29, 2004
Date of Patent: December 9, 2008
Assignee: Sanyo Electric Co., Ltd.
Inventors: Shinichi Yamasaki, Masanori Okubayashi
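The feedback loop described here — a digital value that steps up or down by a constant amount according to a comparator's output, with the DAC-converted value then serving as the slice level — is essentially delta-modulation tracking. A minimal software model of that behavior (an illustrative sketch, not the patented circuit; names and the unit step are assumptions):

```python
def track_slice_level(samples, step=1, initial=0):
    """Track the input: step the digital level up when the sample is
    above the current (DAC-converted) level, down otherwise. The
    returned history is the evolving slice level."""
    level = initial
    history = []
    for s in samples:
        # Comparator decision: is the input above the current level?
        level += step if s > level else -step
        history.append(level)
    return history
```

Fed a steady input, the level ramps toward it and then oscillates around it, which is the settling behavior a hardware slicer relies on.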
-
Patent number: 7463311
Abstract: A method and system for including non-graphic data in an analog video output signal of a set-top box. Graphics data is generated from and represents non-graphic data using software or firmware that is executed by a processor, system-on-a-chip integrated circuit, or some other component of the set-top box. The graphics data is alpha blended with video data that is extracted from a digital television signal. This alpha blending results in an alpha blended digital video signal. The alpha blended digital video signal is converted into the analog video output signal by a digital encoder function, which is preferably included in the system-on-a-chip integrated circuit. The alpha blending is also preferably performed by the alpha blending function of the system-on-a-chip integrated circuit. The analog video output signal can then be input into an analog video recorder.
Type: Grant
Filed: September 9, 2002
Date of Patent: December 9, 2008
Assignee: General Instrument Corporation
Inventors: James Ronald Flesch, Paul Andrew Clancy
-
Publication number: 20080297653
Abstract: A storage medium for storing text-based subtitle data including style information, a reproducing apparatus and methods are provided for reproducing text-based subtitle data including style information separately recorded on the storage medium. The storage medium includes: multimedia image data; and text-based subtitle data for displaying subtitles on an image based on the multimedia image data, wherein the text-based subtitle data includes dialog information indicating subtitle contents to be displayed on the image, style information indicating an output style of the dialog information, and partial style information indicating an output style applied to a portion of the dialog information. Accordingly, subtitles can be provided in a plurality of languages without being limited to the number of units of subtitle data. In addition, subtitle data can be easily produced and edited. Likewise, an output style of the subtitle data can be changed in a variety of ways.
Type: Application
Filed: August 13, 2008
Publication date: December 4, 2008
Applicant: Samsung Electronics Co., Ltd.
Inventors: Kil-soo JUNG, Sung-wook PARK
-
Patent number: 7460182
Abstract: A video signal processing circuit includes a frame memory to receive and store therein at a first rate each frame of a digital video signal in which additional information is assigned separately to each frame, a frame synchronization unit to read each frame of the digital video signal from the frame memory at a second rate, and a processing unit to assign the additional information to the digital video signal read by the frame synchronization unit, without repeating the additional information when the frame synchronization unit reads a same frame of the digital video signal repeatedly, and without skipping the additional information when the frame synchronization unit reads by skipping a frame of the digital video signal.
Type: Grant
Filed: May 25, 2005
Date of Patent: December 2, 2008
Assignee: Fujitsu Limited
Inventor: Tetsu Takahashi
-
Publication number: 20080293443
Abstract: A subscription-based system provides transcribed audio information to one or more mobile devices. Some techniques feature a system for providing subscription services for currently-generated (e.g., not stored) information (e.g., caption information, transcribed audio) for one or more mobile devices for a live/current audio event. There can be a communication network for communicating to the one or more mobile devices, and a transcriber configured for transcribing the event to generate information (e.g., caption information, transcribed audio). Caption data includes transcribed data and control code data. The system includes a subscription gateway configured for live/current transfer of the transcribed data to the one or more mobile devices. The subscription gateway is configured to provide access for the transcribed data to the one or more mobile devices.
Type: Application
Filed: August 13, 2008
Publication date: November 27, 2008
Applicant: MEDIA CAPTIONING SERVICES
Inventor: Richard F. Pettinato
-
Patent number: 7456902
Abstract: Characters represented within a frame of a television presentation are identified. A pattern formed by a subset of the characters is identified if the pattern is indicative of an addressing datum. A provision is made for a selection of characters that form the pattern indicative of the addressing datum. In one embodiment, a web page is displayed upon a selection of characters that form a pattern indicative of a uniform resource locator for the web page.
Type: Grant
Filed: December 4, 2001
Date of Patent: November 25, 2008
Assignee: JLB Ventures, LLC
Inventor: Dan Kikinis
-
Publication number: 20080284909
Abstract: In one embodiment, a method includes receiving a copy of a multimedia flow from a point along the path of the multimedia flow through a communication network of nodes, receiving metric information associated with the multimedia flow from one or more probes coupled to the flow path, and combining the copy of the multimedia flow and at least a portion of the metric information to provide a combined signal.
Type: Application
Filed: May 16, 2007
Publication date: November 20, 2008
Inventors: Michael F. Keohane, James A. Clark, Adrian C. Smethurst
-
Publication number: 20080284910
Abstract: In a system and method providing a video with closed captioning, a processor may: provide a first website user interface adapted for receiving a user request for generation of closed captioning, the request referencing a multimedia file provided by a second website; responsive to the request: transcribe audio associated with the video into a series of closed captioning text strings arranged in a text file; for each of the text strings, store in the text file respective data associating the text string with a respective portion of the video; and store, for retrieval in response to a subsequent request made to the first website, the text file and a pointer associated with the text file and referencing the text file with the video; and/or provide the text file to an advertisement engine for obtaining an advertisement based on the text file and that is to be displayed with the video.
Type: Application
Filed: January 31, 2008
Publication date: November 20, 2008
Inventors: John Erskine, John Wood, Matthew Gutierrez
-
Patent number: 7450177
Abstract: An apparatus and a method that control a position of a caption. The apparatus includes: a caption processing unit, which identifies a non-signal area (B) including the caption using a control signal, identifies an image data area (A) having the same size as the non-signal area (B) on a screen, and displays caption data of the non-signal area (B) on the image data area (A); a controller, which outputs a control signal positioning the captions to be lost in an enlarged presentation mode on the screen; and a signal processing unit, which performs image signal processing so that the caption data is displayed with predetermined images on the image data area (A).
Type: Grant
Filed: July 30, 2004
Date of Patent: November 11, 2008
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jang-woo Lee, Sang-hak Lee
-
Publication number: 20080273114
Abstract: A display reader device that reads a channel displayed on a television set top box channel display consistent with certain embodiments has an array of light sensitive electronic elements receiving light signals from a television set top box's channel display and producing an output signal representative of the light pattern received by the array of light sensitive electronic elements. An image processor converts the output signal representative of the light pattern received by the array of light sensitive elements and converts the output signal to a signal representing the channel number displayed on the display. An output from the image processor carries the signal representing the channel number. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
Type: Application
Filed: May 4, 2007
Publication date: November 6, 2008
Inventors: Robert L. Hardacker, David H. Bessel
-
Patent number: 7446817
Abstract: A method and apparatus for detecting text associated with video are provided. The method of detecting the text of the video includes reading a t-th frame (where t is a positive integer) among frames forming the video as a current frame, determining whether there is a text area detected from a previous frame, which is a (t−N)-th (where N is a positive integer) frame among the frames forming the video, in the current frame, and upon determining that there is no text area detected from the previous frame in the current frame, detecting the text area in the entire current frame. Upon determining that there is the text area detected from the previous frame in the current frame, the text area is detected from a remaining area obtained by excluding from the current frame an area corresponding to the text area detected from the previous frame. Whether there is a text area in a next frame, which is a (t+N)-th frame among the frames forming the video, is verified.
Type: Grant
Filed: February 14, 2005
Date of Patent: November 4, 2008
Assignee: Samsung Electronics Co., Ltd.
Inventors: Cheolkon Jung, Jiyeun Kim, Youngsu Moon
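The incremental strategy in this abstract — re-run detection only outside the area already found in the previous scanned frame — can be sketched with a simplified row-based model (the names and the per-row granularity are hypothetical; the patent operates on full 2-D frame areas):

```python
def detect_text_rows(rows, is_text_row, known_rows):
    """Scan only rows outside `known_rows` (text rows carried over
    from the previous scanned frame) and merge the results."""
    found = set(known_rows)
    for i, row in enumerate(rows):
        if i in known_rows:
            continue  # area already detected in the previous frame
        if is_text_row(row):
            found.add(i)
    return found
```
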
-
Publication number: 20080266449
Abstract: The present invention provides a method and system for providing access to information of potential interest to a user. Closed-caption information is analyzed to find related information on the Internet. User interactions with a TV which receives programming including closed-caption information are monitored to determine user interests. The related closed-caption information is analyzed to determine key information therein. The key information is used for searching for information in available resources such as the Internet, and the search results are used to make recommendations to the user about information of potential interest to the user.
Type: Application
Filed: April 25, 2007
Publication date: October 30, 2008
Applicant: Samsung Electronics Co., Ltd.
Inventors: Priyang Rathod, Mithun Sheshagiri, Phuong Nguyen, Anugeetha Kunjithapatham, Alan Messer
-
Publication number: 20080270134
Abstract: A hybrid-captioning system for editing captions for spoken utterances within video includes an editor-type caption-editing subsystem, a line-based caption-editing subsystem, and a mechanism. The editor-type subsystem is that in which captions are edited for spoken utterances within the video on a group-of-lines basis, without respect to particular lines of the captions and without respect to temporal positioning of the captions in relation to the spoken utterances. The line-based subsystem is that in which captions are edited for spoken utterances within the video on a line-by-line basis, with respect to particular lines of the captions and with respect to temporal positioning of the captions in relation to the spoken utterances. For each section of spoken utterances within the video, the mechanism is to select the editor-type or the line-based subsystem to provide captions for the section of spoken utterances in accordance with a predetermined criterion.
Type: Application
Filed: July 13, 2008
Publication date: October 30, 2008
Inventors: Kohtaroh Miyamoto, Noriko Negishi, Kenichi Arakawa
-
Patent number: 7443449
Abstract: An information display apparatus includes a display device configured to display a video, a speech detection unit configured to detect a playback state of a playback speech, a closed caption display unit configured to generate character information associated with the playback speech and display it on the display device together with the video, and a closed caption display unit configured to carry out a changing control for changing according to the detected playback state a display state of the character information that is displayed on the display device by the closed caption display unit.
Type: Grant
Filed: March 29, 2004
Date of Patent: October 28, 2008
Assignee: Kabushiki Kaisha Toshiba
Inventors: Kohei Momosaki, Kazuhiko Abe, Yasuyuki Masai, Makoto Yajima, Koichi Yamamoto, Munehiko Sasajima
-
Publication number: 20080259211
Abstract: Aspects of the invention are directed to using subtitles for purposes other than viewing them on a display screen overlaid on the subtitle's corresponding content. While watching television with subtitles activated, a terminal user may be offered a short-cut and/or one or more options which will use a current (or recently occurring) subtitle for other purposes. A user may be given an opportunity to use the subtitle text for sending the subtitle as a message, storing the subtitle as a calendar note, etc. In accordance with another embodiment, subtitles of media content running in the background on a media content receiver terminal may be scanned based on configurable criteria input by a user. If there is a match, the user will be notified via an alarm and may change to the media channel on which the match was found.
Type: Application
Filed: April 23, 2007
Publication date: October 23, 2008
Applicant: Nokia Corporation
Inventors: Christian Kraft, Peter Dam Nielsen
-
Publication number: 20080254826
Abstract: The present invention relates to a caption data transmission and reception method in digital broadcasting and to a mobile terminal performing the caption data transmission and reception method. The mobile terminal capable of digital broadcast reception can provide a caption service using Binary Format for Scenes (BIFS) data contained in broadcast data.
Type: Application
Filed: March 11, 2008
Publication date: October 16, 2008
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Seong Geun Kwon
-
Publication number: 20080252780
Abstract: A captioning evaluation system. The system accepts captioning data and determines a number of errors in the captioning data, as well as the number of words per minute across the entirety of an event corresponding to the captioning data and across time intervals of the event. The errors may be used to determine the accuracy of the captioning, and the words per minute, both for the entire event and for the time intervals, may be used to determine a cadence and/or rhythm for the captioning. The accuracy and cadence may be used to score the captioning data and captioner.
Type: Application
Filed: April 16, 2008
Publication date: October 16, 2008
Inventors: Richard T. Polumbus a/k/a Tad Polumbus, Troy A. Greenwood
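As a rough sketch of the metrics this abstract names (accuracy from an error count, plus overall and per-interval words per minute for cadence), under assumed input shapes that the patent does not specify:

```python
def caption_metrics(timed_words, error_count, duration_min, interval_min=1.0):
    """timed_words: list of (timestamp_in_minutes, word) pairs.
    Returns (accuracy, overall words-per-minute, per-interval WPM)."""
    total = len(timed_words)
    accuracy = (total - error_count) / total if total else 0.0
    overall_wpm = total / duration_min
    # Words per minute within each fixed-length interval of the event,
    # usable as a crude cadence/rhythm profile.
    counts = {}
    for t, _ in timed_words:
        bucket = int(t // interval_min)
        counts[bucket] = counts.get(bucket, 0) + 1
    interval_wpm = {b: c / interval_min for b, c in counts.items()}
    return accuracy, overall_wpm, interval_wpm
```
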
-
Publication number: 20080246878
Abstract: The invention relates to a device that comprises a memory circuit with memory cells that use floating gate storage transistors, which are conventionally called non-volatile memory cells. A particular embodiment relates to a teletext circuit. A teletext processing circuit comprises a decoder logic circuit and a memory circuit integrated together in an integrated circuit; the memory circuit comprises memory cells for storing teletext page data, the memory cells comprising floating gate storage transistors to store the teletext page data. The page data from memory is used to control the content of displayed teletext images.
Type: Application
Filed: September 29, 2006
Publication date: October 9, 2008
Applicant: NXP B.V.
Inventors: Johannes Petrus Maria Van Lammeren, Frans Jacob List, Johan Somberg
-
Patent number: 7430016
Abstract: A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types are disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determining; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.
Type: Grant
Filed: September 17, 2004
Date of Patent: September 30, 2008
Assignee: LG Electronics Inc.
Inventor: Tae Jin Park
-
Publication number: 20080225164Abstract: A digital cable broadcast receiver and a method for automatically processing caption data of various standards and types are disclosed. The digital broadcast receiver includes: a demultiplexer for dividing a received broadcast stream into video data, audio data, and supplementary information; a controller for determining whether caption data included in the video data is digital caption data or analog caption data on the basis of caption information included in the supplementary information, and outputting a control signal according to a result of the determination; a digital caption decoder for extracting and decoding digital caption data from the video data according to the control signal; and an analog caption decoder for extracting and decoding analog caption data from the video data according to the control signal.Type: ApplicationFiled: April 1, 2008Publication date: September 18, 2008Inventor: Tae Jin Park
-
Publication number: 20080218632Abstract: A method of modifying text-based subtitles reproduced with audio-visual (AV) data, a method of decoding text subtitles, a text subtitle decoder for modifying text-based subtitles, and a reproduction apparatus. The method of modifying text subtitles includes receiving source and target words; searching first text subtitle data for the source word and generating second text subtitle data by changing instances of the source word in the first text subtitle data to the target word; generating connection information between the first and second text subtitle data; and upon a reproduction request, selecting the first text subtitle data or the second text subtitle data with reference to the connection information and reproducing the first text subtitle data or the second text subtitle data with the AV data. According to aspects of the present invention, a user may easily modify text subtitles without performing a complicated editing process.Type: ApplicationFiled: December 26, 2007Publication date: September 11, 2008Applicant: Samsung Electronics Co., Ltd.Inventors: Kil-soo JUNG, Sung-wook Park
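The source-word replacement and connection-information steps are simple enough to sketch. The representation of the connection information below (per-line changed flags) is an assumption for illustration; the patent does not specify it.

```python
def modify_subtitles(first_text, source_word, target_word):
    """Generate second subtitle data by replacing the source word with the
    target word, and record connection information linking the versions."""
    second_text = []
    connections = []  # (line index, changed?) pairs: the "connection info"
    for i, line in enumerate(first_text):
        changed = source_word in line
        second_text.append(line.replace(source_word, target_word))
        connections.append((i, changed))
    return second_text, connections


def select_for_playback(first_text, second_text, connections, use_modified):
    """On a reproduction request, pick lines from either version by
    consulting the connection information."""
    return [second_text[i] if (use_modified and changed) else first_text[i]
            for i, changed in connections]
```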
-
Publication number: 20080180572Abstract: A system and method for enabling access to closed captioning data present in a broadcast stream is disclosed. The technology includes a method for enabling access to closed captioning data present in a broadcast stream. The method includes accessing device data associated with a broadcast stream receiver, wherein the device data indicates whether the broadcast stream receiver is configured to receive a digitized format of closed captioning data or an analog format of closed captioning data. If the digitized format of the closed captioning data is not present in the broadcast stream, the method includes ensuring the broadcast stream receiver is configured to access the analog format of the closed captioning data.Type: ApplicationFiled: January 29, 2007Publication date: July 31, 2008Applicant: Microsoft CorporationInventors: Shawn E. Pickett, Edward Goziker, Ross F. Hewit
-
Publication number: 20080166106Abstract: Disclosed herein is an information processing apparatus including: a display control section configured to display on a display section a picture based on a video signal and a caption synchronized with the picture and based on caption information attached to the video signal; and a character string information acquisition section configured to acquire character string information common to the caption information and to information related to music contents stored in a storage section, by comparing the caption information with the related information. The display control section displays on the display section the caption in which the common character string information acquired by the character string information acquisition section is highlighted in a predetermined manner.Type: ApplicationFiled: January 8, 2008Publication date: July 10, 2008Applicant: Sony CorporationInventors: Takeshi Ozawa, Takashi Nomura, Motofumi Itawaki
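The highlighting step — finding character strings common to the caption and the stored related information — can be sketched as below. The bracket markers stand in for whatever "predetermined manner" of highlighting the display uses; all names are hypothetical.

```python
def highlight_common(caption, related_terms, open_mark="[", close_mark="]"):
    """Mark substrings of a caption that also occur in stored related
    information (e.g. music-content metadata).

    Longer terms are tried first; overlapping or nested terms are not
    handled in this sketch.
    """
    for term in sorted(related_terms, key=len, reverse=True):
        if term in caption:
            caption = caption.replace(term, f"{open_mark}{term}{close_mark}")
    return caption
```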
-
Publication number: 20080158419Abstract: A video display apparatus includes a controller that, when performing a two-picture display, controls other components so that a composite video signal output from a tuner is input to a first color decoder via a first correction unit and a composite video signal output from an external input terminal is input to a second color decoder via a second correction unit.Type: ApplicationFiled: July 11, 2007Publication date: July 3, 2008Applicant: KABUSHIKI KAISHA TOSHIBAInventor: Takuji MATSUDA
-
Publication number: 20080151111Abstract: A broadcast receiving apparatus and a method for storing open caption information in the broadcast receiving apparatus are provided. The broadcast receiving apparatus includes a determiner which determines whether open caption information is displayed on a screen; a detector which detects the open caption information, if it is determined that the open caption information is displayed; and a storage unit which stores the detected open caption information. The open caption information may thus be stored and redisplayed, enhancing user convenience.Type: ApplicationFiled: June 6, 2007Publication date: June 26, 2008Applicant: Samsung Electronics Co., Ltd.Inventor: Su-won SHIN
-
Patent number: 7391470Abstract: An apparatus and a method for providing caption information effectively provide information on the status of a caption service for a broadcast signal depending on a user's selection, by checking the caption service status of the broadcast signal at a predetermined time interval and storing the caption service status information obtained through the check. The apparatus includes a caption information collecting unit for checking the status of a caption service for a broadcast signal at a predetermined time interval and collecting information on the caption service status, and a caption information processing unit for displaying the collected caption service status information on a display unit.Type: GrantFiled: July 14, 2004Date of Patent: June 24, 2008Assignee: Samsung Electronics Co., Ltd.Inventor: Kwang-won Kim
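The collecting unit's behavior — probe the service status on a fixed interval and retain the results for later display — can be sketched as a small poller. The class and method names, and the idea of a pluggable probe callable, are assumptions for illustration.

```python
import time


class CaptionInfoCollector:
    """Poll a caption-service status probe at a fixed interval and keep the
    collected results, mirroring the caption information collecting unit."""

    def __init__(self, probe, interval_s):
        self.probe = probe          # callable returning the current status
        self.interval_s = interval_s
        self.history = []

    def run(self, checks):
        """Perform a fixed number of status checks, storing each result."""
        for _ in range(checks):
            self.history.append(self.probe())
            time.sleep(self.interval_s)

    def latest(self):
        """Most recent status, for the processing unit to display."""
        return self.history[-1] if self.history else None
```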
-
Publication number: 20080129865Abstract: A system and method for rapid subtitling and for alignment of various types of data sequences is provided. In one embodiment, the system includes an input module adapted to receive parameter values from a user, a computer readable memory adapted to store the parameters in a manner so that the stored parameters relate at least one event to at least one data sequence, and an analysis module adapted to extract at least one feature from the data sequence and to adjust the parameters based on the at least one feature extracted from the data sequence. In an alternate embodiment, the system treats user-supplied times as a priori data and adjusts those times using extracted features from concurrent and previously-analyzed data streams.Type: ApplicationFiled: November 5, 2007Publication date: June 5, 2008Inventor: Sean Joseph Leonard
-
Publication number: 20080129866Abstract: A caption detection device including: a delay unit which delays a current-frame image to output a previous-frame image; a current feature detection unit which receives the current-frame image to detect a caption feature in each region; a previous feature detection unit which receives the previous-frame image from the delay unit to detect a caption feature in each region; a caption emergence region detection unit which detects a region where the caption emerges based on a temporal change between the feature in each region of the current-frame image and the feature in each region of the previous-frame image; and a caption disappearance region detection unit which detects a region where the caption disappears based on the temporal change between the feature in each region of the current-frame image from the current feature detection unit and the feature in each region of the previous-frame image.Type: ApplicationFiled: November 28, 2007Publication date: June 5, 2008Applicant: KABUSHIKI KAISHA TOSHIBAInventor: Himio Yamauchi
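The emergence/disappearance logic — a per-region temporal change between previous-frame and current-frame caption features — can be sketched as below. The dict representation, the 0..1 feature strengths, and the threshold are illustrative, not taken from the patent.

```python
def detect_caption_changes(prev_features, curr_features, threshold=0.5):
    """Compare per-region caption-feature strengths between the previous and
    current frames, reporting regions where a caption emerges or disappears.

    Features are given as {region: strength} dicts.
    """
    emerged, disappeared = [], []
    for region, curr in curr_features.items():
        prev = prev_features.get(region, 0.0)
        if curr - prev > threshold:
            emerged.append(region)       # caption appeared in this region
        elif prev - curr > threshold:
            disappeared.append(region)   # caption vanished from this region
    return emerged, disappeared
```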
-
Publication number: 20080129864Abstract: A multimedia server distributes closed captioning over a network to a client device running a media player that does not support standardized closed captioning. The multimedia server receives a media stream including closed captioning that is encoded according to a closed captioning standard such as Consumer Electronics Association CEA-608-B or CEA 708-B, Advanced Television Systems Committee ATSC A/53 or the Society of Cable Telecommunications Engineers SCTE 20 and/or SCTE 21. The multimedia server transcodes the closed captioning into a format that is usable by the media player and transmits the transcoded closed captioning to the client device over the network so that the media player can render the closed captioning synchronously with programming content included in the media stream.Type: ApplicationFiled: December 1, 2006Publication date: June 5, 2008Applicant: GENERAL INSTRUMENT CORPORATIONInventors: Christopher J. Stone, Albert Fitzgerald Elcock, Patrick J. Leary, Jeffrey M. Newdeck
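The transcoding step can be sketched once the standardized captions have already been decoded into timed cues. Decoding the CEA/SCTE bitstream itself is out of scope here, and the choice of a WebVTT-style output is this sketch's assumption, not the patent's — the patent only requires "a format that is usable by the media player."

```python
def transcode_captions(cues):
    """Re-emit decoded caption cues as WebVTT-style text that a generic
    media player can render.

    Each cue is a (start_seconds, end_seconds, text) tuple.
    """
    def ts(seconds):
        # Render seconds as HH:MM:SS.mmm, the WebVTT timestamp shape.
        h, rem = divmod(int(seconds * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{ts(start)} --> {ts(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)
```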
-
Publication number: 20080129867Abstract: This invention relates to the field of synchronizing overlay video signals with a television video signal feed, in order to allow the overlay video signals to be mixed with the television video signal feed at the proper time intervals. The synchronization is performed by using a synchronization signal which is encoded into the closed caption line of the television video signal feed.Type: ApplicationFiled: November 13, 2007Publication date: June 5, 2008Inventors: Benjamin Montua, Armin Barbalata
-
Patent number: 7376338Abstract: An information storage medium containing multi-language markup document information, and an apparatus for and a method of reproducing the information storage medium, are provided. The medium includes audio/video (AV) data; multiple markup documents which contain text information to be displayed, in a selected language, with a video picture decoded and reproduced from the AV data; and multi-language markup document information for designating one of the multiple markup documents as the markup document in the selected language. Accordingly, the information storage medium allows the reproducing apparatus to display the text information included in the markup document in the interactive mode in respective multiple languages.Type: GrantFiled: June 10, 2002Date of Patent: May 20, 2008Assignee: Samsung Electronics Co., Ltd.Inventors: Byung-jun Kim, Jung-wan Ko, Hyun-kwon Chung, Bong-gil Bak
-
Patent number: 7369824Abstract: A receiver contains a demodulator system that can deliver a stream of digital data comprising at least a first and a second set of audio data. In a first time interval, a memory is used to store either the first or the second set of audio data. In a subsequent time interval, the demodulator system delivers another stream of digital data. A user can select either the stored audio data or the another stream of digital data. The selected data is converted to analog audio signals.Type: GrantFiled: June 3, 2005Date of Patent: May 6, 2008Inventor: Hark C. Chan
-
Patent number: 7369180Abstract: A method for processing auxiliary information, such as closed caption or teletext data, in a video system enables an increased number of characters to be displayed per line. According to an exemplary embodiment, a video system (100) includes a tuner (10) operative to receive a video signal including auxiliary information representative of a first number of characters to be displayed per line. A memory (13) is operative to store display list data representative of the received auxiliary information. A controller (11) is operative to retrieve the stored display list data in accordance with a format representative of a second number of characters per line, the second number being less than the first number.Type: GrantFiled: January 21, 2003Date of Patent: May 6, 2008Assignee: Thomson LicensingInventor: Mike Xing
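The reformatting the abstract describes — a received line of a first character count redisplayed at a smaller second character count — amounts to reflowing text. A minimal sketch, assuming splitting at word boundaries is acceptable (the patent's display-list mechanism is not modeled):

```python
def reflow_caption(text, max_chars):
    """Break received caption text into display lines of at most max_chars
    characters, splitting at word boundaries (single words are assumed to
    fit within max_chars)."""
    lines, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars or not current:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```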
-
Publication number: 20080088735Abstract: A Social Media Platform and method are provided wherein contextual content, in real-time, is delivered to a user along with the original content from which the contextual content is derived.Type: ApplicationFiled: September 29, 2006Publication date: April 17, 2008Inventors: Bryan Biniak, Brock Meltzer, Ata Ivanov