Patents by Inventor Raymond Yeung
Raymond Yeung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240172309
Abstract: Systems and methods are provided for initiating first instructions to establish a communication session between a user device and a vehicle, wherein the first instructions include a wait time interval and the communication session is associated with a short-range wireless communication protocol. The systems and methods may determine that a number of unsuccessful attempts to establish the communication session exceeds a threshold value and, in response, generate second instructions to establish the communication session, wherein the second instructions include a modification to the wait time interval. The second instructions may be initiated to establish the communication session between the user device and the vehicle.
Type: Application
Filed: November 21, 2022
Publication date: May 23, 2024
Inventors: Fai Yeung, Leonid Kokhnovych, Zhenxiang Kui, Wei Kuai, Paul Rolfe, Raymond Chan
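The retry flow in this abstract can be sketched as a small loop. Everything below is an illustrative assumption rather than the claimed implementation: the function names, the attempt `threshold`, and the interval-doubling `modify` rule are all hypothetical stand-ins for the "second instructions" with a modified wait time interval.

```python
import time

def establish_session(connect, wait_interval, threshold, modify=lambda w: w * 2, max_total=10):
    """Sketch: attempt to open a communication session (e.g., over a
    short-range wireless protocol); once the number of unsuccessful
    attempts exceeds `threshold`, switch to 'second instructions' that
    use a modified wait time interval."""
    interval = wait_interval
    failures = 0
    for _ in range(max_total):
        if connect():
            return interval  # the interval in effect when the session opened
        failures += 1
        if failures > threshold:
            interval = modify(interval)  # modification to the wait interval
            failures = 0
        time.sleep(interval)
    raise TimeoutError("session could not be established")
```

With a connection stub that fails a few times, the session is eventually opened under the modified (here, doubled) interval.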
-
Patent number: 11010860
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
Type: Grant
Filed: October 8, 2019
Date of Patent: May 18, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
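The mapping step ("a best fit curve, or a reasonable approximation") can be illustrated with a least-squares fit of one tone-curve family onto another. The sigmoid source curve and the cubic Bernstein (Bézier-style) target below are hypothetical stand-ins for the two color volume transformation models, not the curves defined by any particular standard:

```python
def sigmoid_tone(x):
    # hypothetical source tone curve from the first model (an S-shaped sigmoid)
    return x * x / (x * x + (1 - x) * (1 - x))

def fit_cubic_bernstein(curve, n=101):
    """Least-squares fit of a cubic Bernstein tone curve
    y = 3x(1-x)^2*k1 + 3x^2(1-x)*k2 + x^3 (endpoints pinned at 0 and 1)
    to samples of `curve`, returning (k1, k2) as the metadata values
    (anchors) for the second model.  Solved via 2x2 normal equations."""
    xs = [i / (n - 1) for i in range(n)]
    a = [3 * x * (1 - x) ** 2 for x in xs]
    b = [3 * x * x * (1 - x) for x in xs]
    r = [curve(x) - x ** 3 for x in xs]
    saa = sum(ai * ai for ai in a)
    sab = sum(ai * bi for ai, bi in zip(a, b))
    sbb = sum(bi * bi for bi in b)
    sar = sum(ai * ri for ai, ri in zip(a, r))
    sbr = sum(bi * ri for bi, ri in zip(b, r))
    det = saa * sbb - sab * sab
    k1 = (sar * sbb - sbr * sab) / det
    k2 = (saa * sbr - sab * sar) / det
    return k1, k2
```

A device configured for the second model would then render from `(k1, k2)` alone, never seeing the original sigmoid parameters.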
-
Patent number: 10944938
Abstract: Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture to restore the judder feeling that is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes base frame rate, judder control rate, and display parameters, and can be used to control judder for different applications.
Type: Grant
Filed: September 29, 2015
Date of Patent: March 9, 2021
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Ning Xu, James E. Crenshaw, Scott Daly, Samir N. Hulyalkar, Raymond Yeung
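One simple way to picture a judder control rate is as a blend between a smooth high-frame-rate sequence and its base-frame-rate (frame-held) version. This is a minimal sketch under that assumption; the function name and the linear blend are illustrative, and frames are scalars rather than images:

```python
def apply_judder(hfr_frames, base_rate, display_rate, control):
    """Illustrative judder control: blend a high-frame-rate sequence
    toward its base-frame-rate version.  `control` in [0, 1] scales the
    effect (0 = smooth HFR, 1 = full base-rate judder)."""
    step = display_rate // base_rate  # display frames per base-rate frame
    out = []
    for i, frame in enumerate(hfr_frames):
        held = hfr_frames[(i // step) * step]  # frame 'held' at the base rate
        out.append((1 - control) * frame + control * held)
    return out
```

In a real pipeline the blend could vary per region of the picture (judder introduced "locally within a picture") and per display, driven by the metadata fields named in the abstract.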
-
Publication number: 20200043125
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
Type: Application
Filed: October 8, 2019
Publication date: February 6, 2020
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
-
Patent number: 10553255
Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
Type: Grant
Filed: January 17, 2017
Date of Patent: February 4, 2020
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Robin Atkins, Raymond Yeung, Sheng Qu
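The core idea, replacing frame-by-frame metadata with one stable value per scene so the rendered picture does not flicker at scene-internal frames, can be sketched as below. The choice of per-frame metadata (max luminance) and of `max` as the scene-stabilizing function are illustrative assumptions:

```python
def scene_stable_metadata(frame_luma, scene_cuts):
    """Replace per-frame metadata (here: a luminance value per frame)
    with one stable value per scene.  `scene_cuts` lists the frame
    indices where a new scene begins."""
    boundaries = sorted(scene_cuts) + [len(frame_luma)]
    stable = []
    start = 0
    for end in boundaries:
        scene = frame_luma[start:end]
        if scene:
            value = max(scene)  # one representative value for the whole scene
            stable.extend([value] * len(scene))
        start = end
    return stable
```

Every frame inside a scene then carries identical metadata, so a downstream tone mapper cannot oscillate within the scene.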
-
Patent number: 10510134
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
Type: Grant
Filed: January 25, 2018
Date of Patent: December 17, 2019
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
-
Patent number: 10425192
Abstract: BATS protocols may be utilized for high-efficiency communication in networks with burst or dependent-type losses. Systematic recoding at intermediate network nodes may be utilized to reduce the computational cost during recoding. A block-interleaver-based BATS protocol may be utilized to handle burst loss, where batches are recoded to the same number of packets. Adaptive recoding may be utilized to improve the throughput, where a batch with a higher rank is recoded to a larger number of packets. Using adaptive recoding, a non-block-interleaver-based BATS protocol may be utilized.
Type: Grant
Filed: September 30, 2015
Date of Patent: September 24, 2019
Assignee: The Chinese University of Hong Kong
Inventors: Ho Fai Hoover Yin, Shenghao Yang, Wai-Ho Raymond Yeung
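The adaptive-recoding idea — a batch of higher rank gets recoded into more outgoing packets — can be sketched as follows. The rank-proportional allocation rule and the packet representation (ints treated as GF(2) bit vectors, recoded by XOR) are illustrative simplifications, not the allocation scheme claimed by the patent:

```python
import random

def adaptive_recode(batches, packets_per_batch):
    """Adaptive recoding sketch: each batch is a (packets, rank) pair.
    A batch with a higher rank (more linearly independent packets) is
    recoded into proportionally more outgoing packets; recoded packets
    are random GF(2) linear combinations (XOR) of the batch's packets."""
    avg_rank = sum(rank for _, rank in batches) / len(batches)
    out = []
    for packets, rank in batches:
        n_out = round(packets_per_batch * rank / avg_rank)
        recoded = []
        for _ in range(n_out):
            coeffs = [random.randint(0, 1) for _ in packets]
            pkt = 0
            for c, p in zip(coeffs, packets):
                if c:
                    pkt ^= p  # GF(2) linear combination
            recoded.append(pkt)
        out.append(recoded)
    return out
```

Because outgoing packet counts now differ per batch, a plain block interleaver no longer lines up, which is why the abstract pairs adaptive recoding with a non-block-interleaver-based protocol.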
-
Publication number: 20180160089
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
Type: Application
Filed: January 25, 2018
Publication date: June 7, 2018
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
-
Patent number: 9916638
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
Type: Grant
Filed: May 2, 2017
Date of Patent: March 13, 2018
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
-
Publication number: 20180025464
Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
Type: Application
Filed: May 2, 2017
Publication date: January 25, 2018
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
-
Publication number: 20170264861
Abstract: Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture to restore the judder feeling that is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes base frame rate, judder control rate, and display parameters, and can be used to control judder for different applications.
Type: Application
Filed: September 29, 2015
Publication date: September 14, 2017
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Ning Xu, James E. Crenshaw, Scott Daly, Samir N. Hulyalkar, Raymond Yeung
-
Publication number: 20170125063
Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
Type: Application
Filed: January 17, 2017
Publication date: May 4, 2017
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Robin Atkins, Raymond Yeung, Sheng Qu
-
Publication number: 20170093666
Abstract: BATS protocols may be utilized for high-efficiency communication in networks with burst or dependent-type losses. Systematic recoding at intermediate network nodes may be utilized to reduce the computational cost during recoding. A block-interleaver-based BATS protocol may be utilized to handle burst loss, where batches are recoded to the same number of packets. Adaptive recoding may be utilized to improve the throughput, where a batch with a higher rank is recoded to a larger number of packets. Using adaptive recoding, a non-block-interleaver-based BATS protocol may be utilized.
Type: Application
Filed: September 30, 2015
Publication date: March 30, 2017
Inventors: Ho Fai Hoover Yin, Shenghao Yang, Wai-Ho Raymond Yeung
-
Patent number: 9607658
Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
Type: Grant
Filed: July 28, 2014
Date of Patent: March 28, 2017
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Robin Atkins, Raymond Yeung, Sheng Qu
-
Publication number: 20160254028
Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance). The first set of metadata may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that may be different from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
Type: Application
Filed: July 28, 2014
Publication date: September 1, 2016
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Robin Atkins, Raymond Yeung, Sheng Qu
-
Patent number: 9313470
Abstract: A method and system for determining correction information for the universal display of an original video sequence on a plurality of displays includes correcting an original video sequence using a first display, storing correction information related to the correction of the original video sequence using the first display, correcting the original video sequence using a different display, and determining and storing the differences between the correction information related to the correction of the original video sequence using the first display and using a particular, different display. Subsequently, and prior to the display of the original video sequence, the original video sequence is corrected using a combination of the stored correction information related to the correction of the original video sequence using the first display and the respective determined differences, if any, related to a particular, different display on which the original video sequence is now to be displayed.
Type: Grant
Filed: October 28, 2005
Date of Patent: April 12, 2016
Assignee: Thomson Licensing
Inventors: Pierre Ollivier, Joachim Zell, Raymond Yeung
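The scheme above amounts to storing one full correction for the first display plus per-display deltas, then combining the two at display time. This is a minimal sketch under that reading; the dict-of-parameters representation and the function names are hypothetical:

```python
def store_corrections(base_correction, display_corrections):
    """Keep the first display's full correction and, for every other
    display, only the per-parameter differences from it."""
    deltas = {
        name: {k: corr[k] - base_correction[k] for k in corr}
        for name, corr in display_corrections.items()
    }
    return base_correction, deltas

def correction_for(base, deltas, display):
    """Combine the first display's correction with the stored difference
    (if any) for the display the sequence will now appear on."""
    d = deltas.get(display, {})
    return {k: base[k] + d.get(k, 0) for k in base}
```

A display with no stored delta simply falls back to the first display's correction, matching the "if any" in the abstract.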
-
Patent number: 8532462
Abstract: A method, system, apparatus, article of manufacture, and computer program product provide the ability to non-destructively generate a file-based master. A domestic source (having domestic audio and video content) with textless content (having portions of the domestic source that are devoid of text) is obtained. A localized source (e.g., localized audio-video) based on the domestic source is received. The localized video is compared to the domestic source to determine differences. The localized video is bladed and realigned with the domestic source. Metadata (of the differences) is transposed onto the domestic source. Texted portions in the domestic source are obscured with corresponding portions of the textless content. Texted material (based on the localized video and texted portions) is created. The localized video content and the textless content are discarded.
Type: Grant
Filed: November 16, 2010
Date of Patent: September 10, 2013
Assignee: Twentieth Century Fox Film Corporation
Inventors: Arjun Ramamurthy, Geoffrey A. Bloder, Raymond Yeung
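The compare-then-patch steps in this abstract can be sketched as below: find the segments where the localized video differs from the domestic source (the texted inserts), record them as difference metadata, and obscure those segments with the textless content. Sequences are lists of frame identifiers here, a deliberate simplification of real video:

```python
def build_texted_master(domestic, textless, localized):
    """Sketch of a non-destructive conform: locate ranges where the
    localized video differs from the domestic source, then rebuild the
    master by substituting the textless content over those ranges."""
    diff_ranges = []
    start = None
    for i, (d, l) in enumerate(zip(domestic, localized)):
        if d != l and start is None:
            start = i                       # a texted segment begins
        elif d == l and start is not None:
            diff_ranges.append((start, i))  # the segment ends
            start = None
    if start is not None:
        diff_ranges.append((start, len(domestic)))
    master = list(domestic)                 # domestic source left untouched
    for s, e in diff_ranges:
        master[s:e] = textless[s:e]         # obscure the texted portion
    return master, diff_ranges              # master + metadata of differences
```

The returned `diff_ranges` plays the role of the difference metadata that is "transposed onto the domestic source"; once the master exists, the localized and textless inputs can be discarded.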
-
Publication number: 20110116764
Abstract: A method, system, apparatus, article of manufacture, and computer program product provide the ability to non-destructively generate a file-based master. A domestic source (having domestic audio and video content) with textless content (having portions of the domestic source that are devoid of text) is obtained. A localized source (e.g., localized audio-video) based on the domestic source is received. The localized video is compared to the domestic source to determine differences. The localized video is bladed and realigned with the domestic source. Metadata (of the differences) is transposed onto the domestic source. Texted portions in the domestic source are obscured with corresponding portions of the textless content. Texted material (based on the localized video and texted portions) is created. The localized video content and the textless content are discarded.
Type: Application
Filed: November 16, 2010
Publication date: May 19, 2011
Applicant: Twentieth Century Fox Film Corporation
Inventors: Arjun Ramamurthy, Geoffrey A. Bloder, Raymond Yeung
-
Patent number: 7869662
Abstract: A location system, a location system on a chip (LCoS), and an associated method are described.
Type: Grant
Filed: July 17, 2007
Date of Patent: January 11, 2011
Assignee: Agilent Technologies, Inc.
Inventors: Janet Yun, David C. Chu, Matthew D. Tenuta, Raymond Yeung
-
Publication number: 20090109344
Abstract: A method and system for determining correction information for the universal display of an original video sequence on a plurality of displays includes correcting an original video sequence using a first display, storing correction information related to the correction of the original video sequence using the first display, correcting the original video sequence using a different display, and determining and storing the differences between the correction information related to the correction of the original video sequence using the first display and using a particular, different display. Subsequently, and prior to the display of the original video sequence, the original video sequence is corrected using a combination of the stored correction information related to the correction of the original video sequence using the first display and the respective determined differences, if any, related to a particular, different display on which the original video sequence is now to be displayed.
Type: Application
Filed: October 28, 2005
Publication date: April 30, 2009
Inventors: Pierre Ollivier, Joachim Zell, Raymond Yeung