Patents by Inventor Raymond Yeung

Raymond Yeung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240172309
    Abstract: Systems and methods are provided for initiating first instructions to establish a communication session between a user device and a vehicle, wherein the first instructions include a wait time interval and the communication session is associated with a short-range wireless communication protocol. The systems and methods may determine that a number of unsuccessful attempts to establish the communication session exceeds a threshold value and, in response, generate second instructions to establish the communication session, wherein the second instructions include a modification to the wait time interval. The second instructions may be initiated to establish the communication session between the user device and the vehicle.
    Type: Application
    Filed: November 21, 2022
    Publication date: May 23, 2024
    Inventors: Fai Yeung, Leonid Kokhnovych, Zhenxiang Kui, Wei Kuai, Paul Rolfe, Raymond Chan
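The retry flow this abstract describes — attempt with a wait interval, then modify the interval once failures pass a threshold — can be sketched in Python. The function name, the backoff factor, and the choice of exponential scaling as the "modification" are illustrative assumptions, not the claimed method:

```python
def next_wait_interval(attempts: int, base_wait: float,
                       threshold: int, backoff: float = 2.0) -> float:
    """Return the wait interval for the next connection attempt.

    Up to `threshold` unsuccessful attempts, keep the original interval;
    past it, scale the interval (one plausible "modification" -- the
    abstract does not mandate a specific change).
    """
    if attempts <= threshold:
        return base_wait
    return base_wait * backoff ** (attempts - threshold)
```

Scaling the interval upward is only one plausible modification; the abstract leaves the form of the change open.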
  • Patent number: 11010860
    Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: May 18, 2021
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
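One way to realize the "best fit curve" mapping this abstract mentions — from a sampled sigmoidal tone curve to Bézier metadata — is an ordinary least-squares fit of the free control-point heights of a cubic Bézier anchored at (0, 0) and (1, 1). This is a minimal sketch under that assumption, not Dolby's algorithm; the toy sigmoid is illustrative:

```python
import math

def cubic_bezier_y(t, p1, p2):
    """y-value of a cubic Bezier tone curve anchored at (0, 0) and (1, 1),
    with free control-point heights p1 and p2 (the metadata values)."""
    return 3 * t * (1 - t) ** 2 * p1 + 3 * t ** 2 * (1 - t) * p2 + t ** 3

def fit_bezier(samples):
    """Least-squares fit of (p1, p2) to (t, y) samples of a source tone
    curve, via the 2x2 normal equations of the Bezier basis."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t, y in samples:
        b1 = 3 * t * (1 - t) ** 2
        b2 = 3 * t ** 2 * (1 - t)
        r = y - t ** 3          # residual after the fixed endpoint term
        s11 += b1 * b1; s12 += b1 * b2; s22 += b2 * b2
        r1 += b1 * r;   r2 += b2 * r
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

# Map a toy sigmoidal tone curve onto Bezier metadata values.
sigmoid = lambda x: 1 / (1 + math.exp(-6 * (x - 0.5)))
samples = [(i / 20, sigmoid(i / 20)) for i in range(21)]
p1, p2 = fit_bezier(samples)
```

When the samples already lie on a cubic Bézier, the fit recovers the control points exactly, which is the "reasonable approximation" degenerating to an exact mapping.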
  • Patent number: 10944938
    Abstract: Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture to restore the judder feel that is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes a base frame rate, a judder control rate, and display parameters, and can be used to control judder for different applications.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: March 9, 2021
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Ning Xu, James E. Crenshaw, Scott Daly, Samir N. Hulyalkar, Raymond Yeung
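A hypothetical container for the three metadata fields the abstract names, plus one toy way judder could be reintroduced locally: blend smooth presentation time with time quantized to the base frame rate, weighted by the judder control rate. The blending rule and field names are illustrative assumptions, not the patented method:

```python
from dataclasses import dataclass

@dataclass
class JudderMetadata:
    """Hypothetical container for the fields the abstract names."""
    base_frame_rate: float      # e.g. 24.0 -- cadence whose judder look is targeted
    judder_control_rate: float  # 0.0 = fully smooth, 1.0 = full film judder
    display_max_rate: float     # a display parameter: the panel's native rate

def display_time(t: float, meta: JudderMetadata) -> float:
    """Map a presentation time t (seconds) to a 'juddered' time by blending
    the smooth time with the time quantized to the base frame rate."""
    quantized = int(t * meta.base_frame_rate) / meta.base_frame_rate
    r = meta.judder_control_rate
    return (1 - r) * t + r * quantized
```

At rate 0 the output is untouched (high-frame-rate smoothness); at rate 1 motion snaps fully to the 24 fps cadence.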
  • Publication number: 20200043125
    Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
    Type: Application
    Filed: October 8, 2019
    Publication date: February 6, 2020
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Raymond YEUNG, Patrick GRIFFIS, Thaddeus BEIER, Robin ATKINS
  • Patent number: 10553255
    Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance), and may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that differs from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: February 4, 2020
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Robin Atkins, Raymond Yeung, Sheng Qu
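Clamping per-frame metadata toward a scene statistic is one plausible reading of "keep the desired feature within an acceptable range of values"; a minimal sketch under that assumption (the median and the symmetric tolerance band are illustrative choices):

```python
def scene_stable(per_frame, tolerance):
    """Stabilize per-frame metadata (e.g. luminance) across one scene:
    take the scene median and clamp each frame's value into
    [median - tolerance, median + tolerance]."""
    vals = sorted(per_frame)
    median = vals[len(vals) // 2]
    return [min(max(v, median - tolerance), median + tolerance)
            for v in per_frame]
```

A brief specular flash no longer yanks the scene's metadata, which is the kind of frame-to-frame instability that produces visible artifacts on rendering.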
  • Patent number: 10510134
    Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: December 17, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
  • Patent number: 10425192
    Abstract: BATS protocols may be utilized for high-efficiency communication in networks with burst or dependent-type losses. Systematic recoding at intermediate network nodes may be utilized to reduce the computational cost during recoding. A block-interleaver-based BATS protocol may be utilized to handle burst loss, where batches are recoded to the same number of packets. Adaptive recoding may be utilized to improve throughput, where a batch with a higher rank is recoded to a larger number of packets. Using adaptive recoding, a non-block-interleaver-based BATS protocol may be utilized.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: September 24, 2019
    Assignee: The Chinese University of Hong Kong
    Inventors: Ho Fai Hoover Yin, Shenghao Yang, Wai-Ho Raymond Yeung
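Adaptive recoding assigns more recoded packets to higher-rank batches under a total packet budget. A rank-proportional allocation with largest-remainder rounding is one simple way to sketch that idea; the BATS literature derives more refined allocations, so treat this as illustrative:

```python
def adaptive_recode_counts(ranks, total_packets):
    """Allocate recoded-packet counts across batches in proportion to each
    batch's rank (higher rank -> more packets). Largest-remainder rounding
    keeps the counts integral and summing to total_packets."""
    total_rank = sum(ranks)
    shares = [r * total_packets / total_rank for r in ranks]
    counts = [int(s) for s in shares]
    # Hand leftover packets to the largest fractional remainders.
    leftover = total_packets - sum(counts)
    order = sorted(range(len(ranks)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts
```

A batch whose rank has decayed in transit gets fewer recoded packets, freeing budget for batches that still carry more useful degrees of freedom.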
  • Publication number: 20180160089
    Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
    Type: Application
    Filed: January 25, 2018
    Publication date: June 7, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Raymond YEUNG, Patrick GRIFFIS, Thaddeus BEIER, Robin ATKINS
  • Patent number: 9916638
    Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: March 13, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Raymond Yeung, Patrick Griffis, Thaddeus Beier, Robin Atkins
  • Publication number: 20180025464
    Abstract: An existing metadata set that is specific to a color volume transformation model is transformed to a metadata set that is specific to a distinctly different color volume transformation model. For example, source content metadata for a first color volume transformation model is received. This source metadata determines a specific color volume transformation, such as a sigmoidal tone map curve. The specific color volume transformation is mapped to a color volume transformation of a second color volume transformation model, e.g., a Bézier tone map curve. Mapping can be a best fit curve, or a reasonable approximation. Mapping results in metadata values used for the second color volume transformation model (e.g., one or more Bézier curve knee points and anchors). Thus, devices configured for the second color volume transformation model can reasonably render source content according to received source content metadata of the first color volume transformation model.
    Type: Application
    Filed: May 2, 2017
    Publication date: January 25, 2018
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Raymond YEUNG, Patrick GRIFFIS, Thaddeus BEIER, Robin ATKINS
  • Publication number: 20170264861
    Abstract: Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture to restore the judder feel that is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes a base frame rate, a judder control rate, and display parameters, and can be used to control judder for different applications.
    Type: Application
    Filed: September 29, 2015
    Publication date: September 14, 2017
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Ning XU, James E. CRENSHAW, Scott DALY, Samir N. HULYALKAR, Raymond YEUNG
  • Publication number: 20170125063
    Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance), and may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that differs from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
    Type: Application
    Filed: January 17, 2017
    Publication date: May 4, 2017
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Robin ATKINS, Raymond YEUNG, Sheng QU
  • Publication number: 20170093666
    Abstract: BATS protocols may be utilized for high-efficiency communication in networks with burst or dependent-type losses. Systematic recoding at intermediate network nodes may be utilized to reduce the computational cost during recoding. A block-interleaver-based BATS protocol may be utilized to handle burst loss, where batches are recoded to the same number of packets. Adaptive recoding may be utilized to improve throughput, where a batch with a higher rank is recoded to a larger number of packets. Using adaptive recoding, a non-block-interleaver-based BATS protocol may be utilized.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 30, 2017
    Inventors: HO FAI HOOVER YIN, Shenghao Yang, Wai-Ho Raymond Yeung
  • Patent number: 9607658
    Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance), and may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that differs from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: March 28, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Robin Atkins, Raymond Yeung, Sheng Qu
  • Publication number: 20160254028
    Abstract: Methods and systems for generating and applying scene-stable metadata for a video data stream are disclosed herein. A video data stream is divided or partitioned into scenes, and a first set of metadata may be generated for a given scene of video data. The first set of metadata may be any known metadata as a desired function of video content (e.g., luminance), and may be generated on a frame-by-frame basis. In one example, scene-stable metadata may be generated that differs from the first set of metadata for the scene. The scene-stable metadata may be generated by monitoring a desired feature within the scene and may be used to keep the desired feature within an acceptable range of values. This may help to avoid noticeable and possibly objectionable visual artifacts upon rendering the video data.
    Type: Application
    Filed: July 28, 2014
    Publication date: September 1, 2016
    Applicant: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Robin ATKINS, Raymond YEUNG, Sheng QU
  • Patent number: 9313470
    Abstract: A method and system for determining correction information for the universal display of an original video sequence on a plurality of displays includes correcting an original video sequence using a first display, storing correction information related to the correction of the original video sequence using the first display, correcting the original video sequence using a different display, and determining and storing the differences between the correction information related to the correction of the original video sequence using the first display and using a particular, different display. Subsequently and prior to the display of the original video sequence, the original video sequence is corrected using a combination of the stored correction information related to the correction of the original video sequence using the first display and the respective determined differences, if any, related to a particular, different display on which the original video sequence is now to be displayed.
    Type: Grant
    Filed: October 28, 2005
    Date of Patent: April 12, 2016
    Assignee: Thomson Licensing
    Inventors: Pierre Ollivier, Joachim Zell, Raymond Yeung
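The storage scheme this abstract describes — a full correction for the reference display plus only the per-display differences — can be sketched with per-parameter dictionaries. The parameter names and the additive combination are illustrative assumptions:

```python
def delta_for(base_correction, display_correction):
    """Store only the differences between the reference display's
    correction and another display's correction, per parameter."""
    return {k: display_correction[k] - base_correction[k]
            for k in base_correction
            if display_correction[k] != base_correction[k]}

def correction_for(base_correction, delta):
    """Recombine: the reference correction plus the stored differences,
    if any, for the display now in use."""
    out = dict(base_correction)
    for k, d in delta.items():
        out[k] += d
    return out
```

A display that matches the reference stores an empty delta, so the original sequence is corrected with the base information alone.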
  • Patent number: 8532462
    Abstract: A method, system, apparatus, article of manufacture, and computer program product provide the ability to non-destructively generate a file-based master. A domestic source (having domestic audio and video content) with textless content (portions of the domestic source that are devoid of text) is obtained. A localized source (e.g., localized audio-video) based on the domestic source is received. The localized video is compared to the domestic source to determine differences. The localized video is bladed and realigned with the domestic source. Metadata (of the differences) is transposed onto the domestic source. Texted portions in the domestic source are obscured with corresponding portions of the textless content. Texted material (based on the localized video and texted portions) is created. The localized video content and the textless content are discarded.
    Type: Grant
    Filed: November 16, 2010
    Date of Patent: September 10, 2013
    Assignee: Twentieth Century Fox Film Corporation
    Inventors: Arjun Ramamurthy, Geoffrey A. Bloder, Raymond Yeung
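The substitution step — obscuring texted portions of the domestic source with the corresponding textless portions, without altering either source — can be sketched over frame lists as a toy model of the timeline. The range representation is an illustrative assumption:

```python
def build_textless_master(domestic, textless, texted_ranges):
    """Non-destructive sketch: copy the domestic frames, then for each
    texted range substitute the corresponding textless frames. The input
    lists are never modified."""
    master = list(domestic)
    for start, end in texted_ranges:           # half-open [start, end)
        master[start:end] = textless[start:end]
    return master
```

Because the output is a new list, the domestic source survives intact, which is the "non-destructive" property the abstract emphasizes.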
  • Publication number: 20110116764
    Abstract: A method, system, apparatus, article of manufacture, and computer program product provide the ability to non-destructively generate a file-based master. A domestic source (having domestic audio and video content) with textless content (portions of the domestic source that are devoid of text) is obtained. A localized source (e.g., localized audio-video) based on the domestic source is received. The localized video is compared to the domestic source to determine differences. The localized video is bladed and realigned with the domestic source. Metadata (of the differences) is transposed onto the domestic source. Texted portions in the domestic source are obscured with corresponding portions of the textless content. Texted material (based on the localized video and texted portions) is created. The localized video content and the textless content are discarded.
    Type: Application
    Filed: November 16, 2010
    Publication date: May 19, 2011
    Applicant: Twentieth Century Fox Film Corporation
    Inventors: Arjun Ramamurthy, Geoffrey A. Bloder, Raymond Yeung
  • Patent number: 7869662
    Abstract: A location system, a location system on a chip (LCoS), and an associated method are described.
    Type: Grant
    Filed: July 17, 2007
    Date of Patent: January 11, 2011
    Assignee: Agilent Technologies, Inc.
    Inventors: Janet Yun, David C. Chu, Matthew D. Tenuta, Raymond Yeung, Nhan T. Nguyen
  • Publication number: 20090109344
    Abstract: A method and system for determining correction information for the universal display of an original video sequence on a plurality of displays includes correcting an original video sequence using a first display, storing correction information related to the correction of the original video sequence using the first display, correcting the original video sequence using a different display, and determining and storing the differences between the correction information related to the correction of the original video sequence using the first display and using a particular, different display. Subsequently and prior to the display of the original video sequence, the original video sequence is corrected using a combination of the stored correction information related to the correction of the original video sequence using the first display and the respective determined differences, if any, related to a particular, different display on which the original video sequence is now to be displayed.
    Type: Application
    Filed: October 28, 2005
    Publication date: April 30, 2009
    Inventors: Pierre Ollivier, Joachim Zell, Raymond Yeung