Patents Examined by Daquan Zhao
  • Patent number: 11929101
    Abstract: In order to provide a recording and reproducing device that allows a user to select and manage arbitrary play lists, a unit of management for managing all registered play list information and an upper management hierarchical level are added. The unit of management is adapted to be handled on the same level with unified information that indicates a reproduction range of all AV data. User-defined unified information is adapted to be handled on the added management hierarchical level. The user-defined unified information is formed to allow arbitrary reproduction ranges contained on a lower hierarchical level to be registered.
    Type: Grant
    Filed: September 11, 2023
    Date of Patent: March 12, 2024
    Assignee: Maxell, Ltd.
    Inventors: Susumu Yoshida, Junji Shiokawa, Hiroo Okamoto
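    Sketch: a minimal Python illustration, not the patented data format, of the two-level management idea above: an upper level holds both a unified "all AV data" entry and user-defined unified entries, each registering arbitrary reproduction ranges defined at the lower level. All class and field names here are invented.
```python
# Illustrative sketch of the two-level play list management (names invented).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReproductionRange:
    """A reproduction range inside recorded AV data (lower hierarchical level)."""
    stream_id: str
    start_sec: float
    end_sec: float

@dataclass
class UnifiedInfo:
    """An entry on the upper management level ('all AV data' or user-defined)."""
    name: str
    ranges: List[ReproductionRange] = field(default_factory=list)
    user_defined: bool = False

class PlayListManager:
    def __init__(self) -> None:
        # The unified "all AV data" entry and user-defined unified entries are
        # handled on the same upper management level.
        self.upper_level: List[UnifiedInfo] = [UnifiedInfo(name="ALL_AV_DATA")]

    def register_recording(self, rng: ReproductionRange) -> None:
        self.upper_level[0].ranges.append(rng)

    def create_user_list(self, name: str, ranges: List[ReproductionRange]) -> UnifiedInfo:
        # User-defined unified information registers arbitrary lower-level ranges.
        info = UnifiedInfo(name=name, ranges=list(ranges), user_defined=True)
        self.upper_level.append(info)
        return info

mgr = PlayListManager()
mgr.register_recording(ReproductionRange("title-1", 0.0, 3600.0))
mgr.create_user_list("Favorites", [ReproductionRange("title-1", 120.0, 300.0)])
print([info.name for info in mgr.upper_level])
```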
  • Patent number: 11929102
    Abstract: A decoding system decodes a video stream, which is encoded video information. The decoding system includes a decoder that acquires the video stream and generates decoded video information, a maximum luminance information acquirer that acquires maximum luminance information indicating the maximum luminance of the video stream from the video stream, and an outputter that outputs the decoded video information along with the maximum luminance information. In a case where the video stream includes a base video stream and an enhanced video stream, the decoder generates base video information by decoding the base video stream, generates enhanced video information by decoding the enhanced video stream, and generates the decoded video information based on the base video information and the enhanced video information, and the outputter outputs the decoded video information along with the maximum luminance information.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: March 12, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Hiroshi Yahata, Tadamasa Toma, Tomoki Ogawa
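    Sketch: a hedged Python outline of the decoding flow described above, assuming a simple dictionary container and stub decoders rather than the actual bitstream syntax: the maximum luminance metadata is read from the stream, base and (if present) enhanced video are decoded and combined, and the decoded video information is output together with the luminance value.
```python
# Hedged sketch: dictionary container and stub decoders stand in for real streams.
from typing import List, Optional, Tuple

def decode_base(payload: bytes) -> List[int]:
    # Placeholder for a real video decoder; "frames" are plain ints here.
    return list(payload)

def decode_enhanced(payload: bytes) -> List[int]:
    return [b * 2 for b in payload]

def decode_stream(stream: dict) -> Tuple[List[int], Optional[int]]:
    """Return (decoded video information, maximum luminance in cd/m^2)."""
    max_luminance = stream.get("max_luminance_nits")   # metadata acquired from the stream
    base = decode_base(stream["base"])
    if "enhanced" in stream:
        enhanced = decode_enhanced(stream["enhanced"])
        # Combine base and enhanced information into the decoded video information.
        decoded = [b + e for b, e in zip(base, enhanced)]
    else:
        decoded = base
    return decoded, max_luminance

frames, nits = decode_stream({"base": b"\x01\x02", "enhanced": b"\x03\x04",
                              "max_luminance_nits": 1000})
print(frames, nits)   # decoded video output along with the maximum luminance
```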
  • Patent number: 11924498
    Abstract: A facility for transferring configuration information to a target media device is described. The facility receives in the target media device a copy of media device settings stored in a source media device distinct from the target media device, in a first form in which they are used in the source media device. This copy of media device settings is received by the target media device via a route other than its visual user interface. The facility causes the received copy of media device settings to be transformed into a second form in which they can be used in the target media device. The facility then stores the media device settings in the second form in the target media device for use by the target media device.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: March 5, 2024
    Assignee: DISH Network L.L.C.
    Inventors: Alan Terry Pattison, Robert Sadler
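    Sketch: an illustrative Python example, with invented setting names, of the transfer idea above: settings arrive in the first form used by the source device, are transformed into a second form the target device understands, and are then stored for use by the target device.
```python
# Illustrative transfer of settings from a source form to a target form (names invented).
import json

SOURCE_SETTINGS_JSON = json.dumps({        # copy received via a non-UI route
    "fav_channels": [5, 12, 104],
    "cc_enabled": 1,
    "output_res": "1080i",
})

def transform_settings(source_form: dict) -> dict:
    """Transform settings from the source device's form into the target's form."""
    return {
        "favorites": source_form.get("fav_channels", []),
        "closed_captions": bool(source_form.get("cc_enabled", 0)),
        "resolution": {"1080i": (1920, 1080), "720p": (1280, 720)}.get(
            source_form.get("output_res"), (1920, 1080)),
    }

target_store: dict = {}

def receive_and_store(raw: str) -> None:
    source_form = json.loads(raw)                         # first form
    target_store.update(transform_settings(source_form))  # stored in the second form

receive_and_store(SOURCE_SETTINGS_JSON)
print(target_store)
```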
  • Patent number: 11922322
    Abstract: Aspects of the present disclosure enable humanly-specified relationships to contribute to a mapping that enables compression of the output structure of a machine-learned model. An exponential model such as a maximum entropy model can leverage a machine-learned embedding and the mapping to produce a classification output. In such fashion, the feature discovery capabilities of machine-learned models (e.g., deep networks) can be synergistically combined with relationships developed based on human understanding of the structural nature of the problem to be solved, thereby enabling compression of model output structures without significant loss of accuracy. These compressed models provide improved applicability to “on device” or other resource-constrained scenarios.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: March 5, 2024
    Assignee: GOOGLE LLC
    Inventors: Mitchel Weintraub, Ananda Theertha Suresh, Ehsan Variani
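    Sketch: a small numpy example of the compression idea above. A humanly specified mapping M from output classes to a much smaller set of shared features lets an exponential (maximum-entropy style) model score every class from a machine-learned embedding while keeping the learned parameters in the compressed feature space. The shapes and the mapping itself are illustrative assumptions, not the patented implementation.
```python
# Toy numpy example; shapes and the class-to-feature mapping are assumptions.
import numpy as np

rng = np.random.default_rng(0)

d = 16           # embedding dimension from the machine-learned model
n_feats = 6      # compressed output features (humanly specified groupings)
n_classes = 100  # full output vocabulary

# Human-specified relationships: each class activates a few shared features.
M = (rng.random((n_classes, n_feats)) < 0.2).astype(float)

# Learned parameters live in the compressed space: d x n_feats, not d x n_classes.
W = rng.normal(size=(d, n_feats))

def class_probabilities(embedding: np.ndarray) -> np.ndarray:
    """Exponential model: p(y|x) proportional to exp(embedding . W . M[y])."""
    logits = M @ (W.T @ embedding)   # shape (n_classes,)
    logits -= logits.max()           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

probs = class_probabilities(rng.normal(size=d))
print(probs.shape, round(float(probs.sum()), 6))
```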
  • Patent number: 11922695
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: March 5, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
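    Sketch: a rough Python rendering of the clip-atom construction above, under simplifying assumptions: speech and scene boundaries (in seconds) are merged into atom boundaries, the atoms form the finest level, and coarser levels are built by repeatedly merging adjacent segments, so every level remains a complete, disjoint cover of the video.
```python
# Simplified clip-atom construction and bottom-up clustering (durations in seconds).
from typing import List, Tuple

Segment = Tuple[float, float]

def clip_atoms(duration: float, speech_bounds: List[float],
               scene_bounds: List[float]) -> List[Segment]:
    """Finest level: boundaries from speech and scenes define the clip atoms."""
    cuts = sorted({0.0, duration, *speech_bounds, *scene_bounds})
    return [(a, b) for a, b in zip(cuts, cuts[1:]) if b > a]

def coarsen(level: List[Segment]) -> List[Segment]:
    """Merge the adjacent pair whose combined duration is smallest."""
    if len(level) < 2:
        return level
    i = min(range(len(level) - 1),
            key=lambda k: (level[k][1] - level[k][0]) + (level[k + 1][1] - level[k + 1][0]))
    return level[:i] + [(level[i][0], level[i + 1][1])] + level[i + 2:]

def hierarchy(duration: float, speech: List[float], scene: List[float]) -> List[List[Segment]]:
    levels = [clip_atoms(duration, speech, scene)]
    while len(levels[-1]) > 1:
        levels.append(coarsen(levels[-1]))   # each level stays complete and disjoint
    return levels

for level in hierarchy(60.0, speech=[4.2, 17.8, 41.0], scene=[10.0, 30.0]):
    print(level)
```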
  • Patent number: 11917324
    Abstract: A method for detecting cheating by a first user engaging in an examination activity in an extended reality (XR) environment includes creating an audiovisual recording while the first user is engaged in the examination activity in the XR environment and displaying an interactive version of the recording to a second user to review for possible cheating by the first user during the examination activity.
    Type: Grant
    Filed: September 11, 2023
    Date of Patent: February 27, 2024
    Assignee: VR-EDU, Inc.
    Inventor: Ethan Fieldman
  • Patent number: 11912201
    Abstract: A vehicular exterior rearview mirror assembly includes a dual mode illumination module having a first light emitting diode (LED) operable to emit visible white light when electrically powered and a second LED operable to emit non-visible light when electrically powered. The first LED, when the vehicle is parked and the first LED is electrically powered, emits visible white light to provide visible ground illumination at a ground region at least partially along the side portion of the vehicle. Visible ground illumination by the vehicular exterior rearview mirror assembly at the ground region is locked-out during movement of the vehicle. The second LED, when the second LED is electrically powered, emits non-visible light to provide non-visible illumination for a camera viewing exterior and at least sideward of the vehicle. Non-visible light emission by the second LED is not locked-out during movement of the vehicle.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: February 27, 2024
    Assignee: Magna Mirrors of America, Inc.
    Inventors: Gregory A. Huizen, Eric Peterson
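    Sketch: a small truth-table style Python function capturing the lock-out behavior described above, with invented signal names: the visible white LED for ground illumination is blocked while the vehicle is moving, whereas the non-visible LED that supports the side camera is never locked out.
```python
# Lock-out logic sketch; signal names are illustrative.
from dataclasses import dataclass

@dataclass
class MirrorLightCommand:
    white_led_on: bool   # visible ground illumination
    ir_led_on: bool      # non-visible illumination for the side camera

def control_mirror_leds(parked: bool, vehicle_moving: bool,
                        ground_light_requested: bool,
                        camera_needs_ir: bool) -> MirrorLightCommand:
    # Visible ground illumination is locked out during movement of the vehicle.
    white = ground_light_requested and parked and not vehicle_moving
    # Non-visible emission for the camera is not locked out during movement.
    ir = camera_needs_ir
    return MirrorLightCommand(white_led_on=white, ir_led_on=ir)

print(control_mirror_leds(parked=True, vehicle_moving=False,
                          ground_light_requested=True, camera_needs_ir=True))
print(control_mirror_leds(parked=False, vehicle_moving=True,
                          ground_light_requested=True, camera_needs_ir=True))
```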
  • Patent number: 11908492
    Abstract: A decoding method of a video stream, which is encoded video information. The video stream includes a base video stream, and an enhanced video stream, which is a video stream to enhance luminance of the base video stream, and a describer that contains the combination information of the base video stream and the enhanced video stream. The decoding method includes acquiring the describer, and identifying the combination of the base video stream and the enhanced video stream. The decoding method also includes generating base video information by decoding the base video stream, generating enhanced video information by decoding the enhanced video stream, generating decoded video information based on the base video information and the enhanced video information, and outputting the decoded video information.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: February 20, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Hiroshi Yahata, Tadamasa Toma, Tomoki Ogawa
  • Patent number: 11900827
    Abstract: Methods, apparatus, systems, computing devices, computing entities, and/or the like for identifying one or more visual impairments of a user, mapping the visual impairments to one or more accessibility solutions (e.g., program code entries), and dynamically modifying a display presentation based at least in part on the identified accessibility solutions.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: February 13, 2024
    Assignee: Optum Services (Ireland) Limited
    Inventors: Sarah Noreen O'Reilly, Michael Keane, Gareth M. Crossan, Patrick G. Mooney
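    Sketch: an illustrative Python mapping, with invented impairment names and display settings, of the core step above: identified visual impairments are looked up in a table of accessibility solutions and the resulting changes are applied to the display presentation.
```python
# Invented impairment names and display settings, for illustration only.
BASE_DISPLAY = {"font_scale": 1.0, "contrast": "normal", "animations": True}

ACCESSIBILITY_SOLUTIONS = {
    "low_vision": {"font_scale": 1.6},
    "low_contrast_sensitivity": {"contrast": "high"},
    "photosensitivity": {"animations": False},
}

def modify_display(impairments: list) -> dict:
    """Map identified impairments to solutions and modify the display presentation."""
    display = dict(BASE_DISPLAY)
    for impairment in impairments:
        display.update(ACCESSIBILITY_SOLUTIONS.get(impairment, {}))
    return display

print(modify_display(["low_vision", "photosensitivity"]))
```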
  • Patent number: 11901429
    Abstract: Systems and methods are presented herein that facilitate temporally synchronizing, in real time, a separately sourced high quality audio segment of a live event with a video segment that is generated by a recording device associated with a member of the audience. An A-V Synchronization Application may synchronize a video segment of a live event that is generated from a personal electronic device of an audience member with a high quality audio segment that is separately sourced and generated by professional sound recording equipment at the live event. The result of the temporal synchronization is a high fidelity digital audio visual recording of the live event. In various examples, the audience member may stream, in real time, the high fidelity digital audio visual recording to an additional electronic device at a different geo-location. In some examples, narrative audio segments may also be included as part of the high fidelity digital audio visual recording.
    Type: Grant
    Filed: August 17, 2022
    Date of Patent: February 13, 2024
    Assignee: BYGGE TECHNOLOGIES INC.
    Inventors: Neil C. Marck, Anthony Sharick
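    Sketch: a hedged Python example of the temporal-alignment step, assuming short in-memory arrays rather than live streams: the audio captured by the audience member's device is cross-correlated with the separately sourced high quality feed to estimate the offset, which is then used to line the professional audio up with the device's video timeline.
```python
# Offset estimation by cross-correlation; white noise stands in for real audio.
import numpy as np

def estimate_offset(device_audio: np.ndarray, pro_audio: np.ndarray, rate: int) -> float:
    """Seconds by which pro_audio must be delayed to line up with device_audio."""
    corr = np.correlate(device_audio - device_audio.mean(),
                        pro_audio - pro_audio.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(pro_audio) - 1)
    return lag / rate

rng = np.random.default_rng(0)
rate = 1000
pro = rng.normal(size=2 * rate)                                  # high quality feed
device = np.roll(pro, 150) + 0.3 * rng.normal(size=pro.shape)    # delayed, noisy device copy

offset = estimate_offset(device, pro, rate)
print(f"delay professional audio by ~{offset:.3f} s to match the device recording")
```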
  • Patent number: 11890063
    Abstract: Systems and methods for creating images of an environment include controlling at least one camera to acquire imaging data from the environment and selecting, from the imaging data, a three-dimensional-two-dimensional correspondence as a control point for use in a perspective-n-point problem to determine a position and orientation of the at least one camera from n known correspondences between three-dimensional object points and their two-dimensional image projections in the environment. The method also includes reprojecting at least a selected number of the projections, determining a reprojection error for each of the projections, and performing a weight assignment of reprojection errors to distinguish the inliers from outliers. These steps of the method are repeated to apply the weight assignment to outliers in a decreasing fashion during iterations to reduce the impact of outliers in the real-time display of the environment.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: February 6, 2024
    Assignee: The Brigham and Women's Hospital, Inc.
    Inventors: Jayender Jagadeesan, Haoyin Zhou
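    Sketch: this is not a full perspective-n-point solver; it only illustrates the reweighting loop from the abstract above on a toy problem (estimating a 2D image-plane translation from point correspondences). Reprojection errors are recomputed each iteration and outliers are down-weighted progressively, so their impact on the estimate shrinks across iterations.
```python
# Toy reweighting loop (not a full PnP solver); estimates a 2D shift despite outliers.
import numpy as np

rng = np.random.default_rng(1)

true_shift = np.array([3.0, -2.0])
points = rng.uniform(0, 100, size=(40, 2))                 # "projections"
observed = points + true_shift + rng.normal(0, 0.05, size=points.shape)
observed[:6] += rng.uniform(20, 40, size=(6, 2))           # gross outliers

shift = np.zeros(2)
sigma = 10.0
for iteration in range(8):
    residuals = observed - (points + shift)                # per-point reprojection error
    err = np.linalg.norm(residuals, axis=1)
    weights = 1.0 / (1.0 + (err / sigma) ** 2)             # outliers get small weights
    shift = shift + (weights[:, None] * residuals).sum(0) / weights.sum()
    sigma *= 0.5                                           # decrease outlier impact per iteration

print("estimated shift:", np.round(shift, 3), "true shift:", true_shift)
```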
  • Patent number: 11893054
    Abstract: A multimedia information processing method, apparatus, electronic device, and medium are provided. The method includes: receiving a user's selection operation for any piece of multimedia information among multimedia information to be processed, the multimedia information to be processed comprising at least two pieces of multimedia information, where the selected piece may be any piece except the last one; determining a target multimedia information piece on the basis of the selection operation; upon receiving a trigger operation for the target multimedia information piece, determining a corresponding processing method; and, on the basis of the determined processing method, correspondingly processing the target multimedia information piece. Thus, the complexity of processing multimedia information is reduced and efficiency is improved, thereby improving the user experience.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: February 6, 2024
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Zhaoqin Lin, Wei Jiang, Qifan Zheng, Chen Shen
  • Patent number: 11893794
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
  • Patent number: 11889183
    Abstract: Methods, systems, and devices are disclosed for using drone imaging to capture images. Capture instructions are provided to a drone device to aid in image capture related to events. Events may be defined by characteristics such as geographical boundary information, temporal boundary information, and participant information. In one aspect, capture instructions are determined based on subject faces appearing in images associated with a sharing pool associated with an event. In another aspect, capture instructions are determined based on factor-of-interest information and remuneration policy information. The factor-of-interest information identifies subjects of interest gathered from different user devices with corresponding weights. In another aspect, a drone may be assigned to one of a plurality of events based on event opportunity scores. The event opportunity scores may be determined from users associated with events, and the factors and weights associated with those users.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: January 30, 2024
    Assignee: Ikorongo Technology, LLC
    Inventor: Hugh Blake Svendsen
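    Sketch: a Python toy, with invented field names, of the event-assignment aspect above: each event's opportunity score is accumulated from the users associated with it, using their weighted factors of interest and a remuneration term, and the drone is assigned to the highest-scoring event.
```python
# Invented event/user records; scores drive the drone-to-event assignment.
from collections import defaultdict

users = [
    {"event": "soccer_final", "weighted_factors": {"goal": 0.9, "crowd": 0.3}, "remuneration": 5.0},
    {"event": "soccer_final", "weighted_factors": {"goal": 0.7}, "remuneration": 2.0},
    {"event": "music_fest", "weighted_factors": {"stage": 0.8, "crowd": 0.6}, "remuneration": 1.0},
]

def event_opportunity_scores(users: list) -> dict:
    """Accumulate each event's score from its users' weighted factors and remuneration."""
    scores: dict = defaultdict(float)
    for user in users:
        scores[user["event"]] += sum(user["weighted_factors"].values()) + user["remuneration"]
    return dict(scores)

scores = event_opportunity_scores(users)
print(scores, "-> assign drone to:", max(scores, key=scores.get))
```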
  • Patent number: 11887371
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
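    Sketch: a minimal dynamic-programming Python version of the shortest-path formulation above: candidate thumbnail locations (seconds along the timeline) are nodes, and an edge cost penalizes consecutive thumbnails that would sit closer than the minimum separation, plus deviation from a preferred spacing. The specific cost terms are assumptions chosen for illustration.
```python
# Shortest-path (dynamic programming) choice of thumbnail locations; costs are illustrative.
def thumbnail_segmentation(candidates, min_gap, target_gap):
    candidates = sorted(candidates)
    n = len(candidates)
    cost = [float("inf")] * n
    parent = [-1] * n
    cost[0] = 0.0
    for i in range(1, n):
        for j in range(i):
            gap = candidates[i] - candidates[j]
            edge = abs(gap - target_gap)           # prefer an even spacing
            if gap < min_gap:                      # closer than one thumbnail width
                edge += 100.0 * (min_gap - gap)    # heavy penalty for short separations
            if cost[j] + edge < cost[i]:
                cost[i] = cost[j] + edge
                parent[i] = j
    # Walk parents back from the last candidate to recover the chosen locations.
    path, i = [], n - 1
    while i != -1:
        path.append(candidates[i])
        i = parent[i]
    return path[::-1]

feature_boundaries = [0, 3, 4, 9, 10, 11, 18, 25, 26, 33, 40]   # seconds on the timeline
print(thumbnail_segmentation(feature_boundaries, min_gap=6, target_gap=8))
```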
  • Patent number: 11887629
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Patent number: 11887631
    Abstract: The present technology relates to an information processing device, an information processing method, and a program capable of providing video and sound in a synchronized state. An information processing device includes a determination unit that determines whether it is content in which sound is delayed with respect to video, and a processing unit that delays the video by a predetermined period and plays the video when the determination unit determines that it is the content in which the sound is delayed with respect to the video. The processing unit delays and plays the video so that the video when a sound source produces the sound is synchronized with the sound. The predetermined period corresponds to a period by which the sound is delayed. The present technology can be applied to an information processing device that processes video.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: January 30, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Ryo Yokoyama, Takeshi Ogita
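    Sketch: a simple Python illustration of the behavior above: when the content is judged to be of the kind in which the sound arrives later than the picture, the video presentation timestamps are pushed back by that period so picture and sound play out together. Using sound-source distance to derive the delay is only one illustrative example of such a determination.
```python
# Delay the video by the period the sound lags; the distance-based estimate is illustrative.
SPEED_OF_SOUND_M_S = 343.0

def is_sound_delayed_content(sound_source_distance_m: float) -> bool:
    """Determination unit: treat content as 'sound delayed' beyond a small threshold."""
    return sound_source_distance_m / SPEED_OF_SOUND_M_S > 0.02   # more than ~20 ms

def schedule_video(video_pts_sec: list, sound_source_distance_m: float) -> list:
    delay = sound_source_distance_m / SPEED_OF_SOUND_M_S
    if is_sound_delayed_content(sound_source_distance_m):
        # Processing unit: delay the video by the predetermined period and play it.
        return [pts + delay for pts in video_pts_sec]
    return list(video_pts_sec)

print(schedule_video([0.0, 0.033, 0.066], sound_source_distance_m=17.0))
```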
  • Patent number: 11882268
    Abstract: A head-up display system includes a three-dimensional display device, an optical member, and an accelerometer. The three-dimensional display device includes a display panel, an optical element, and a controller. The display panel displays an image. The optical element defines a traveling direction of image light emitted from the display panel. The optical member reflects the image light from the three-dimensional display device toward a user's eye. The optical member is at a fixed position relative to the three-dimensional display device. The accelerometer detects acceleration of the three-dimensional display device. The controller controls a position of the image on the display panel based on the acceleration.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: January 23, 2024
    Assignee: KYOCERA CORPORATION
    Inventors: Kaoru Kusafuka, Sunao Hashimoto
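    Sketch: an illustrative Python control step for the correction described above: the controller reads the accelerometer attached to the three-dimensional display device and offsets the image position on the display panel to counter the measured acceleration. The proportional gain and axis conventions are assumptions made for the example.
```python
# Offset the image on the display panel against the measured acceleration (gain assumed).
from dataclasses import dataclass

@dataclass
class PanelImage:
    x_px: int
    y_px: int

GAIN_PX_PER_MS2 = 1.5   # pixels of correction per m/s^2 (illustrative value)

def correct_image_position(nominal: PanelImage, accel_x: float, accel_y: float) -> PanelImage:
    """Controller: shift the displayed image based on the detected acceleration."""
    return PanelImage(
        x_px=round(nominal.x_px - GAIN_PX_PER_MS2 * accel_x),
        y_px=round(nominal.y_px - GAIN_PX_PER_MS2 * accel_y),
    )

print(correct_image_position(PanelImage(640, 360), accel_x=2.0, accel_y=-0.5))
```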
  • Patent number: 11875568
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
  • Patent number: 11875484
    Abstract: The present disclosure relates to a method of removing, by an image processing device, remanence artifacts from an image (fn) of a sequence of images captured by an infrared imaging device, the method comprising: generating a remanence measure for at least some pixels in the image (fn) based on a difference between the pixel values of the image (fn) and the pixel values of a previous image (fn-1) of the sequence; and removing remanence artifacts from at least some pixels of the image (fn) based on a remanence estimation for each of the at least some pixels, each remanence estimation being generated based on the remanence measure and on one or more previous remanence estimations of the at least some pixels and on a model of the exponential decay of the remanence.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: January 16, 2024
    Assignee: LYNRED
    Inventors: Alain Durand, Olivier Harant
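    Sketch: a numpy outline, not LYNRED's actual algorithm, of the recursion the abstract describes: a per-pixel remanence measure is taken from the difference between the current and previous frames, folded into a running remanence estimation together with an exponential decay model, and subtracted from the frame. The decay constant and the way the measure feeds the estimate are assumptions.
```python
# Heuristic recursion with an assumed decay constant; not the patented method.
import numpy as np

DECAY = 0.8   # assumed exponential decay factor of the remanence between frames

def remove_remanence(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) float array from the IR imager; returns corrected frames."""
    corrected = np.empty_like(frames)
    estimation = np.zeros_like(frames[0])
    corrected[0] = frames[0]
    for n in range(1, len(frames)):
        measure = frames[n] - frames[n - 1]                             # remanence measure
        estimation = DECAY * estimation + np.clip(-measure, 0.0, None)  # decaying estimation
        corrected[n] = frames[n] - estimation                           # remove the artifact
    return corrected

T, H, W = 6, 4, 4
frames = np.full((T, H, W), 100.0)
frames[0, 1, 1] = 400.0                       # a hot source that leaves a decaying trail
for n in range(1, T):
    frames[n, 1, 1] += 300.0 * DECAY ** n
print(remove_remanence(frames)[:, 1, 1].round(1))
```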