Patents by Inventor Zhaowen Wang

Zhaowen Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12189413
    Abstract: Circuits and methods for multi-phase clock generators and phase interpolators are provided. The multi-phase clock generators include a delay line and multi-phase injection locked oscillator. At each stage of the multi-phase injection locked oscillator, injection currents are provided from a corresponding stage of the delay line. Outputs of the multi-phase injection locked oscillator are provided to mixers, which produce inputs to an operational transconductance amplifier that provides feedback to the delay line and the multi-phase injection locked oscillator. The phase interpolator uses a technique of flipping certain input clock signals to reduce the number of components required while still being able to interpolate phase over 360 degrees and to reduce noise.
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: January 7, 2025
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Zhaowen Wang, Yudong Zhang, Peter Kinget
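
As a rough illustration of the 360-degree interpolation idea mentioned in patent 12189413 above, the following numpy sketch blends an in-phase and a quadrature clock and flips their signs to select a quadrant. It is a behavioral toy, not the patented circuit; the quadrant mapping, frequency, and weighting are assumptions.

```python
import numpy as np

def interpolate_phase(t, freq, alpha, quadrant):
    """Blend an in-phase (I) and a quadrature (Q) clock to synthesize an
    intermediate phase; flipping the signs of the inputs selects the quadrant,
    so one 0-90 degree interpolator covers the full 360-degree range."""
    i_clk = np.cos(2 * np.pi * freq * t)   # 0-degree reference clock
    q_clk = np.sin(2 * np.pi * freq * t)   # 90-degree reference clock
    sign_i, sign_q = [(+1, +1), (-1, +1), (-1, -1), (+1, -1)][quadrant]
    # A weighted sum of the (possibly flipped) clocks interpolates the phase.
    return (1 - alpha) * sign_i * i_clk + alpha * sign_q * q_clk

t = np.linspace(0.0, 1e-9, 1000)                              # 1 ns observation window
clk = interpolate_phase(t, freq=5e9, alpha=0.5, quadrant=2)   # roughly mid-third-quadrant phase
```
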
  • Publication number: 20240419750
    Abstract: Digital content layout encoding techniques for search are described. In these techniques, a layout representation is generated (using machine learning automatically and without user intervention) that describes a layout of elements included within the digital content. In an implementation, the layout representation includes a description of both spatial and structural aspects of the elements in relation to each other. To do so, a two-pathway pipeline is configured to model layout from both spatial and structural aspects using a spatial pathway and a structural pathway, respectively. In one example, this is also performed through use of multi-level encoding and fusion to generate a layout representation.
    Type: Application
    Filed: September 2, 2024
    Publication date: December 19, 2024
    Applicant: Adobe Inc.
    Inventors: Zhaowen Wang, Yue Bai, John Philip Collomosse
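
The two-pathway layout encoding described in publication 20240419750 above can be pictured with a small PyTorch sketch: one pathway embeds element geometry, the other embeds structure, and the two are fused into a single layout representation. All module choices, dimensions, and the pooling step are assumptions, not details from the filing.

```python
import torch
import torch.nn as nn

class TwoPathwayLayoutEncoder(nn.Module):
    """Illustrative encoder: one pathway embeds element geometry (boxes),
    the other embeds structure (parent indices), and the two are fused."""
    def __init__(self, dim=128):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(4, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.structural = nn.Embedding(256, dim)   # embeds each element's parent slot
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, boxes, parent_ids):
        # boxes: (n_elements, 4) normalized x, y, w, h
        # parent_ids: (n_elements,) index of each element's parent container
        spatial = self.spatial(boxes)
        structural = self.structural(parent_ids)
        fused = self.fuse(torch.cat([spatial, structural], dim=-1))
        return fused.mean(dim=0)                   # pooled layout representation

encoder = TwoPathwayLayoutEncoder()
boxes = torch.rand(5, 4)
parents = torch.randint(0, 5, (5,))
layout_vec = encoder(boxes, parents)               # (128,) searchable embedding
```
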
  • Publication number: 20240404283
    Abstract: A method includes receiving a video input and a text transcription of the video input. The video input includes a plurality of frames and the text transcription includes a plurality of sentences. The method further includes determining, by a multimodal summarization model, a subset of key frames of the plurality of frames and a subset of key sentences of the plurality of sentences. The method further includes providing a summary of the video input and a summary of the text transcription based on the subset of key frames and the subset of key sentences.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Applicant: Adobe Inc.
    Inventors: Zhaowen WANG, Trung BUI, Bo HE
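
A minimal sketch of the key-frame / key-sentence selection described in publication 20240404283 above: a toy model scores each frame and each sentence and keeps the top-k of each. The scoring heads, feature dimensions, and top-k rule are assumptions for illustration.

```python
import torch
import torch.nn as nn

class KeySelector(nn.Module):
    """Toy multimodal selector: scores each frame and each sentence and keeps
    the top-k of each as the video / transcript summary."""
    def __init__(self, dim=64):
        super().__init__()
        self.frame_score = nn.Linear(dim, 1)
        self.sent_score = nn.Linear(dim, 1)

    def forward(self, frame_feats, sent_feats, k_frames=3, k_sents=2):
        f_scores = self.frame_score(frame_feats).squeeze(-1)
        s_scores = self.sent_score(sent_feats).squeeze(-1)
        key_frames = torch.topk(f_scores, k_frames).indices
        key_sents = torch.topk(s_scores, k_sents).indices
        return key_frames, key_sents

selector = KeySelector()
frames = torch.randn(100, 64)      # pre-extracted per-frame features
sents = torch.randn(20, 64)        # pre-extracted per-sentence features
kf, ks = selector(frames, sents)   # indices forming the two summaries
```
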
  • Publication number: 20240405882
    Abstract: Some examples described herein provide for controlling output modulation amplitude for optoelectronic devices. In an example, a method includes transmitting a data pattern to an optical modulator device. The method also includes identifying, for each heater control value of a plurality of heater control values for a heater thermally coupled with the optical modulator device, an optical modulation amplitude corresponding to the heater control value based on a corresponding photodiode current value identified while transmitting the data pattern. The method also includes determining a maximum optical modulation amplitude for the optical modulator device based on a plurality of optical modulation amplitudes corresponding to the plurality of heater control values according to the identifying. The method also includes controlling the heater based at least in part on the determined maximum optical modulation amplitude that has been modified according to scaling maximum photodiode current values.
    Type: Application
    Filed: June 5, 2023
    Publication date: December 5, 2024
    Inventors: Adebabay M. BEKELE, Mayank RAJ, Chuan XIE, Sandeep KUMAR, Zhaowen WANG, Sukruth PATTANAGIRI GIRIYAPPA, Parag UPADHYAYA, Yohan FRANS
  • Publication number: 20240396638
    Abstract: Some examples described herein provide for controlling output modulation amplitude for optoelectronic devices. In an example, a method includes transmitting a first data pattern to an optical modulator device. The method also includes determining, while transmitting the first data pattern and for each heater control value of a plurality of heater control values for a heater, a photodiode current value associated with the optical modulator device to generate a plurality of photodiode current values corresponding to the plurality of heater control values. The method also includes determining a maximum optical modulation amplitude for the optical modulator device based at least in part on the plurality of photodiode current values corresponding to the plurality of heater control values. The method also includes controlling the heater for the optical modulator device based on the maximum optical modulation amplitude.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 28, 2024
    Inventors: Adebabay M. BEKELE, Mayank RAJ, Chuan XIE, Sandeep KUMAR, Zhaowen WANG, Sukruth PATTANAGIRI GIRIYAPPA, Parag UPADHYAYA, Yohan FRANS
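
Publications 20240405882 and 20240396638 above both describe sweeping heater control values while a data pattern is transmitted and selecting the setting that maximizes optical modulation amplitude. The sketch below shows that search loop in schematic form; `set_heater` and `read_photodiode_swing` are hypothetical driver functions, and using the photodiode current swing as the amplitude proxy is an assumption.

```python
def find_best_heater_code(heater_codes, set_heater, read_photodiode_swing):
    """Sweep heater codes while a known data pattern is transmitted, measure
    the photodiode current swing at each code, and return the code giving the
    largest swing (a simplified proxy for maximum optical modulation amplitude)."""
    best_code, best_swing = None, float("-inf")
    for code in heater_codes:
        set_heater(code)                 # hypothetical heater DAC driver
        swing = read_photodiode_swing()  # hypothetical measurement accessor
        if swing > best_swing:
            best_code, best_swing = code, swing
    return best_code

# Example usage (drivers not shown):
# best = find_best_heater_code(range(256), set_heater, read_photodiode_swing)
# set_heater(best)
```
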
  • Publication number: 20240355119
    Abstract: One or more aspects of the method, apparatus, and non-transitory computer readable medium include receiving a query relating to a long video. One or more aspects of the method, apparatus, and non-transitory computer readable medium further include generating a segment of the long video corresponding to the query using a machine learning model trained to identify relevant segments from long videos. One or more aspects of the method, apparatus, and non-transitory computer readable medium further include responding to the query based on the generated segment.
    Type: Application
    Filed: April 24, 2023
    Publication date: October 24, 2024
    Inventors: Ioana Croitoru, Trung Huu Bui, Zhaowen Wang, Seunghyun Yoon, Franck Dernoncourt, Hailin Jin
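
A minimal sketch of the retrieve-then-answer idea in publication 20240355119 above: pick the video segment whose embedding is most similar to the query, then answer from that segment alone. The cosine-similarity retrieval and the `answer_fn` callable are assumptions standing in for the trained model.

```python
import torch

def answer_query_over_long_video(query_emb, segment_embs, segments, answer_fn):
    """Select the most query-relevant segment of a long video, then answer
    the query from that segment. `answer_fn` stands in for any downstream
    question-answering model and is purely illustrative."""
    sims = torch.nn.functional.cosine_similarity(query_emb.unsqueeze(0), segment_embs)
    best = int(sims.argmax())            # index of the most relevant segment
    return answer_fn(query_emb, segments[best])
```
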
  • Patent number: 12124439
    Abstract: Digital content search techniques are described that overcome the challenges found in conventional sequence-based techniques through use of a query-aware sequential search. In one example, a search query is received and sequence input data is obtained based on the search query. The sequence input data describes a sequence of digital content and respective search queries. Embedding data is generated based on the sequence input data using an embedding module of a machine-learning model. The embedding module includes a query-aware embedding layer that generates embeddings of the sequence of digital content and respective search queries. A search result is generated referencing at least one item of digital content by processing the embedding data using at least one layer of the machine-learning model.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: October 22, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zhe Lin, Zhaowen Wang, Zhankui He, Ajinkya Gorakhnath Kale
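
The query-aware embedding idea in patent 12124439 above can be sketched as an embedding layer that sums each item embedding with the embedding of the query that retrieved it, followed by a sequence encoder. The GRU encoder, dimensions, and scoring rule below are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class QueryAwareEmbedding(nn.Module):
    """Toy query-aware embedding layer: each position in the interaction
    sequence is the sum of an item embedding and its issuing query embedding,
    so downstream layers see content and intent together."""
    def __init__(self, n_items, n_queries, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        self.query_emb = nn.Embedding(n_queries, dim)

    def forward(self, item_ids, query_ids):
        return self.item_emb(item_ids) + self.query_emb(query_ids)

class SequentialSearchModel(nn.Module):
    def __init__(self, n_items, n_queries, dim=64):
        super().__init__()
        self.embed = QueryAwareEmbedding(n_items, n_queries, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, item_ids, query_ids):
        seq = self.embed(item_ids, query_ids)           # (batch, seq, dim)
        _, state = self.encoder(seq)                    # summary of the session
        scores = state.squeeze(0) @ self.embed.item_emb.weight.T
        return scores                                   # score every candidate item

model = SequentialSearchModel(n_items=1000, n_queries=500)
scores = model(torch.randint(0, 1000, (2, 8)), torch.randint(0, 500, (2, 8)))
```
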
  • Publication number: 20240328851
    Abstract: An integrated circuit (IC) device includes a controller circuitry having an input connected to a photodiode of an optoelectronic circuitry and an output connected to a biasing circuitry, the biasing circuitry having an input connected to the output of the controller circuitry, the controller circuitry configured to transmit a transimpedance control signal code to the biasing circuitry configured to cause the biasing circuitry to offset a DC current component of the output of the photodiode.
    Type: Application
    Filed: March 30, 2023
    Publication date: October 3, 2024
    Inventors: Zhaowen WANG, Mayank RAJ
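
A schematic sketch of the DC-offset cancellation loop suggested by publication 20240328851 above: step a biasing code until the average photodiode current seen by the receiver is closest to zero. Both callables are hypothetical hardware accessors, and the linear code sweep is an assumption.

```python
def cancel_dc_offset(read_avg_current, set_bias_code, codes=range(256), tol=1e-6):
    """Step the biasing DAC until the average (DC) photodiode current seen by
    the receiver is closest to zero; return the selected code."""
    best_code, best_err = None, float("inf")
    for code in codes:
        set_bias_code(code)              # hypothetical bias-DAC driver
        err = abs(read_avg_current())    # hypothetical DC current readback
        if err < best_err:
            best_code, best_err = code, err
        if best_err < tol:
            break                        # close enough to zero DC component
    set_bias_code(best_code)
    return best_code
```
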
  • Patent number: 12105767
    Abstract: Digital content layout encoding techniques for search are described. In these techniques, a layout representation is generated (using machine learning automatically and without user intervention) that describes a layout of elements included within the digital content. In an implementation, the layout representation includes a description of both spatial and structural aspects of the elements in relation to each other. To do so, a two-pathway pipeline is configured to model layout from both spatial and structural aspects using a spatial pathway and a structural pathway, respectively. In one example, this is also performed through use of multi-level encoding and fusion to generate a layout representation.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: October 1, 2024
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Yue Bai, John Philip Collomosse
  • Patent number: 12104949
    Abstract: An integrated circuit (IC) device includes a controller circuitry having an input connected to a photodiode of an optoelectronic circuitry and an output connected to a biasing circuitry, the biasing circuitry having an input connected to the output of the controller circuitry, the controller circuitry configured to transmit a transimpedance control signal code to the biasing circuitry configured to cause the biasing circuitry to offset a DC current component of the output of the photodiode.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: October 1, 2024
    Assignee: XILINX, INC.
    Inventors: Zhaowen Wang, Mayank Raj
  • Patent number: 12100076
    Abstract: Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and output in the electronic document at the computing device.
    Type: Grant
    Filed: June 13, 2023
    Date of Patent: September 24, 2024
    Assignee: Adobe Inc.
    Inventors: Nirmal Kumawat, Zhaowen Wang
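
The descriptor-matching step of patent 12100076 above can be illustrated with a small numpy sketch that picks the installed font whose descriptor vector is nearest to the source font's descriptor. The descriptor contents and distance metric are assumptions; the subsequent glyph-outline modification is not shown.

```python
import numpy as np

def pick_closest_local_font(source_descriptor, local_fonts):
    """Choose the installed font whose descriptor (e.g. weight, width, slant,
    contrast encoded as a vector) is nearest to the source font's descriptor."""
    names = list(local_fonts)
    descs = np.stack([local_fonts[n] for n in names])
    dists = np.linalg.norm(descs - source_descriptor, axis=1)
    return names[int(dists.argmin())]

source = np.array([0.7, 0.4, 0.0, 0.6])              # hypothetical source descriptor
local = {"Sans A": np.array([0.6, 0.5, 0.0, 0.5]),   # hypothetical local descriptors
         "Serif B": np.array([0.3, 0.4, 0.1, 0.9])}
print(pick_closest_local_font(source, local))         # -> "Sans A"
```
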
  • Publication number: 20240303870
    Abstract: Systems and methods for generating representations for vector graphics are described. Embodiments are configured to obtain semantic information and geometric information for a vector graphics image. The semantic information describes individual segments of the vector graphics image, and the geometric information describes geometric relationships among the individual segments. Embodiments are additionally configured to encode the semantic information and the geometric information to obtain a vector graphics representation for the vector graphics image, and to provide a reconstructed image based on the vector graphics representation.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Defu Cao, Zhaowen Wang, Jose Ignacio Echevarria Vallespi
  • Patent number: 12072239
    Abstract: An integrated circuit (IC) device includes a controller circuitry having an input coupled to a photodiode of an optoelectronic circuitry and an output coupled to a heater of the optoelectronic circuitry, the controller circuitry configured to determine a center frequency of the optoelectronic circuitry based on a shape of an input signal received from the photodiode, and provide a heater signal to the heater based on the shape of the input signal and the center frequency of the optoelectronic circuitry.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: August 27, 2024
    Assignee: XILINX, INC.
    Inventors: Zhaowen Wang, Mayank Raj, Chuan Xie, Sandeep Kumar, Muqseed Mohammad, Sukruth Pattanagiri Giriyappa, Stanley Y. Chen, Parag Upadhyaya, Yohan Frans
  • Patent number: 12056849
    Abstract: Embodiments are disclosed for translating an image from a source visual domain to a target visual domain. In particular, in one or more embodiments, the disclosed systems and methods comprise a training process that includes receiving a training input including a pair of keyframes and an unpaired image. The pair of keyframes represent a visual translation from a first version of an image in a source visual domain to a second version of the image in a target visual domain. The one or more embodiments further include sending the pair of keyframes and the unpaired image to an image translation network to generate a first training image and a second training image. The one or more embodiments further include training the image translation network to translate images from the source visual domain to the target visual domain based on a calculated loss using the first and second training images.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: August 6, 2024
    Assignees: Adobe Inc., CZECH TECHNICAL UNIVERSITY IN PRAGUE
    Inventors: Michal Lukác, Daniel Sýkora, David Futschik, Zhaowen Wang, Elya Shechtman
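
A rough sketch of the training idea in patent 12056849 above: combine a supervised loss on the keyframe pair with an auxiliary loss on the unpaired image. The specific losses and weighting below are assumptions, and `translator` stands in for any image-to-image network.

```python
import torch
import torch.nn.functional as F

def training_step(translator, src_key, tgt_key, unpaired, optimizer):
    """One illustrative step: a supervised loss on the styled keyframe pair
    plus a simple self-consistency loss on an unpaired source-domain image."""
    optimizer.zero_grad()
    pred_key = translator(src_key)                        # translate the keyframe
    supervised = F.l1_loss(pred_key, tgt_key)             # match the target-domain keyframe
    pred_unpaired = translator(unpaired)                  # translate an unpaired image
    consistency = F.l1_loss(translator(pred_unpaired), pred_unpaired)
    loss = supervised + 0.1 * consistency                 # weighting is an assumption
    loss.backward()
    optimizer.step()
    return loss.item()
```
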
  • Patent number: 12019671
    Abstract: Digital content search techniques are described. In one example, the techniques are incorporated as part of a multi-head self-attention module of a transformer using machine learning. A localized self-attention module, for instance, is incorporated as part of the multi-head self-attention module that applies local constraints to the sequence. This is performable in a variety of ways. In a first instance, a model-based local encoder is used, examples of which include a fixed-depth recurrent neural network (RNN) and a convolutional network. In a second instance, a masking-based local encoder is used, examples of which include use of a fixed window, Gaussian initialization, and an adaptive predictor.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zhankui He, Zhaowen Wang, Ajinkya Gorakhnath Kale, Zhe Lin
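
The masking-based local encoder with a fixed window mentioned in patent 12019671 above corresponds to banding the attention matrix, as in the sketch below. The window size and scaling are illustrative.

```python
import torch

def local_attention(q, k, v, window=2):
    """Self-attention restricted to a fixed local window: positions farther
    apart than `window` are masked out before the softmax (a masking-based
    local encoder in the sense described above)."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    n = scores.shape[-1]
    idx = torch.arange(n)
    mask = (idx[None, :] - idx[:, None]).abs() > window   # band mask
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

x = torch.randn(10, 32)                  # sequence of 10 token embeddings
out = local_attention(x, x, x, window=2)
```
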
  • Patent number: 11977829
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating scalable and semantically editable font representations utilizing a machine learning approach. For example, the disclosed systems generate a font representation code from a glyph utilizing a particular neural network architecture. For example, the disclosed systems utilize a glyph appearance propagation model and perform an iterative process to generate a font representation code from an initial glyph. Additionally, using a glyph appearance propagation model, the disclosed systems automatically propagate the appearance of the initial glyph from the font representation code to generate additional glyphs corresponding to respective glyph labels. In some embodiments, the disclosed systems propagate edits or other changes in appearance of a glyph to other glyphs within a glyph set (e.g., to match the appearance of the edited glyph).
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Zhifei Zhang, Zhaowen Wang, Hailin Jin, Matthew Fisher
  • Patent number: 11886793
    Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (combination of color, font, and size) for the best designs. The top designs may be output to the user.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Saeid Motiian, Baldo Faieta, Zegi Gu, Peter Evan O'Donovan, Alex Filipkowski, Jose Ignacio Echevarria Vallespi
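
A minimal sketch of how the per-attribute recommendations in patent 11886793 above could be combined and scored as whole drafts. The exhaustive enumeration over candidates and the `score_design` callable are assumptions standing in for the learned recommendation and scoring models.

```python
from itertools import product

def top_designs(locations, fonts, sizes, colors, score_design, k=3):
    """Combine the per-attribute recommendations into full draft designs,
    score each draft as a whole, and keep the k best."""
    drafts = [dict(location=l, font=f, size=s, color=c)
              for l, f, s, c in product(locations, fonts, sizes, colors)]
    drafts.sort(key=score_design, reverse=True)   # score_design: draft -> quality
    return drafts[:k]
```
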
  • Patent number: 11875435
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately and flexibly generating scalable fonts utilizing multi-implicit neural font representations. For instance, the disclosed systems combine deep learning with differentiable rasterization to generate a multi-implicit neural font representation of a glyph. For example, the disclosed systems utilize an implicit differentiable font neural network to determine a font style code for an input glyph as well as distance values for locations of the glyph to be rendered based on a glyph label and the font style code. Further, the disclosed systems rasterize the distance values utilizing a differentiable rasterization model and combine the rasterized distance values to generate a permutation-invariant version of the glyph's corresponding glyph set.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Chinthala Pradyumna Reddy, Zhifei Zhang, Matthew Fisher, Hailin Jin, Zhaowen Wang, Niloy J Mitra
  • Publication number: 20230386208
    Abstract: Systems and methods for video segmentation and summarization are described. Embodiments of the present disclosure receive a video and a transcript of the video; generate visual features representing frames of the video using an image encoder; generate language features representing the transcript using a text encoder, wherein the image encoder and the text encoder are trained based on a correlation between training visual features and training language features; and segment the video into a plurality of video segments based on the visual features and the language features.
    Type: Application
    Filed: May 31, 2022
    Publication date: November 30, 2023
    Inventors: Hailin Jin, Jielin Qiu, Zhaowen Wang, Trung Huu Bui, Franck Dernoncourt
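
A toy sketch in the spirit of publication 20230386208 above: cut the video wherever consecutive (visual-language aligned) frame features become dissimilar. The real model learns segment boundaries; the threshold rule here is purely illustrative.

```python
import torch

def segment_video(frame_feats, threshold=0.6):
    """Place a segment boundary wherever the cosine similarity between
    consecutive frame features drops below a threshold."""
    sims = torch.nn.functional.cosine_similarity(frame_feats[:-1], frame_feats[1:])
    boundaries = (sims < threshold).nonzero().flatten() + 1
    return boundaries.tolist()            # frame indices where new segments start

feats = torch.randn(50, 128)              # stand-in for aligned visual features
print(segment_video(feats))
```
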
  • Patent number: 11823059
    Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: November 21, 2023
    Assignees: Adobe Inc., The Regents of the University of California
    Inventors: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
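
The triplet training in patent 11823059 above can be sketched as a shared (Siamese) image encoder scored against a user embedding with a BPR-style loss over implicit-feedback triplets. The encoder, dimensions, and loss form below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualPreferenceModel(nn.Module):
    """Toy visually-aware ranking model: item images are embedded by a shared
    (Siamese) encoder and scored against a user embedding; training uses a
    BPR-style triplet loss on implicit-feedback (user, positive, negative) triplets."""
    def __init__(self, n_users, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))

    def score(self, user_ids, item_images):
        return (self.user_emb(user_ids) * self.item_encoder(item_images)).sum(-1)

    def bpr_loss(self, user_ids, pos_images, neg_images):
        pos = self.score(user_ids, pos_images)   # preferred (interacted) items
        neg = self.score(user_ids, neg_images)   # sampled non-interacted items
        return -F.logsigmoid(pos - neg).mean()

model = VisualPreferenceModel(n_users=100)
loss = model.bpr_loss(torch.randint(0, 100, (8,)),
                      torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
```
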