Patents by Inventor Lijuan Wang

Lijuan Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8895509
    Abstract: The present invention provides a method of treating an ovarian cancer, the method comprising delivering one or more miR-200 family members to a mammalian subject in need thereof in an amount effective to treat the ovarian cancer. Also provided are methods of preventing metastasis of an ovarian cancer, the method comprising delivering one or more miR-200 family members to a mammalian subject in need thereof in an amount effective to prevent metastasis. Further provided are methods of sensitizing an ovarian cancer to a cytotoxic therapy, the method comprising delivering one or more miR-200 family members to a mammalian subject in need thereof in an amount effective to sensitize the ovarian cancer to the cytotoxic therapy. The invention also contemplates methods of reducing epithelial-to-mesenchymal transition (EMT) in an ovarian cancer or cancer cell as well as methods of inducing mesenchymal-to-epithelial transition (MET).
    Type: Grant
    Filed: November 23, 2011
    Date of Patent: November 25, 2014
    Assignee: Georgia Tech Research Corporation
    Inventors: John McDonald, Nathan John Bowen, LiJuan Wang
  • Patent number: 8751228
    Abstract: Embodiments of an audio-to-video engine are disclosed. In operation, the audio-to-video engine generates facial movement (e.g., a virtual talking head) based on an input speech. The audio-to-video engine receives the input speech and recognizes the input speech as a source feature vector. The audio-to-video engine then determines a Maximum A Posteriori (MAP) mixture sequence based on the source feature vector. The MAP mixture sequence may be a function of a refined Gaussian Mixture Model (GMM). The audio-to-video engine may then use the MAP mixture sequence to estimate video feature parameters. The video feature parameters are then interpreted as facial movement. The facial movement may be stored as data to a storage module and/or it may be displayed as video to a display device.
    Type: Grant
    Filed: November 4, 2010
    Date of Patent: June 10, 2014
    Assignee: Microsoft Corporation
    Inventors: Lijuan Wang, Frank Kao-Ping Soong
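The GMM-based mapping described in the abstract above can be sketched as follows. This is a minimal illustration, not the patented engine: the joint model over concatenated audio/video features is filled with random toy parameters, and the "refined GMM" training step is omitted. Per frame, the Maximum A Posteriori mixture component is picked from the audio part alone, and that component's conditional mean gives the video feature estimate.

```python
import numpy as np

# Toy joint GMM over concatenated [audio, video] features.
# All parameters here are illustrative, not a trained model.
rng = np.random.default_rng(0)
K, DA, DV = 3, 2, 2                      # mixtures, audio dims, video dims
weights = np.full(K, 1.0 / K)
means = rng.normal(size=(K, DA + DV))
covs = np.array([np.eye(DA + DV) for _ in range(K)])

def map_mixture(audio_frame):
    """Pick the Maximum A Posteriori mixture component given the audio part."""
    logpost = []
    for k in range(K):
        mu, cov = means[k][:DA], covs[k][:DA, :DA]
        diff = audio_frame - mu
        ll = -0.5 * diff @ np.linalg.solve(cov, diff) \
             - 0.5 * np.log(np.linalg.det(cov))
        logpost.append(np.log(weights[k]) + ll)
    return int(np.argmax(logpost))

def audio_to_video(audio_frames):
    """Estimate video feature parameters from the MAP mixture sequence."""
    out = []
    for x in audio_frames:
        k = map_mixture(x)
        mu_a, mu_v = means[k][:DA], means[k][DA:]
        cov_va = covs[k][DA:, :DA]
        cov_aa = covs[k][:DA, :DA]
        # Conditional mean E[video | audio, component k]
        out.append(mu_v + cov_va @ np.linalg.solve(cov_aa, x - mu_a))
    return np.array(out)

frames = rng.normal(size=(5, DA))
video = audio_to_video(frames)
print(video.shape)   # one video feature vector per audio frame
```

A production system would train the GMM on parallel audio/video data and smooth the resulting trajectory; both steps are skipped here.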
  • Publication number: 20140025381
    Abstract: Instead of relying on humans to subjectively evaluate speech intelligibility of a subject, a system objectively evaluates the speech intelligibility. The system receives speech input and calculates confidence scores at multiple different levels using a Template Constrained Generalized Posterior Probability algorithm. One or more intelligibility classifiers are used to classify the desired entities on an intelligibility scale. A specific intelligibility classifier uses features such as the various confidence scores. The scale of the intelligibility classification can be adjusted to suit the application scenario. Based on the confidence score distributions and the intelligibility classification results at multiple levels, an overall objective intelligibility score is calculated. The objective intelligibility scores can be used to rank different subjects or systems being assessed according to their intelligibility levels. The speech that is below a predetermined intelligibility (e.g.
    Type: Application
    Filed: July 20, 2012
    Publication date: January 23, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Linfang Wang, Yan Teng, Lijuan Wang, Frank Kao-Ping Soong, Zhe Geng, William Brad Waller, Mark Tillman Hanson
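The classification step described above can be reduced to a very small sketch: pool confidence scores from several levels and map the result onto an adjustable scale. The pooling rule, the thresholds, and the 3-point scale below are all assumptions for illustration, not the patent's trained classifier.

```python
# Illustrative only: map multi-level confidence scores (e.g., phone-, word-,
# and sentence-level posteriors in [0, 1]) onto a coarse intelligibility scale.
def classify_intelligibility(confidences, thresholds=(0.4, 0.7)):
    score = sum(confidences) / len(confidences)   # pooled confidence
    lo, hi = thresholds
    if score < lo:
        return "unintelligible"
    if score < hi:
        return "partially intelligible"
    return "intelligible"

print(classify_intelligibility([0.9, 0.85, 0.8]))   # intelligible
```

Adjusting `thresholds` (or adding more of them) changes the granularity of the scale, mirroring the abstract's point that the classification scale can be tuned per application.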
  • Patent number: 8583438
    Abstract: Described is a technology by which synthesized speech generated from text is evaluated against a prosody model (trained offline) to determine whether the speech will sound unnatural. If so, the speech is regenerated with modified data. The evaluation and regeneration may be iterative until deemed natural sounding. For example, text is built into a lattice that is then (e.g., Viterbi) searched to find a best path. The sections (e.g., units) of data on the path are evaluated via a prosody model. If the evaluation deems a section to correspond to unnatural prosody, that section is replaced, e.g., by modifying/pruning the lattice and re-performing the search. Replacement may be iterative until all sections pass the evaluation. Unnatural prosody detection may be biased such that during evaluation, unnatural prosody is falsely detected at a higher rate relative to a rate at which unnatural prosody is missed.
    Type: Grant
    Filed: September 20, 2007
    Date of Patent: November 12, 2013
    Assignee: Microsoft Corporation
    Inventors: Yong Zhao, Frank Kao-ping Soong, Min Chu, Lijuan Wang
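The evaluate/prune/re-search loop from the abstract above reads roughly as below. This sketch substitutes a greedy per-slot minimum for the real (e.g., Viterbi) lattice search and mocks the prosody model as a per-unit boolean; all unit names and costs are invented.

```python
# A minimal sketch of the evaluate/prune/re-search loop, assuming a lattice
# stored as a list of slots, each holding candidate units with join costs
# folded into a single per-unit cost.
def best_path(lattice):
    """Greedy stand-in for the lattice search: cheapest unit per slot."""
    return [min(slot, key=lambda u: u["cost"]) for slot in lattice]

def synthesize(lattice, natural, max_iters=10):
    for _ in range(max_iters):
        path = best_path(lattice)
        bad = [i for i, unit in enumerate(path) if not natural(unit)]
        if not bad:
            return path                       # all sections pass
        for i in bad:                         # prune offending units, re-search
            lattice[i] = [u for u in lattice[i] if u is not path[i]]
    return path

lattice = [
    [{"id": "a1", "cost": 1.0, "ok": False}, {"id": "a2", "cost": 2.0, "ok": True}],
    [{"id": "b1", "cost": 1.5, "ok": True}],
]
path = synthesize(lattice, natural=lambda u: u["ok"])
print([u["id"] for u in path])   # a1 fails the prosody check, so a2 is chosen
```

The biasing idea in the abstract corresponds to making `natural` deliberately strict, accepting more false rejections in exchange for fewer missed unnatural sections.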
  • Publication number: 20130243876
    Abstract: The present invention provides a method of treating an ovarian cancer, the method comprising delivering one or more miR-200 family members to a mammalian subject in need thereof in an amount effective to treat the ovarian cancer. Also provided are methods of preventing metastasis of an ovarian cancer, the method comprising delivering one or more miR-200 family members to a mammalian subject in need thereof in an amount effective to prevent metastasis. Further provided are methods of sensitizing an ovarian cancer to a cytotoxic therapy, the method comprising delivering one or more miR-200 family members to a mammalian subject in need thereof in an amount effective to sensitize the ovarian cancer to the cytotoxic therapy. The invention also contemplates methods of reducing epithelial-to-mesenchymal transition (EMT) in an ovarian cancer or cancer cell as well as methods of inducing mesenchymal-to-epithelial transition (MET).
    Type: Application
    Filed: November 23, 2011
    Publication date: September 19, 2013
    Applicant: Georgia Tech Research Corporation
    Inventors: John McDonald, Nathan John Bowen, LiJuan Wang
  • Publication number: 20120280974
    Abstract: Dynamic texture mapping is used to create a photorealistic three dimensional animation of an individual with facial features synchronized with desired speech. Audiovisual data of an individual reading a known script is obtained and stored in an audio library and an image library. The audiovisual data is processed to extract feature vectors used to train a statistical model. An input audio feature vector corresponding to desired speech with which the animation will be synchronized is provided. The statistical model is used to generate a trajectory of visual feature vectors that corresponds to the input audio feature vector. These visual feature vectors are used to identify a matching image sequence from the image library. The resulting sequence of images, concatenated from the image library, provides a photorealistic image sequence with facial features, such as lip movements, synchronized with the desired speech. This image sequence is applied to the three-dimensional model.
    Type: Application
    Filed: May 3, 2011
    Publication date: November 8, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Lijuan Wang, Frank Soong, Qiang Huo, Zhengyou Zhang
  • Publication number: 20120284029
    Abstract: Audiovisual data of an individual reading a known script is obtained and stored in an audio library and an image library. The audiovisual data is processed to extract feature vectors used to train a statistical model. An input audio feature vector corresponding to desired speech with which a synthesized image sequence will be synchronized is provided. The statistical model is used to generate a trajectory of visual feature vectors that corresponds to the input audio feature vector. These visual feature vectors are used to identify a matching image sequence from the image library. The resulting sequence of images, concatenated from the image library, provides a photorealistic image sequence with lip movements synchronized with the desired speech.
    Type: Application
    Filed: May 2, 2011
    Publication date: November 8, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Lijuan Wang, Frank Soong
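The final matching step in the two entries above, turning a predicted visual feature trajectory into a concrete image sequence, amounts to a nearest-neighbour lookup in the image library's feature space. The sketch below uses synthetic features and ignores the continuity constraints a real system would impose when concatenating library frames.

```python
import numpy as np

# Each predicted visual feature vector selects the closest stored frame
# (nearest neighbour in feature space). Library vectors and the trajectory
# are synthetic here, purely for illustration.
rng = np.random.default_rng(1)
library_feats = rng.normal(size=(100, 8))     # one feature vector per stored frame

def match_sequence(trajectory):
    """Return library frame indices closest to each trajectory point."""
    dists = np.linalg.norm(
        trajectory[:, None, :] - library_feats[None, :, :], axis=-1)
    return dists.argmin(axis=1)

trajectory = rng.normal(size=(12, 8))         # 12 frames of predicted features
frames = match_sequence(trajectory)
print(frames.shape)                           # one library index per frame
```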
  • Publication number: 20120276504
    Abstract: A representation of a virtual language teacher assists in language learning. The virtual language teacher may appear as a “talking head” in a video that a student views to practice pronunciation of a foreign language. A system for generating a virtual language teacher receives input text. The system may generate a video showing the virtual language teacher as a talking head having a mouth that moves in synchronization with speech generated from the input text. The video of the virtual language teacher may then be presented to the student.
    Type: Application
    Filed: April 29, 2011
    Publication date: November 1, 2012
    Applicant: Microsoft Corporation
    Inventors: Gang Chen, Weijiang Xu, Lijuan Wang, Matthew Robert Scott, Frank Kao-Ping Soong, Hao Wei
  • Patent number: 8224652
    Abstract: An “Animation Synthesizer” uses trainable probabilistic models, such as Hidden Markov Models (HMM), Artificial Neural Networks (ANN), etc., to provide speech and text driven body animation synthesis. Probabilistic models are trained using synchronized motion and speech inputs (e.g., live or recorded audio/video feeds) at various speech levels, such as sentences, phrases, words, phonemes, sub-phonemes, etc., depending upon the available data, and the motion type or body part being modeled. The Animation Synthesizer then uses the trainable probabilistic model for selecting animation trajectories for one or more different body parts (e.g., face, head, hands, arms, etc.) based on an arbitrary text and/or speech input. These animation trajectories are then used to synthesize a sequence of animations for digital avatars, cartoon characters, computer generated anthropomorphic persons or creatures, actual motions for physical robots, etc.
    Type: Grant
    Filed: September 26, 2008
    Date of Patent: July 17, 2012
    Assignee: Microsoft Corporation
    Inventors: Lijuan Wang, Lei Ma, Frank Kao-Ping Soong
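The selection step described above can be caricatured as a lookup from speech units to motion trajectories. Real systems use trained HMMs or ANNs and smooth the joins between trajectories; the word-level motion table below is invented solely to show the shape of the pipeline.

```python
# Minimal sketch of text-driven motion selection: each word maps to a
# "model" (here just a stored mean trajectory), and the synthesizer
# concatenates the per-word trajectories.
motion_models = {
    "hello": [(0.0, 0.1), (0.2, 0.3)],        # (head_pitch, hand_height) frames
    "world": [(0.1, 0.0), (0.0, 0.0)],
}
REST = [(0.0, 0.0)]                           # fallback pose for unseen words

def synthesize_animation(text):
    trajectory = []
    for word in text.lower().split():
        trajectory.extend(motion_models.get(word, REST))
    return trajectory

traj = synthesize_animation("Hello world")
print(len(traj))                              # total animation frames
```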
  • Publication number: 20120130717
    Abstract: Techniques for providing real-time animation for a personalized cartoon avatar are described. In one example, a process trains one or more animated models to provide a set of probabilistic motions of one or more upper body parts based on speech and motion data. The process links one or more predetermined phrases that represent emotional states to the one or more animated models. After creation of the models, the process receives real-time speech input. Next, the process identifies an emotional state to be expressed based on the one or more predetermined phrases matching in context to the real-time speech input. The process then generates an animated sequence of motions of the one or more upper body parts by applying the one or more animated models in response to the real-time speech input.
    Type: Application
    Filed: November 19, 2010
    Publication date: May 24, 2012
    Applicant: Microsoft Corporation
    Inventors: Ning Xu, Lijuan Wang, Frank Kao-Ping Soong, Xiao Liang, Qi Luo, Ying-Qing Xu, Xin Zou
  • Publication number: 20120116761
    Abstract: Embodiments of an audio-to-video engine are disclosed. In operation, the audio-to-video engine generates facial movement (e.g., a virtual talking head) based on an input speech. The audio-to-video engine receives the input speech and recognizes the input speech as a source feature vector. The audio-to-video engine then determines a Maximum A Posteriori (MAP) mixture sequence based on the source feature vector. The MAP mixture sequence may be a function of a refined Gaussian Mixture Model (GMM). The audio-to-video engine may then use the MAP mixture sequence to estimate video feature parameters. The video feature parameters are then interpreted as facial movement. The facial movement may be stored as data to a storage module and/or it may be displayed as video to a display device.
    Type: Application
    Filed: November 4, 2010
    Publication date: May 10, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Lijuan Wang, Frank Kao-Ping Soong
  • Patent number: 7700658
    Abstract: The invention relates to substituted aryl and heteroaryl (R)-Chiral Halogenated 1-Substitutedamino-(n+1)-Alkanol compounds useful as inhibitors of cholesteryl ester transfer protein (CETP; plasma lipid transfer protein-I) and compounds, compositions and methods for treating atherosclerosis and other coronary artery diseases. Novel high yield, stereoselective processes for the preparation of the chiral substituted alkanol compounds from chiral and achiral intermediates are described.
    Type: Grant
    Filed: January 29, 2008
    Date of Patent: April 20, 2010
    Assignee: Pfizer Inc.
    Inventors: James A. Sikorski, Richard C. Durley, Margaret L. Grapperhaus, Mark A. Massa, Emily J. Reinhard, Yvette M. Fobian, Michael B. Tollefson, Lijuan Wang, Brian S. Hickory, Monica B. Norton, William F. Vernier, Deborah A. Mischke, Michele A. Promo, Ashton T. Hamme, Dale P. Spangler, Melvin L. Rueppel
  • Publication number: 20100082345
    Abstract: An “Animation Synthesizer” uses trainable probabilistic models, such as Hidden Markov Models (HMM), Artificial Neural Networks (ANN), etc., to provide speech and text driven body animation synthesis. Probabilistic models are trained using synchronized motion and speech inputs (e.g., live or recorded audio/video feeds) at various speech levels, such as sentences, phrases, words, phonemes, sub-phonemes, etc., depending upon the available data, and the motion type or body part being modeled. The Animation Synthesizer then uses the trainable probabilistic model for selecting animation trajectories for one or more different body parts (e.g., face, head, hands, arms, etc.) based on an arbitrary text and/or speech input. These animation trajectories are then used to synthesize a sequence of animations for digital avatars, cartoon characters, computer generated anthropomorphic persons or creatures, actual motions for physical robots, etc.
    Type: Application
    Filed: September 26, 2008
    Publication date: April 1, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Lijuan Wang, Lei Ma, Frank Kao-Ping Soong
  • Publication number: 20090228273
    Abstract: A speech recognition result is displayed for review by a user. If it is incorrect, the user provides pen-based editing marks. An error type and location (within the speech recognition result) are identified based on the pen-based editing marks. An alternative result template is generated, and an N-best alternative list is also generated by applying the template to intermediate recognition results from an automatic speech recognizer. The N-best alternative list is output for use in correcting the speech recognition results.
    Type: Application
    Filed: March 5, 2008
    Publication date: September 10, 2009
    Applicant: MICROSOFT CORPORATION
    Inventors: Lijuan Wang, Frank Kao-Ping Soong
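The correction flow above can be sketched as follows: the marked error span becomes a template (context kept, error wildcarded), and the template filters intermediate recognizer hypotheses into an N-best alternative list. All function names, hypotheses, and scores here are invented for illustration.

```python
# Illustrative sketch: build an alternative-result template from the error
# span the user marked, then keep only recognizer hypotheses whose unedited
# context matches the template.
def make_template(words, err_start, err_end):
    """Keep context words; wildcard the span marked as wrong."""
    return words[:err_start], words[err_end:]

def alternatives(template, hypotheses, n=5):
    prefix, suffix = template
    out = []
    for hyp in hypotheses:
        if hyp[:len(prefix)] == prefix and (not suffix or hyp[-len(suffix):] == suffix):
            out.append(hyp)
    return out[:n]

result = ["recognize", "speech", "today"]
template = make_template(result, 1, 2)        # user marked "speech" as wrong
hyps = [
    ["recognize", "beach", "today"],
    ["wreck", "a", "nice", "beach"],
    ["recognize", "a", "speech", "today"],
]
for alt in alternatives(template, hyps):
    print(" ".join(alt))
```

Note the matching span may differ in length from the marked span, which is why only the surrounding context is constrained.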
  • Publication number: 20090156685
    Abstract: The invention relates to substituted aryl and heteroaryl (R)-Chiral Halogenated 1-Substitutedamino-(n+1)-Alkanol compounds useful as inhibitors of cholesteryl ester transfer protein (CETP; plasma lipid transfer protein-I) and compounds, compositions and methods for treating atherosclerosis and other coronary artery diseases. Novel high yield, stereoselective processes for the preparation of the chiral substituted alkanol compounds from chiral and achiral intermediates are described.
    Type: Application
    Filed: January 29, 2008
    Publication date: June 18, 2009
    Inventors: James A. Sikorski, Richard C. Durley, Margaret L. Grapperhaus, Mark A. Massa, Emily J. Reinhard, Yvette M. Fobian, Michael B. Tollefson, Lijuan Wang, Brian S. Hickory, Monica B. Norton, William F. Vernier, Deborah A. Mischke, Michele A. Promo, Ashton T. Hamme, Dale P. Spangler, Melvin L. Rueppel
  • Publication number: 20090099847
    Abstract: Detailed herein is a technology which, among other things, reduces errors introduced in recording and transcription data. In one approach to this technology, a method of detecting audio transcription errors is utilized. This method includes selecting a focus unit and selecting a context template corresponding to the focus unit. A hypothesis set is then determined with reference to the context template and the focus unit. A probability corresponding to the focus unit is then calculated across the hypothesis set.
    Type: Application
    Filed: October 10, 2007
    Publication date: April 16, 2009
    Applicant: Microsoft Corporation
    Inventors: Frank Soong, Lijuan Wang
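The probability computation at the end of the abstract above can be rendered as a toy posterior: sum the (exponentiated) scores of hypotheses that agree with the focus unit and normalize by the total. The hypotheses and log scores below are invented.

```python
import math

# Toy version of the described steps: given a hypothesis set (already
# constrained by the context template), compute the focus unit's posterior
# across the hypotheses from per-hypothesis log scores.
def focus_posterior(hypotheses, focus_index, focus_word):
    """P(focus word at focus_index | hypothesis set)."""
    total = sum(math.exp(s) for _, s in hypotheses)
    match = sum(math.exp(s) for words, s in hypotheses
                if words[focus_index] == focus_word)
    return match / total

hyps = [
    (["the", "cat", "sat"], -1.0),
    (["the", "bat", "sat"], -2.0),
    (["the", "cat", "sang"], -1.5),
]
p = focus_posterior(hyps, 1, "cat")
print(round(p, 3))   # posterior of "cat" at position 1
```

A low posterior for the transcribed focus unit flags a likely transcription error.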
  • Publication number: 20090083036
    Abstract: Described is a technology by which synthesized speech generated from text is evaluated against a prosody model (trained offline) to determine whether the speech will sound unnatural. If so, the speech is regenerated with modified data. The evaluation and regeneration may be iterative until deemed natural sounding. For example, text is built into a lattice that is then (e.g., Viterbi) searched to find a best path. The sections (e.g., units) of data on the path are evaluated via a prosody model. If the evaluation deems a section to correspond to unnatural prosody, that section is replaced, e.g., by modifying/pruning the lattice and re-performing the search. Replacement may be iterative until all sections pass the evaluation. Unnatural prosody detection may be biased such that during evaluation, unnatural prosody is falsely detected at a higher rate relative to a rate at which unnatural prosody is missed.
    Type: Application
    Filed: September 20, 2007
    Publication date: March 26, 2009
    Applicant: Microsoft Corporation
    Inventors: Yong Zhao, Frank Kao-ping Soong, Min Chu, Lijuan Wang
  • Patent number: 7496512
    Abstract: A method and apparatus are provided for refining segmental boundaries in speech waveforms. Contextual acoustic feature similarities are used as a basis for clustering adjacent phoneme speech units, where each adjacent pair of phoneme speech units includes a segmental boundary. A refining model is trained for each cluster and used to refine the boundaries of the contextual phoneme speech units forming the clusters.
    Type: Grant
    Filed: April 13, 2004
    Date of Patent: February 24, 2009
    Assignee: Microsoft Corporation
    Inventors: Yong Zhao, Min Chu, Jian-lai Zhou, Lijuan Wang
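The clustering step above can be sketched with a plain k-means over per-boundary acoustic feature vectors, so that one refining model can later be trained per cluster. The data, feature dimensionality, and cluster count below are illustrative, and a hand-rolled k-means stands in for whatever clustering the patent actually uses.

```python
import numpy as np

# Toy version of the clustering step: each adjacent phoneme pair is
# represented by an acoustic feature vector around its boundary, and
# similar contexts are grouped with k-means.
rng = np.random.default_rng(2)

def kmeans(X, k, iters=20):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

boundary_feats = rng.normal(size=(60, 6))     # one vector per phoneme-pair boundary
labels, centers = kmeans(boundary_feats, k=4)
print(sorted(set(labels.tolist())))           # cluster ids for refinement models
```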
  • Publication number: 20070219274
    Abstract: The invention relates to substituted aryl and heteroaryl (R)-Chiral Halogenated 1-Substitutedamino-(n+1)-Alkanol compounds useful as inhibitors of cholesteryl ester transfer protein (CETP; plasma lipid transfer protein-I) and compounds, compositions and methods for treating atherosclerosis and other coronary artery diseases. Novel high yield, stereoselective processes for the preparation of the chiral substituted alkanol compounds from chiral and achiral intermediates are described.
    Type: Application
    Filed: March 29, 2007
    Publication date: September 20, 2007
    Inventors: James Sikorski, Richard Durley, Margaret Grapperhaus, Mark Massa, Emily Reinhard, Yvette Fobian, Michael Tollefson, Lijuan Wang, Brian Hickory, Monica Norton, William Vernier, Deborah Mischke, Michelle Promo, Ashton Hamme, Dale Spangler, Melvin Rueppel
  • Patent number: 7259266
    Abstract: The subject invention concerns methods and compounds that have utility in the treatment of a condition associated with cyclooxygenase-2 mediated disorders. Compounds of particular interest are benzopyrans and their analogs defined by Formula (I), wherein Z, X, R1, R2, R3, and R4 are as described in the specification.
    Type: Grant
    Filed: March 16, 2004
    Date of Patent: August 21, 2007
    Assignee: Pharmacia Corporation
    Inventors: Jeffry Carter, David Brown, Li Xing, Karl Aston, John Springer, Francis Koszyk, Steven Kramer, Renee Huff, Yi Yu, Bruce Hamper, Subo Liao, Angela Deprow, Teresa Fletcher, E. Ann Hallinan, James Kiefer, David Limburg, Lijuan Wang, Cindy Ludwig, John McCall, John Talley