Patents by Inventor James Z. Wang

James Z. Wang has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230363679
    Abstract: A system includes a mobile device for capturing raw video of a subject, a preprocessing system communicatively coupled to the mobile device for splitting the raw video into an image stream and an audio stream, an image processing system communicatively coupled to the preprocessing system for processing the image stream into a spatiotemporal facial frame sequence proposal, an audio processing system for processing the audio stream into a preprocessed audio component, one or more machine learning devices that analyze the facial frame sequence proposal and the preprocessed audio component according to a trained model to determine whether the subject is exhibiting signs of a neurological condition, and a user device for receiving data corresponding to a confirmed indication of neurological condition from the one or more machine learning devices and providing the confirmed indication of neurological condition to the subject and/or a clinician via a user interface.
    Type: Application
    Filed: September 17, 2021
    Publication date: November 16, 2023
    Applicants: THE PENN STATE RESEARCH FOUNDATION, THE METHODIST HOSPITAL
    Inventors: James Z. Wang, Mingli Yu, Tongan Cai, Xiaolei Huang, Kelvin Wong, John Volpi, Stephen T.C. Wong
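
The entry above describes a pipeline that first splits a raw video of a subject into an image stream and an audio stream before model-based analysis. Below is a minimal, hypothetical sketch of only that preprocessing step, assuming OpenCV and the ffmpeg command-line tool are available; file names and the helper function are illustrative, and the patented facial-frame proposal and neurological analysis are not reproduced.

```python
# Minimal preprocessing sketch (assumption: OpenCV and the ffmpeg CLI are installed).
# It only illustrates splitting a recording into an image stream and an audio stream.
import subprocess
import cv2


def split_video(path: str, audio_out: str = "audio.wav"):
    """Return the list of video frames and write the audio track to a WAV file."""
    frames = []
    capture = cv2.VideoCapture(path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    capture.release()

    # Extract the audio stream with ffmpeg (-vn drops video, keeping audio only).
    subprocess.run(["ffmpeg", "-y", "-i", path, "-vn", audio_out], check=True)
    return frames, audio_out


if __name__ == "__main__":
    frames, audio_path = split_video("subject_recording.mp4")  # hypothetical input file
    print(f"{len(frames)} frames extracted; audio written to {audio_path}")
```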
  • Patent number: 11636368
    Abstract: A method of improving the quality of crowdsourced affective data based on agreement relationships between a plurality of annotators includes receiving, by a processor, a collection of stimuli previously given affective labels by the plurality of annotators, executing, by a processor, an algorithm operative to perform the steps including constructing an agreement multigraph as a probabilistic model including a pair-wise status of agreement between the affective labels given by different ones of the plurality of annotators, learning the probabilistic model computationally using the crowdsourced affective data, identifying a reliability of each of the plurality of annotators based on the learned model, and adjusting the crowdsourced affective data by calculating the affective labels of each stimulus based on the identified reliability of each of the plurality of annotators, thereby improving the quality of the crowdsourced affective data.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: April 25, 2023
    Assignee: THE PENN STATE RESEARCH FOUNDATION
    Inventors: Jianbo Ye, Jia Li, James Z. Wang
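
As a rough illustration of the idea in the entry above, the sketch below estimates per-annotator reliability from pairwise label agreement and uses it to reweight labels. It is a simplified agreement-based weighting on toy data, not the probabilistic multigraph model the patent describes.

```python
# Simplified sketch: estimate annotator reliability from pairwise label agreement
# and compute reliability-weighted labels. NOT the patented multigraph model.
import numpy as np

# labels[i][j] = affective label given by annotator j to stimulus i (toy data).
labels = np.array([
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
    [0.0, 1.0, 0.0],
])
n_items, n_annotators = labels.shape

# Pairwise agreement rate between annotators (fraction of identically labeled stimuli).
agreement = np.zeros((n_annotators, n_annotators))
for a in range(n_annotators):
    for b in range(n_annotators):
        agreement[a, b] = np.mean(labels[:, a] == labels[:, b])

# A simple reliability proxy: average agreement with the other annotators.
mask = ~np.eye(n_annotators, dtype=bool)
reliability = np.array([agreement[a, mask[a]].mean() for a in range(n_annotators)])

# Reliability-weighted label per stimulus.
adjusted = labels @ reliability / reliability.sum()
print("reliability:", np.round(reliability, 3))
print("adjusted labels:", np.round(adjusted, 3))
```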
  • Patent number: 11244450
    Abstract: Systems and methods for completing a morphological characterization of an image of a placenta and providing suggested pathological diagnoses are disclosed. A system includes programming instructions that, when executed, cause processing devices to execute commands according to the following logic modules: an Encoder module that receives the digital image of the placenta and outputs a pyramid of feature maps, a SegDecoder module that segments the pyramid of feature maps on a fetal side image and on a maternal side image, a Classification Subnet module that classifies the fetal side image and the maternal side image, and a convolutional IPDecoder module that localizes an umbilical cord insertion point of the placenta from the classified fetal side image and the classified maternal side image. The localized umbilical cord insertion point and the segmentation maps for the classified fetal side and maternal side images are provided to an external device for determining the morphological characterization.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: February 8, 2022
    Assignees: The Penn State Research Foundation, Northwestern University, Sinai Health System
    Inventors: Alison Gernand, James Z. Wang, Jeffery Goldstein, William Parks, Yukun Chen, Zhuomin Zhang, Dolzodmaa Davaasuren, Chenyan Wu
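
The entry above names an Encoder, a SegDecoder, a Classification Subnet, and an IPDecoder. The PyTorch skeleton below is only a hypothetical sketch of how such modules might be composed; layer sizes and names are placeholders, the insertion-point decoder is omitted for brevity, and this is not the patented design.

```python
# Hypothetical composition of the named modules in PyTorch; sizes are placeholders.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Produces a small pyramid of feature maps at two resolutions."""
    def __init__(self):
        super().__init__()
        self.level1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.level2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.level1(x)
        f2 = self.level2(f1)
        return [f1, f2]


class SegDecoder(nn.Module):
    """Turns the finest feature map into a per-pixel segmentation map."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, pyramid):
        return self.head(pyramid[0])


class ClassificationSubnet(nn.Module):
    """Classifies the image (e.g., fetal side vs. maternal side) from pooled features."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, pyramid):
        return self.fc(self.pool(pyramid[1]).flatten(1))


if __name__ == "__main__":
    image = torch.randn(1, 3, 64, 64)          # placeholder placenta image
    pyramid = Encoder()(image)
    print(SegDecoder()(pyramid).shape, ClassificationSubnet()(pyramid).shape)
```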
  • Publication number: 20210056691
    Abstract: Systems and methods for completing a morphological characterization of an image of a placenta and providing suggested pathological diagnoses are disclosed. A system includes programming instructions that, when executed, cause processing devices to execute commands according to the following logic modules: an Encoder module that receives the digital image of the placenta and outputs a pyramid of feature maps, a SegDecoder module that segments the pyramid of feature maps on a fetal side image and on a maternal side image, a Classification Subnet module that classifies the fetal side image and the maternal side image, and a convolutional IPDecoder module that localizes an umbilical cord insertion point of the placenta from the classified fetal side image and the classified maternal side image. The localized umbilical cord insertion point and the segmentation maps for the classified fetal side and maternal side images are provided to an external device for determining the morphological characterization.
    Type: Application
    Filed: August 18, 2020
    Publication date: February 25, 2021
    Inventors: Alison Gernand, James Z. Wang, Jeffery Goldstein, William Parks, Yukun Chen, Zhuomin Zhang, Dolzodmaa Davaasuren, Chenyan Wu
  • Publication number: 20210000404
    Abstract: An emotion analysis and recognition system including an automated recognition of bodily expression of emotion (ARBEE) system is described. The system may include program instructions executable by a processor to: receive a plurality of body movement models, each body movement model generated based on a crowdsourced body language dataset, calculate at least one evaluation metric for each body movement model, select a highest ranked body movement model based on the at least one metric calculated for each body movement model, combine the highest ranked body movement model with at least one other body movement model of the plurality of body movement models, calculate at least one evaluation metric for each combination of body movement models, and determine a highest ranked combination of body movement models to predict a bodily expression of emotion.
    Type: Application
    Filed: July 1, 2020
    Publication date: January 7, 2021
    Applicant: THE PENN STATE RESEARCH FOUNDATION
    Inventors: James Z. Wang, Yu Luo, Jianbo Ye, Reginald B. Adams
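
As a loose illustration of the select-then-combine step described in the entry above, the sketch below ranks candidate predictors by a validation metric and greedily averages the best one with others when the combination improves the metric. The metric, data, and models are all placeholders, not the ARBEE system itself.

```python
# Hedged sketch: rank candidate predictors by a validation metric, then greedily
# combine the best one with others when averaging their outputs improves the metric.
import numpy as np
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
y_true = rng.normal(size=200)                        # placeholder "emotion" targets
models = {f"model_{i}": y_true + rng.normal(scale=s, size=200)
          for i, s in enumerate([0.3, 0.5, 0.8])}    # placeholder model predictions


def metric(pred):
    # Lower is better for this placeholder evaluation metric.
    return mean_squared_error(y_true, pred)


# Rank the individual body-movement models by the metric.
ranked = sorted(models, key=lambda name: metric(models[name]))
best_name = ranked[0]
ensemble, members = models[best_name].copy(), [best_name]

# Greedily add models whose averaged prediction improves the metric.
for name in ranked[1:]:
    candidate = np.mean([models[m] for m in members + [name]], axis=0)
    if metric(candidate) < metric(ensemble):
        ensemble, members = candidate, members + [name]

print("combination:", members, "score:", round(metric(ensemble), 4))
```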
  • Patent number: 10657651
    Abstract: Systems, methods, and computer-readable media for electronically assessing a visual significance of pixels or regions in an electronic image are disclosed. A method includes receiving the electronic image, performing a composition analysis on the electronic image, the composition analysis includes partitioning the electronic image into a plurality of segments or a plurality of parts, constructing an attributed composition graph having a plurality of nodes, where each node corresponds to a segment or a part and where each node includes one or more attributes, modeling the visual significance of the electronic image based on the attributed composition graph using a statistical modeling process or a computational modeling process to obtain a plurality of values, and constructing a composition significance map having a significance score for each segment or each part according to the values obtained from the statistical modeling process or the computational modeling process.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: May 19, 2020
    Assignee: THE PENN STATE RESEARCH FOUNDATION
    Inventors: Jia Li, James Z. Wang
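
A hedged sketch of the overall flow in the entry above: partition the image into segments, build an attributed graph over the segments, score each segment, and render a per-pixel significance map. The scoring rule here (color contrast of a segment against its neighbors) is only a stand-in for the statistical modeling the patent describes, and the example image is a scikit-image sample.

```python
# Simplified sketch: segment an image, build an attributed composition graph,
# score each segment, and build a per-pixel significance map.
import numpy as np
import networkx as nx
from skimage import data, segmentation

image = data.astronaut()[::4, ::4].astype(float) / 255.0   # small example image
segments = segmentation.slic(image, n_segments=100, start_label=0)

# Node attributes: mean color per segment.
g = nx.Graph()
for label in np.unique(segments):
    g.add_node(label, color=image[segments == label].mean(axis=0))

# Edges between segments that touch (check horizontal and vertical neighbors).
for dy, dx in ((0, 1), (1, 0)):
    a = segments[: segments.shape[0] - dy, : segments.shape[1] - dx]
    b = segments[dy:, dx:]
    for u, v in zip(a.ravel(), b.ravel()):
        if u != v:
            g.add_edge(u, v)

# Stand-in significance score: mean color contrast against adjacent segments.
scores = {}
for node in g.nodes:
    diffs = [np.linalg.norm(g.nodes[node]["color"] - g.nodes[n]["color"])
             for n in g.neighbors(node)]
    scores[node] = float(np.mean(diffs)) if diffs else 0.0

# Per-pixel composition significance map.
significance_map = np.vectorize(scores.get)(segments)
print(significance_map.shape, round(float(significance_map.max()), 3))
```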
  • Publication number: 20190114556
    Abstract: A method of improving the quality of crowdsourced affective data based on agreement relationships between a plurality of annotators includes receiving, by a processor, a collection of stimuli previously given affective labels by the plurality of annotators, executing, by a processor, an algorithm operative to perform the steps including constructing an agreement multigraph as a probabilistic model including a pair-wise status of agreement between the affective labels given by different ones of the plurality of annotators, learning the probabilistic model computationally using the crowdsourced affective data, identifying a reliability of each of the plurality of annotators based on the learned model, and adjusting the crowdsourced affective data by calculating the affective labels of each stimulus based on the identified reliability of each of the plurality of annotators, thereby improving the quality of the crowdsourced affective data.
    Type: Application
    Filed: January 4, 2018
    Publication date: April 18, 2019
    Inventors: Jianbo Ye, Jia Li, James Z. Wang
  • Publication number: 20190114780
    Abstract: Systems, methods, and computer-readable media for electronically assessing a visual significance of pixels or regions in an electronic image are disclosed. A method includes receiving the electronic image, performing a composition analysis on the electronic image, the composition analysis includes partitioning the electronic image into a plurality of segments or a plurality of parts, constructing an attributed composition graph having a plurality of nodes, where each node corresponds to a segment or a part and where each node includes one or more attributes, modeling the visual significance of the electronic image based on the attributed composition graph using a statistical modeling process or a computational modeling process to obtain a plurality of values, and constructing a composition significance map having a significance score for each segment or each part according to the values obtained from the statistical modeling process or the computational modeling process.
    Type: Application
    Filed: December 7, 2018
    Publication date: April 18, 2019
    Inventors: Jia Li, James Z. Wang
  • Patent number: 10186040
    Abstract: Systems, methods, and computer-readable media for electronically assessing a visual significance of pixels or regions in an electronic image are disclosed. A method includes receiving the electronic image, performing a composition analysis on the electronic image, the composition analysis includes partitioning the electronic image into a plurality of segments or a plurality of parts, constructing an attributed composition graph having a plurality of nodes, where each node corresponds to a segment or a part and where each node includes one or more attributes, modeling the visual significance of the electronic image based on the attributed composition graph using a statistical modeling process or a computational modeling process to obtain a plurality of values, and constructing a composition significance map having a significance score for each segment or each part according to the values obtained from the statistical modeling process or the computational modeling process.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: January 22, 2019
    Assignee: THE PENN STATE RESEARCH FOUNDATION
    Inventors: Jia Li, James Z. Wang
  • Patent number: 10043099
    Abstract: Shape features in natural images influence emotions aroused in human beings. An in-depth statistical analysis helps to understand the relationship between shapes and emotions. Through experimental results on the International Affective Picture System (IAPS) dataset, evidence is presented as to the significance of roundness-angularity and simplicity-complexity on predicting emotional content in images. Shape features are combined with other state-of-the-art features to show a gain in prediction and classification accuracy. Emotions are modeled from a dimensional perspective in order to predict valence and arousal ratings, an approach that has advantages over modeling the traditional discrete emotional categories. Images with strong emotional content are distinguished from emotionally neutral images with high accuracy.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: August 7, 2018
    Assignee: The Penn State Research Foundation
    Inventors: James Z. Wang, Xin Lu, Poonam Suryanarayan, Reginald B. Adams, Jia Li, Michelle Newman
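
As a rough, hypothetical illustration of the entry above, the sketch below computes two simple shape features for binary shape masks, a roundness (circularity) measure and a crude complexity proxy, and fits a linear regressor to placeholder valence/arousal ratings. It is not the patented feature set or model.

```python
# Hedged sketch: simple shape features plus a linear model for valence/arousal
# ratings on synthetic data. Not the patented feature set.
import numpy as np
from skimage import measure
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)


def shape_features(mask: np.ndarray) -> np.ndarray:
    """Roundness (4*pi*area / perimeter^2) and a simple complexity proxy."""
    area = mask.sum()
    perimeter = measure.perimeter(mask)
    roundness = 4.0 * np.pi * area / max(perimeter ** 2, 1.0)
    complexity = perimeter / max(np.sqrt(area), 1.0)
    return np.array([roundness, complexity])


def random_mask():
    """Synthetic circular blob mask standing in for a segmented image shape."""
    yy, xx = np.mgrid[:64, :64]
    cy, cx, r = rng.uniform(20, 44, size=3)
    return ((yy - cy) ** 2 + (xx - cx) ** 2) < (r / 2) ** 2


masks = [random_mask() for _ in range(50)]
X = np.stack([shape_features(m) for m in masks])
y = rng.uniform(-1, 1, size=(50, 2))      # placeholder [valence, arousal] ratings

model = Ridge().fit(X, y)
print("predicted valence/arousal:", model.predict(X[:1]))
```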
  • Patent number: 10019658
    Abstract: Satellite images from vast historical archives are analyzed to predict severe storms. We extract and summarize important visual storm evidence from satellite image sequences in a way similar to how meteorologists interpret these images. The method extracts and fits local cloud motions from image sequences to model the storm-related cloud patches. Image data of an entire year are adopted to train the model. The historical storm reports since the year 2000 are used as the ground-truth and statistical priors in the modeling process. Experiments demonstrate the usefulness and potential of the algorithm for producing improved storm forecasts. A preferred method applies cloud motion estimation in image sequences. This aspect of the invention is important because it extracts and models certain patterns of cloud motion, in addition to capturing the cloud displacement.
    Type: Grant
    Filed: August 9, 2017
    Date of Patent: July 10, 2018
    Assignee: THE PENN STATE UNIVERSITY
    Inventors: James Z. Wang, Yu Zhang, Stephen Wistar, Michael A. Steinberg, Jia Li
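
The preferred method named in the entry above relies on estimating cloud motion between consecutive satellite frames. The sketch below uses dense optical flow (OpenCV's Farnebäck method) as one common way to estimate such motion on synthetic placeholder frames; it is an illustrative stand-in, not the patented storm-forecasting model.

```python
# Hedged sketch: dense optical flow between two consecutive grayscale "satellite"
# frames as a proxy for cloud motion. Frames here are synthetic placeholders.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Placeholder frames: a noisy field shifted a few pixels between time steps.
frame_t = (rng.random((128, 128)) * 255).astype(np.uint8)
frame_t1 = np.roll(frame_t, shift=(2, 3), axis=(0, 1))

# Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(frame_t, frame_t1, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# flow[..., 0] is horizontal motion and flow[..., 1] vertical motion, in pixels.
magnitude = np.linalg.norm(flow, axis=-1)
print("mean motion (pixels/frame):", round(float(magnitude.mean()), 2))
```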
  • Patent number: 10013477
    Abstract: Computationally efficient accelerated D2-clustering algorithms are disclosed for clustering discrete distributions under the Wasserstein distance with improved scalability. The three first-order methods are a subgradient descent method with re-parametrization, the alternating direction method of multipliers (ADMM), and a modified version of Bregman ADMM. The effects of the hyper-parameters on robustness, convergence, and speed of optimization are thoroughly examined. A parallel algorithm for the modified Bregman ADMM method is tested in a multi-core environment, achieving adequate scaling efficiency with hundreds of CPUs and demonstrating the effectiveness of AD2-clustering.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: July 3, 2018
    Assignee: The Penn State Research Foundation
    Inventors: Jianbo Ye, Jia Li, James Z. Wang
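
As a much-simplified illustration of clustering discrete distributions under the Wasserstein distance, the sketch below runs a k-medoids-style loop over one-dimensional empirical distributions using SciPy's 1-D Wasserstein distance. The patented first-order AD2-clustering methods (subgradient descent, ADMM, Bregman ADMM) are not reproduced here.

```python
# Hedged sketch: k-medoids-style clustering of 1-D discrete distributions under
# the Wasserstein distance. A toy stand-in for the patented AD2-clustering methods.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy dataset: empirical samples drawn around two different centers.
distributions = ([rng.normal(0.0, 1.0, size=50) for _ in range(10)]
                 + [rng.normal(5.0, 1.0, size=50) for _ in range(10)])
n = len(distributions)

# Pairwise Wasserstein distance matrix.
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(distributions[i], distributions[j])

# k-medoids loop: assign to the nearest medoid, then re-pick each cluster's medoid.
k, medoids = 2, [0, n - 1]
for _ in range(10):
    assignment = np.argmin(D[:, medoids], axis=1)
    new_medoids = []
    for c in range(k):
        members = np.where(assignment == c)[0]
        within = D[np.ix_(members, members)].sum(axis=1)
        new_medoids.append(int(members[np.argmin(within)]))
    if new_medoids == medoids:
        break
    medoids = new_medoids

print("cluster sizes:", np.bincount(assignment, minlength=k))
```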
  • Publication number: 20180150719
    Abstract: Shape features in natural images influence emotions aroused in human beings. An in-depth statistical analysis helps to understand the relationship between shapes and emotions. Through experimental results on the International Affective Picture System (IAPS) dataset, evidence is presented as to the significance of roundness-angularity and simplicity-complexity on predicting emotional content in images. Shape features are combined with other state-of-the-art features to show a gain in prediction and classification accuracy. Emotions are modeled from a dimensional perspective in order to predict valence and arousal ratings, an approach that has advantages over modeling the traditional discrete emotional categories. Images with strong emotional content are distinguished from emotionally neutral images with high accuracy.
    Type: Application
    Filed: January 11, 2018
    Publication date: May 31, 2018
    Inventors: James Z. Wang, Xin Lu, Poonam Suryanarayan, Reginald B. Adams, Jia Li, Michelle Newman
  • Patent number: 9904869
    Abstract: Shape features in natural images influence emotions aroused in human beings. An in-depth statistical analysis helps to understand the relationship between shapes and emotions. Through experimental results on the International Affective Picture System (IAPS) dataset, evidence is presented as to the significance of roundness-angularity and simplicity-complexity on predicting emotional content in images. Shape features are combined with other state-of-the-art features to show a gain in prediction and classification accuracy. Emotions are modeled from a dimensional perspective in order to predict valence and arousal ratings, an approach that has advantages over modeling the traditional discrete emotional categories. Images with strong emotional content are distinguished from emotionally neutral images with high accuracy.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: February 27, 2018
    Assignee: The Penn State Research Foundation
    Inventors: James Z. Wang, Xin Lu, Poonam Suryanarayan, Reginald B. Adams, Jr., Jia Li, Michelle Newman
  • Publication number: 20180018543
    Abstract: Satellite images from vast historical archives are analyzed to predict severe storms. We extract and summarize important visual storm evidence from satellite image sequences in a way similar to how meteorologists interpret these images. The method extracts and fits local cloud motions from image sequences to model the storm-related cloud patches. Image data of an entire year are adopted to train the model. The historical storm reports since the year 2000 are used as the ground-truth and statistical priors in the modeling process. Experiments demonstrate the usefulness and potential of the algorithm for producing improved storm forecasts. A preferred method applies cloud motion estimation in image sequences. This aspect of the invention is important because it extracts and models certain patterns of cloud motion, in addition to capturing the cloud displacement.
    Type: Application
    Filed: August 9, 2017
    Publication date: January 18, 2018
    Inventors: James Z. Wang, Yu Zhang, Stephen Wistar, Michael A. Steinberg, Jia Li
  • Publication number: 20170358090
    Abstract: Systems, methods, and computer-readable media for electronically assessing a visual significance of pixels or regions in an electronic image are disclosed. A method includes receiving the electronic image, performing a composition analysis on the electronic image, the composition analysis includes partitioning the electronic image into a plurality of segments or a plurality of parts, constructing an attributed composition graph having a plurality of nodes, where each node corresponds to a segment or a part and where each node includes one or more attributes, modeling the visual significance of the electronic image based on the attributed composition graph using a statistical modeling process or a computational modeling process to obtain a plurality of values, and constructing a composition significance map having a significance score for each segment or each part according to the values obtained from the statistical modeling process or the computational modeling process.
    Type: Application
    Filed: June 8, 2017
    Publication date: December 14, 2017
    Inventors: Jia Li, James Z. Wang
  • Patent number: 9760805
    Abstract: Satellite images from vast historical archives are analyzed to predict severe storms. We extract and summarize important visual storm evidence from satellite image sequences in a way similar to how meteorologists interpret these images. The method extracts and fits local cloud motions from image sequences to model the storm-related cloud patches. Image data of an entire year are adopted to train the model. The historical storm reports since the year 2000 are used as the ground-truth and statistical priors in the modeling process. Experiments demonstrate the usefulness and potential of the algorithm for producing improved storm forecasts. A preferred method applies cloud motion estimation in image sequences. This aspect of the invention is important because it extracts and models certain patterns of cloud motion, in addition to capturing the cloud displacement.
    Type: Grant
    Filed: October 9, 2015
    Date of Patent: September 12, 2017
    Assignee: The Penn State Research Foundation
    Inventors: James Z. Wang, Yu Zhang, Stephen Wistar, Michael A. Steinberg, Jia Li
  • Patent number: 9727802
    Abstract: An intelligent system detects triangles in digital photographic images, including portrait photography. The method extracts a set of filtered line segments as candidate triangle sides and/or objects as candidate triangle vertices. A modified RANSAC algorithm is utilized to fit triangles onto the set of line segments and/or vertices. Two metrics may then be used to evaluate the fitted triangles. Those with high fitting scores are considered as detected triangles. The system can accurately locate preeminent triangles in photographs without any knowledge about the camera parameters or lens choices. The invention can also help amateurs gain a deeper understanding of, and inspiration from, professional photographic works.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: August 8, 2017
    Assignee: The Penn State Research Foundation
    Inventors: James Z. Wang, Siqiong He
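
The sketch below gives a loose, RANSAC-flavored illustration of the idea in the entry above: repeatedly sample three candidate vertices, form a triangle, and score it by how many candidate edge points lie close to its sides. The candidate points, scoring rule, and thresholds are placeholders, not the patented metrics.

```python
# Hedged sketch: RANSAC-style triangle fitting over candidate points. The scoring
# (inlier count near the triangle's sides) is a placeholder, not the patented metric.
import numpy as np

rng = np.random.default_rng(1)


def point_to_segment_distance(points, a, b):
    """Distance from each point to the segment a-b."""
    ab, ap = b - a, points - a
    t = np.clip((ap @ ab) / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1)


# Candidate points: noisy samples along the sides of a hidden triangle plus clutter.
true_triangle = np.array([[10.0, 10.0], [90.0, 20.0], [40.0, 80.0]])
edges = [(0, 1), (1, 2), (2, 0)]
side_points = np.concatenate([
    true_triangle[i] + np.linspace(0, 1, 30)[:, None] * (true_triangle[j] - true_triangle[i])
    for i, j in edges
]) + rng.normal(scale=0.5, size=(90, 2))
clutter = rng.uniform(0, 100, size=(40, 2))
points = np.concatenate([side_points, clutter])

best_score, best_triangle = -1, None
for _ in range(500):
    vertices = points[rng.choice(len(points), size=3, replace=False)]
    dists = np.min([point_to_segment_distance(points, vertices[i], vertices[j])
                    for i, j in edges], axis=0)
    score = int((dists < 1.5).sum())          # inliers within 1.5 px of a side
    if score > best_score:
        best_score, best_triangle = score, vertices

print("best inlier count:", best_score)
print("fitted vertices:\n", np.round(best_triangle, 1))
```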
  • Patent number: 9720998
    Abstract: The trend of analyzing big data in artificial intelligence requires more scalable machine learning algorithms, among which clustering is a fundamental and arguably the most widely applied method. To extend the applications of regular vector-based clustering algorithms, the Discrete Distribution (D2) clustering algorithm has been developed for clustering bags of weighted vectors, which are well adopted in many emerging machine learning applications. The high computational complexity of D2-clustering limits its impact in solving massive learning problems. Here we present a parallel D2-clustering algorithm with substantially improved scalability. We develop a hierarchical structure for parallel computing in order to achieve a balance between the individual-node computation and the integration process of the algorithm. The parallel algorithm achieves significant speed-up with minor accuracy loss.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: August 1, 2017
    Assignee: The Penn State Research Foundation
    Inventors: James Z. Wang, Yu Zhang, Jia Li
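
The entry above centers on distributing the clustering workload across workers. The sketch below only illustrates parallelizing the cluster-assignment step with Python's multiprocessing over a toy distance; the hierarchical structure and D2 centroid updates of the patent are not reproduced, and the "distance" used is a stand-in for the Wasserstein distance.

```python
# Hedged sketch: parallelizing the cluster-assignment step across processes.
# The toy distance below is a stand-in for the Wasserstein distance.
from multiprocessing import Pool

import numpy as np

rng = np.random.default_rng(0)
data = [rng.normal(loc=rng.choice([0.0, 5.0]), size=30) for _ in range(200)]
centroids = [np.full(30, 0.0), np.full(30, 5.0)]          # toy centroids


def assign(sample):
    """Return the index of the nearest centroid under a toy distance."""
    return int(np.argmin([abs(sample.mean() - c.mean()) for c in centroids]))


if __name__ == "__main__":
    with Pool(processes=4) as pool:
        assignments = pool.map(assign, data)
    print("cluster sizes:", np.bincount(assignments))
```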
  • Patent number: 9646226
    Abstract: Automatic selection of training images is enhanced using an instance-weighted mixture modeling framework called ARTEMIS. An optimization algorithm is derived that, in addition to mixture parameter estimation, learns instance weights, essentially adapting to the noise associated with each example. The mechanism of hypothetical local mapping is evoked so that data in diverse mathematical forms or modalities can be cohesively treated as the system maintains tractability in optimization. Training examples are selected from top-ranked images of a likelihood-based image ranking. Experiments indicate that ARTEMIS exhibits higher resilience to noise than several baselines for large training data collection. The performance of an ARTEMIS-trained image annotation system is comparable to that obtained using manually curated datasets.
    Type: Grant
    Filed: April 16, 2014
    Date of Patent: May 9, 2017
    Assignee: The Penn State Research Foundation
    Inventors: James Z. Wang, Neela Sawant, Jia Li
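
As a simplified nod to the likelihood-based ranking mentioned in the entry above, the sketch below fits a Gaussian mixture to placeholder image feature vectors and keeps the top-ranked examples by model likelihood as training candidates. The instance weighting and hypothetical local mapping of ARTEMIS are not reproduced.

```python
# Hedged sketch: likelihood-based ranking of candidate training examples with a
# Gaussian mixture. Not the ARTEMIS instance-weighted framework itself.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder "image features": mostly coherent samples plus some noisy outliers.
features = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(180, 8)),   # plausible examples
    rng.normal(loc=0.0, scale=6.0, size=(20, 8)),    # likely noise
])

mixture = GaussianMixture(n_components=3, random_state=0).fit(features)
log_likelihood = mixture.score_samples(features)

# Keep the top-ranked examples (highest likelihood) as training candidates.
top_k = 150
selected = np.argsort(log_likelihood)[::-1][:top_k]
print("selected", len(selected), "of", len(features), "examples")
```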