Patents by Inventor David Simons
David Simons has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12290783
Abstract: Gaseous carbon compounds can be converted to carbon-neutral or carbon-negative products using biological processes to metabolise the gaseous carbon compounds, or thermochemical processes to convert the gaseous carbon compounds to syngas followed by thermochemical or biological processes to produce products. The gaseous carbon compounds include a mixture of CO2 and CH4 from either a single source or two or more different sources. Separate biological processes are incorporated to process different gaseous carbon compounds. A gaseous carbon compound produced as a by-product of one biological process can be used as at least part of the feedstock for another process. A renewable energy system can be provided to power equipment. A control system can be used to control the flow of gaseous carbon compounds and reactants entering the carbon processing systems to provide mass-balanced quantities of gaseous carbon compounds and reactants in each processing system and/or between processing systems.
Type: Grant
Filed: July 7, 2023
Date of Patent: May 6, 2025
Assignee: Woodside Energy Technologies Pty Ltd
Inventors: Jitendra Achyut Joshi, Min Ao, Michael Edward Lev Massen-Hane, Qiqing Shen, Sui Boon Liaw, Alexander David Simons
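As an illustrative sketch only (not the claimed control system), the mass-balancing idea in the abstract could be expressed as apportioning two gas streams to a target ratio; the function name `balanced_feed` and the fixed molar-ratio parameter are hypothetical simplifications:

```python
def balanced_feed(co2_rate, ch4_rate, ratio_co2_per_ch4):
    """Apportion CO2 and CH4 feed rates so they enter a processing
    system in a target ratio; report the surplus gas stream."""
    co2_needed = ch4_rate * ratio_co2_per_ch4
    if co2_needed <= co2_rate:
        # CH4 is the limiting stream; surplus CO2 remains
        return co2_needed, ch4_rate, ("CO2", co2_rate - co2_needed)
    # CO2 is the limiting stream; surplus CH4 remains
    ch4_used = co2_rate / ratio_co2_per_ch4
    return co2_rate, ch4_used, ("CH4", ch4_rate - ch4_used)
```

The surplus stream could then be routed as feedstock to a second processing system, in the spirit of the by-product reuse the abstract describes.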
-
Publication number: 20230347287
Abstract: Gaseous carbon compounds can be converted to carbon-neutral or carbon-negative products using biological processes to metabolise the gaseous carbon compounds, or thermochemical processes to convert the gaseous carbon compounds to syngas followed by thermochemical or biological processes to produce products. The gaseous carbon compounds include a mixture of CO2 and CH4 from either a single source or two or more different sources. Separate biological processes are incorporated to process different gaseous carbon compounds. A gaseous carbon compound produced as a by-product of one biological process can be used as at least part of the feedstock for another process. A renewable energy system can be provided to power equipment. A control system can be used to control the flow of gaseous carbon compounds and reactants entering the carbon processing systems to provide mass-balanced quantities of gaseous carbon compounds and reactants in each processing system and/or between processing systems.
Type: Application
Filed: July 7, 2023
Publication date: November 2, 2023
Inventors: Jitendra Achyut JOSHI, Min AO, Michael Edward Lev MASSEN-HANE, Qiqing SHEN, Sui Boon LIAW, Alexander David SIMONS
-
Patent number: 11554044
Abstract: Disclosed are implantable ocular drainage devices that include a reversible switch mechanism to control flow through the drainage device. The devices include a drainage tube and a reversible, bi-stable switch mechanism that includes first and second stationary magnets spaced apart from one another and a magnetic or ferromagnetic mobile element moveably disposed in a channel in the housing.
Type: Grant
Filed: September 11, 2018
Date of Patent: January 17, 2023
Assignee: OREGON HEALTH & SCIENCE UNIVERSITY
Inventors: David Simons, Robert Kinast
-
Patent number: 11211060
Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
Type: Grant
Filed: May 29, 2020
Date of Patent: December 28, 2021
Assignee: Adobe Inc.
Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
-
Patent number: 11017503
Abstract: Techniques for atmospheric and solar correction of aerial images are described. An apparatus may comprise an atmospheric and solar component arranged for execution by a logic device and operative to correct solar and atmosphere artifacts from an aerial image. The atmospheric and solar component may comprise an image information component operative to generate an image record for each aerial image of a group of aerial images, the image record comprising statistical information and image context information for each aerial image, a filter generation component operative to generate an atmospheric filter and a solar filter from the statistical information and the image context information stored in the image records, and an image correction component operative to correct atmospheric and solar artifacts from the aerial image using the respective atmospheric filter and solar filter. Other embodiments are described and claimed.
Type: Grant
Filed: February 20, 2017
Date of Patent: May 25, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ido Omer, Yuxiang Liu, Wolfgang Schickler, Robert Ledner, Leon Rosenshein, David Simons
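To illustrate the per-image record / filter / correction pipeline the abstract outlines (a minimal sketch under assumed simplifications, not the patented filters), one could compute image statistics and derive a gain/offset correction from them; all names here are hypothetical:

```python
def image_record(pixels):
    """Per-image statistical record (mean and standard deviation)."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return {"mean": mean, "std": var ** 0.5}

def correction_filter(record, target_mean=128.0, target_std=48.0):
    """Derive a linear gain/offset filter from the image record."""
    gain = target_std / record["std"] if record["std"] else 1.0
    offset = target_mean - gain * record["mean"]
    return gain, offset

def apply_filter(pixels, gain, offset):
    """Correct the image with the filter, clamping to 8-bit range."""
    return [min(255.0, max(0.0, gain * p + offset)) for p in pixels]
```

A real system would derive separate atmospheric and solar filters and use image context (time of day, sun angle); the skeleton above only shows the record-then-filter-then-correct flow.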
-
Patent number: 10783691
Abstract: Certain embodiments involve generating one or more of an appearance guide and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
Type: Grant
Filed: November 12, 2019
Date of Patent: September 22, 2020
Assignees: ADOBE INC., CZECH TECHNICAL UNIVERSITY IN PRAGUE
Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
-
Publication number: 20200294495
Abstract: Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and represents the phonemes. The application adjusts the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
Type: Application
Filed: May 29, 2020
Publication date: September 17, 2020
Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
-
Publication number: 20200276050
Abstract: Disclosed are implantable ocular drainage devices that include a reversible switch mechanism to control flow through the drainage device. The devices include a drainage tube and a reversible, bi-stable switch mechanism that includes first and second stationary magnets spaced apart from one another and a magnetic or ferromagnetic mobile element moveably disposed in a channel in the housing.
Type: Application
Filed: September 11, 2018
Publication date: September 3, 2020
Applicant: Oregon Health & Science University
Inventors: David SIMONS, Robert KINAST
-
Patent number: 10699705
Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data by adjusting a second audio sequence, spoken by a second speaker speaking the same sentence, such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in both sequences. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
Type: Grant
Filed: June 22, 2018
Date of Patent: June 30, 2020
Assignee: Adobe Inc.
Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
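The core data-augmentation step in the abstract (aligning a second speaker's audio to the first speaker's timeline so existing viseme labels can be reused) can be sketched very roughly as nearest-neighbour resampling; this is a toy stand-in, not the patented alignment, and both function names are hypothetical:

```python
def stretch_to_length(seq, target_len):
    """Nearest-neighbour resample a frame sequence to a target length,
    so a second speaker's frames line up with the first speaker's timeline."""
    n = len(seq)
    return [seq[min(n - 1, int(i * n / target_len))] for i in range(target_len)]

def make_training_pairs(visemes, second_audio_frames):
    """Reuse the first speaker's viseme labels for the aligned second audio."""
    aligned = stretch_to_length(second_audio_frames, len(visemes))
    return list(zip(aligned, visemes))
```

A practical system would align at phoneme boundaries (e.g. via dynamic time warping) rather than uniformly, but the label-transfer idea is the same.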
-
Publication number: 20200082591
Abstract: Certain embodiments involve generating one or more of an appearance guide and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
Type: Application
Filed: November 12, 2019
Publication date: March 12, 2020
Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
-
Publication number: 20190392823
Abstract: Disclosed systems and methods predict visemes from an audio sequence. A viseme-generation application accesses a first set of training data that includes a first audio sequence representing a sentence spoken by a first speaker and a sequence of visemes. Each viseme is mapped to a respective audio sample of the first audio sequence. The viseme-generation application creates a second set of training data by adjusting a second audio sequence, spoken by a second speaker speaking the same sentence, such that the second and first sequences have the same length and at least one phoneme occurs at the same time stamp in both sequences. The viseme-generation application maps the sequence of visemes to the second audio sequence and trains a viseme prediction model to predict a sequence of visemes from an audio sequence.
Type: Application
Filed: June 22, 2018
Publication date: December 26, 2019
Inventors: Wilmot Li, Jovan Popovic, Deepali Aneja, David Simons
-
Patent number: 10504267
Abstract: Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
Type: Grant
Filed: October 16, 2017
Date of Patent: December 10, 2019
Assignee: Adobe Inc.
Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
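The positional guide in the abstract maps target and style features onto a common grid so they can be paired by position. A minimal sketch of that idea (illustrative only; `positional_guide` and its normalization scheme are assumptions, not the claimed guide):

```python
def positional_guide(points, grid_size):
    """Map feature coordinates into cells of a common normalized grid,
    so matching target/style features can be paired by cell index."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)

    def cell(v, lo, hi):
        span = (hi - lo) or 1.0  # avoid division by zero for degenerate axes
        return min(grid_size - 1, int((v - lo) / span * grid_size))

    return [(cell(x, min_x, max_x), cell(y, min_y, max_y)) for x, y in points]
```

Running the same mapping on both images puts, say, an eye feature in the target and the corresponding eye feature in the exemplar into the same grid cell, which is what makes position-based texture transfer possible.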
-
Patent number: 10489959
Abstract: Certain embodiments involve automatically generating a layered animatable puppet using a content stream. For example, a system identifies various frames of a content stream that includes a character performing various gestures usable for generating a layered puppet. The system separates the various frames of the content stream into various individual layers. The system extracts a face of the character from the various individual layers and creates the layered puppet by combining the individual layers and using the face of the character. The system can output the layered puppet for animation to perform a gesture of the various gestures.
Type: Grant
Filed: October 17, 2017
Date of Patent: November 26, 2019
Assignee: Adobe Inc.
Inventors: David Simons, Jakub Fiser
-
Patent number: 10467822
Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
Type: Grant
Filed: February 20, 2018
Date of Patent: November 5, 2019
Assignee: Adobe Inc.
Inventors: Rinat Abdrashitov, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shectman, Duygu Ceylan Aksit, David Simons
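The key check in this abstract is detecting collisions that a stylization *introduces*, i.e. collisions present in the stylized trajectories but absent in the originals. A toy sketch of that comparison (illustrative only; the distance threshold and both function names are assumptions):

```python
def collisions(traj_a, traj_b, radius=1.0):
    """Time stamps at which two tracked points come within `radius`."""
    hits = set()
    for t, ((ax, ay), (bx, by)) in enumerate(zip(traj_a, traj_b)):
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < radius:
            hits.add(t)
    return hits

def new_collisions(orig_a, orig_b, styl_a, styl_b, radius=1.0):
    """Collisions introduced by the stylization (absent in the originals)."""
    return collisions(styl_a, styl_b, radius) - collisions(orig_a, orig_b, radius)
```

If `new_collisions` is non-empty, the stylization would be modified and re-checked before the landmark data is output, matching the retry loop the abstract describes.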
-
Publication number: 20190259214
Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
Type: Application
Filed: February 20, 2018
Publication date: August 22, 2019
Inventors: Rinat Abdrashitov, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shectman, Duygu Ceylan Aksit, David Simons
-
Publication number: 20180350123
Abstract: Certain embodiments involve automatically generating a layered animatable puppet using a content stream. For example, a system identifies various frames of a content stream that includes a character performing various gestures usable for generating a layered puppet. The system separates the various frames of the content stream into various individual layers. The system extracts a face of the character from the various individual layers and creates the layered puppet by combining the individual layers and using the face of the character. The system can output the layered puppet for animation to perform a gesture of the various gestures.
Type: Application
Filed: October 17, 2017
Publication date: December 6, 2018
Inventors: David Simons, Jakub Fiser
-
Publication number: 20180350030
Abstract: Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
Type: Application
Filed: October 16, 2017
Publication date: December 6, 2018
Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
-
Patent number: 10043309
Abstract: An input mesh can be decomposed into component meshes that can be independently simplified. A computing device can calculate costs of performing candidate edge collapses for a component mesh. The candidate edge collapses can include boundary edge collapses and interior edge collapses. To simplify a component mesh, the execution of boundary edge collapses and the execution of interior edge collapses are interleaved in an order based on the costs of performing the candidate edge collapses. The position of a vertex resulting from a boundary edge collapse can be calculated independently of the interior of the component mesh. When component meshes are simplified in parallel, a boundary that is common to the component meshes can be simplified identically.
Type: Grant
Filed: December 14, 2015
Date of Patent: August 7, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Philip Starhill, Christopher Messer, James Undery, Keith Seifert, David Simons, Maksim Lepikhin
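The interleaving described in the abstract (executing boundary and interior edge collapses in a single cost order, rather than all of one kind first) reduces to ordering a mixed pool of candidates by cost. A minimal priority-queue sketch, with hypothetical edge/cost inputs and no actual mesh geometry:

```python
import heapq

def collapse_order(boundary_edges, interior_edges):
    """Interleave boundary and interior edge collapses purely by cost.
    Each input is a list of (edge, cost) pairs; returns (kind, edge)
    tuples in the order the collapses would be executed."""
    heap = [(cost, "boundary", edge) for edge, cost in boundary_edges]
    heap += [(cost, "interior", edge) for edge, cost in interior_edges]
    heapq.heapify(heap)
    order = []
    while heap:
        _cost, kind, edge = heapq.heappop(heap)
        order.append((kind, edge))
    return order
```

A real simplifier would recompute costs as collapses change the mesh; the sketch only shows why a cheap interior collapse can run before an expensive boundary one.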
-
Patent number: 9973744
Abstract: One exemplary embodiment involves receiving a plurality of three-dimensional (3D) track points for a plurality of frames of a video, wherein the 3D track points are extracted from a plurality of two-dimensional source points. The embodiment further involves rendering the 3D track points across a plurality of frames of the video on a two-dimensional (2D) display. Additionally, the embodiment involves coloring each of the 3D track points, wherein the color of each 3D track point visually distinguishes it from a plurality of surrounding 3D track points and is consistent across the frames of the video. The embodiment also involves sizing each of the 3D track points based on a distance between the camera that captured the video and the location of the 2D source points referenced by the respective 3D track point.
Type: Grant
Filed: October 19, 2015
Date of Patent: May 15, 2018
Assignee: Adobe Systems Incorporated
Inventors: James Acquavella, David Simons, Daniel M. Wilk
-
Patent number: 9956418
Abstract: The disclosure provides a system that displays graphical representations of posture zones associated with posture states of a patient on a display device communicatively coupled to a medical device. The medical device is configured to deliver therapy to the patient based on detected posture states of the patient, where the detected posture state is based on the posture zones. The display device may allow a user to manipulate the graphical representations of the posture zones, including changing the size of the posture zones. Additionally, the display device may allow a user to change transition times associated with transitions between posture states, and may display an indication of the changed transition time by highlighting the two graphical representations of the posture zones corresponding to the posture states associated with the changed transition time.
Type: Grant
Filed: January 6, 2011
Date of Patent: May 1, 2018
Assignee: Medtronic, Inc.
Inventors: Jon P. Davis, Rajeev M. Sahasrabudhe, David Simons