Patents by Inventor Mark Sagar

Mark Sagar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230410399
    Abstract: Skeletal Animation is improved using an Actuation System for animating a Virtual Character or Digital Entity including a plurality of Joints associated with a Skeleton of the Virtual Character or Digital Entity and at least one Actuation Unit Descriptor defining a Skeletal Pose with respect to a first Skeletal Pose. The Actuation Unit Descriptors are represented using Rotation Parameters and one or more of the Joints of the Skeleton are driven using corresponding Actuation Unit Descriptors.
    Type: Application
    Filed: November 22, 2021
    Publication date: December 21, 2023
    Inventors: Mark Sagar, Jo Hutton, Tim Wu, Tiago Ribeiro, Pavel Sumetc
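The idea of driving joints from actuation unit descriptors defined relative to a rest pose can be sketched roughly as follows. This is an illustrative toy, not the patented implementation; the unit names, Euler-angle representation, and additive blending are all assumptions for the sketch.

```python
import numpy as np

# Hypothetical actuation unit descriptors: per-joint rotation offsets
# (Euler angles, radians) expressed relative to a first (rest) pose.
ACTUATION_UNITS = {
    "smile": {"jaw": np.array([0.0, 0.0, 0.1])},
    "jaw_open": {"jaw": np.array([0.4, 0.0, 0.0])},
}

REST_POSE = {"jaw": np.zeros(3), "neck": np.zeros(3)}

def drive_skeleton(unit_weights, rest_pose=REST_POSE, units=ACTUATION_UNITS):
    """Blend weighted actuation-unit rotation offsets onto the rest pose."""
    pose = {joint: rot.copy() for joint, rot in rest_pose.items()}
    for unit, weight in unit_weights.items():
        for joint, offset in units[unit].items():
            pose[joint] = pose[joint] + weight * offset
    return pose

pose = drive_skeleton({"smile": 0.5, "jaw_open": 1.0})
```

A production system would use quaternions or rotation matrices rather than naively summed Euler angles, but the descriptor-driven structure is the same.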
  • Publication number: 20230377238
    Abstract: Embodiments described herein relate to the autonomous animation of Gestures by the automatic application of animations to Input Text, or the automatic application of animation Mark-up, wherein the Mark-up triggers nonverbal communication expressions or Gestures. In order for an Embodied Agent's movements to come across as natural and human-like as possible, a Text-To-Gesture Algorithm (TTG Algorithm) analyses the Input Text of a Communicative Utterance before it is uttered by an Embodied Agent, and marks it up with appropriate and meaningful Gestures given the meaning, context, and emotional content of the Input Text and the gesturing style or personality of the Embodied Agent.
    Type: Application
    Filed: May 18, 2023
    Publication date: November 23, 2023
    Inventors: Jo Hutton, Mark Sagar, Amy Wang, Hannah Clark-Younger, Kirstin Marcon, Paige Skinner, Shane Blakett, Teah Rota, Tim Szu-Hsien Wu, Utkarsh Saxena, Xueyuan Zhang, Hazel Watson-Smith, Travers Biddle, Emma Perry
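A minimal flavour of text-to-gesture mark-up can be given with keyword rules that insert gesture tags into the input text before it is uttered. The rules, tag syntax, and gesture names below are invented for illustration; the patented TTG algorithm additionally weighs meaning, context, emotion, and the agent's gesturing personality.

```python
import re

# Hypothetical keyword -> gesture rules (the real system is far richer).
GESTURE_RULES = {
    r"\bhello\b": "wave",
    r"\bbig\b": "wide_arms",
    r"\bno\b": "head_shake",
}

def mark_up_gestures(text):
    """Return the input text with inline gesture mark-up before matched words."""
    for pattern, gesture in GESTURE_RULES.items():
        text = re.sub(pattern,
                      lambda m, g=gesture: f"<gesture:{g}/>{m.group(0)}",
                      text)
    return text

marked = mark_up_gestures("hello, that is a big dog")
```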
  • Publication number: 20230334253
    Abstract: A computer implemented method for parsing a sensorimotor Event experienced by an Embodied Agent into symbolic fields of a WM event representation mapping to a sentence defining the Event is described, the method including the steps of: attending a participant object; classifying the participant object; and making a series of cascading determinations about the Event, wherein some determinations are conditional on the results of previous determinations, and wherein each determination sets a field in the WM event representation.
    Type: Application
    Filed: September 24, 2021
    Publication date: October 19, 2023
    Inventors: Mark Sagar, Alistair Knott, Martin Takac
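The cascading, field-setting structure of such an event parser can be sketched as below. The field names and percept keys are hypothetical, but they show how later determinations (here, filling the patient slot) depend on earlier ones (the transitivity decision).

```python
def parse_event(percepts):
    """Cascading determinations: each step may depend on earlier fields."""
    wm = {"agent": None, "action": None, "patient": None, "transitive": None}
    wm["agent"] = percepts.get("attended_object")      # attend a participant
    wm["action"] = percepts.get("observed_motion")     # classify the event
    wm["transitive"] = percepts.get("target_object") is not None
    if wm["transitive"]:                               # conditional determination
        wm["patient"] = percepts["target_object"]
    return wm

event = parse_event({"attended_object": "girl",
                     "observed_motion": "kick",
                     "target_object": "ball"})
```

The filled fields map naturally onto a sentence such as "the girl kicks the ball".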
  • Publication number: 20230079478
    Abstract: Methods and systems describe providing face mesh deformation with detailed wrinkles. A neutral mesh based on a scan of a face is provided along with initial control point positions on the neutral mesh and user-defined control point positions corresponding to a non-neutral facial expression. A radial basis function (RBF) deformed mesh is generated based on RBF interpolation of the initial control point positions and the user-defined control point positions. Predicted wrinkle deformation data is then generated by one or more cascaded regressors networks. Finally, a final deformed mesh is provided with wrinkles based on the predicted wrinkle deformation data.
    Type: Application
    Filed: February 10, 2021
    Publication date: March 16, 2023
    Inventors: Colin HODGES, David GOULD, Mark SAGAR, Tim WU, Sibylle VAN HOVE, Alireza NEJATI, Werner OLLEWAGEN, Xueyuan ZHANG
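The RBF-deformation stage described above can be sketched with a Gaussian kernel: solve for weights that exactly reproduce the control-point displacements, then evaluate the interpolant at every mesh vertex. This is a generic RBF sketch under assumed kernel and parameter choices, and it omits the cascaded wrinkle-regressor stage entirely.

```python
import numpy as np

def rbf_deform(vertices, ctrl_init, ctrl_target, eps=1.0):
    """Deform mesh vertices by RBF-interpolating control-point displacements.

    Gaussian kernel; the solve makes the interpolant reproduce the
    control-point displacements exactly at the control points.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)

    disp = ctrl_target - ctrl_init                       # (m, 3) displacements
    weights = np.linalg.solve(kernel(ctrl_init, ctrl_init), disp)
    return vertices + kernel(vertices, ctrl_init) @ weights

ctrl_init = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
ctrl_target = np.array([[0.0, 0.5, 0.0], [1.0, 0.0, 0.0]])
deformed = rbf_deform(ctrl_init, ctrl_init, ctrl_target)
```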
  • Publication number: 20220382511
    Abstract: A Markup System includes a Rule Processor, and a set of Rules for applying Markup to augment the communication of a Communicative Intent by an Embodied Agent. Markup applied to a Communicative Utterance applies Behaviour Modifiers and/or Elegant Variations to the Communicative Utterances.
    Type: Application
    Filed: July 9, 2020
    Publication date: December 1, 2022
    Inventors: Mark SAGAR, Alistair KNOTT, Rachel LOVE, Robert Jason MUNRO
  • Publication number: 20220358343
    Abstract: Embodiments of architecture, systems, and methods for modeling dynamics between behavior and emotional states in an artificial nervous system are described herein. A computer implemented emotion system of an artificial nervous system for animating a virtual object, digital entity, or robot, is provided, comprising: a plurality of states, each state of the plurality of states representing an emotional state (ES) of the artificial nervous system; a module for processing a plurality of inputs, the processed plurality of inputs applied to the plurality of states. Other embodiments may be described and claimed.
    Type: Application
    Filed: July 3, 2020
    Publication date: November 10, 2022
    Inventor: Mark SAGAR
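One very loose reading of "inputs applied to a plurality of emotional states" is a state vector that decays toward baseline while processed stimuli push individual states up or down. Everything below (state names, decay model, clamping) is an assumption made for illustration only.

```python
# Hypothetical emotional-state dynamics: decay toward zero, plus input.
STATES = ["joy", "fear", "surprise"]

def step_emotions(es, stimuli, decay=0.9):
    """One update: decay current levels, then add mapped stimulus input."""
    es = {k: decay * v for k, v in es.items()}
    for state, delta in stimuli.items():
        es[state] = min(1.0, max(0.0, es[state] + delta))
    return es

es = {s: 0.0 for s in STATES}
es = step_emotions(es, {"joy": 0.5})
es = step_emotions(es, {"fear": 0.2})
```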
  • Publication number: 20220358403
    Abstract: Embodiments described herein relate to a method of changing the connectivity of a Cognitive Architecture for animating an Embodied Agent, which may be a virtual object, digital entity, and/or robot, by applying Mask Variables to Connectors linking computational Modules. Mask Variables may turn Connectors on or off, or, more flexibly, they may modulate the strength of Connectors. Operations which apply several Mask Variables at once put the Cognitive Architecture in different Cognitive Modes of behaviour.
    Type: Application
    Filed: July 8, 2020
    Publication date: November 10, 2022
    Inventors: Mark SAGAR, Alistair KNOTT, Martin TAKAC, Xiaohang FU
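The mask-variable mechanism can be sketched as multiplicative gates on connector strengths, with a cognitive mode being a bundle of masks applied at once. Module names and the example "listening" mode are invented for the sketch.

```python
# Hypothetical connectors between modules, each with a strength in [0, 1].
CONNECTORS = {("vision", "memory"): 1.0,
              ("memory", "speech"): 1.0,
              ("vision", "speech"): 1.0}

COGNITIVE_MODES = {
    # A mode applies several mask variables at once: here it switches the
    # direct vision->speech pathway off and attenuates memory->speech.
    "listening": {("vision", "speech"): 0.0, ("memory", "speech"): 0.3},
}

def apply_mode(connectors, mode):
    """Multiply each connector strength by the mode's mask variable."""
    masks = COGNITIVE_MODES[mode]
    return {link: strength * masks.get(link, 1.0)
            for link, strength in connectors.items()}

gated = apply_mode(CONNECTORS, "listening")
```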
  • Publication number: 20220358369
    Abstract: Computational structures provide Embodied Agents with memory which can be populated in real time from Experience, and/or authored. Embodied Agents (which may be virtual objects, digital entities or robots) are provided with one or more Experience Memory Stores which influence or direct the behaviour of the Embodied Agents. An Experience Memory Store may include a Convergence Divergence Zone (CDZ), which simulates the ability of human memory to represent external reality in the form of mental imagery or simulation that can be re-experienced during recall. A Memory Database may be generated in a simple, authorable way, enabling Experiences to be learned during live operation of the Embodied Agents or authored. Eligibility-Based Learning determines which aspects from streams of multimodal information are stored in the Experience Memory Store.
    Type: Application
    Filed: July 8, 2020
    Publication date: November 10, 2022
    Inventors: Mark SAGAR, Alistair KNOTT, Martin TAKAC, Xiaohang FU
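Two of the ideas above, eligibility-gated storage and cueing a whole multimodal experience from a single modality, can be sketched in a few lines. The class name, threshold, and modality keys are illustrative assumptions, not the patented design.

```python
# Hypothetical experience memory store: multimodal events are stored only
# when an eligibility signal crosses a threshold, and any one modality
# can cue recall of the whole stored experience (CDZ-like behaviour).
class ExperienceMemory:
    def __init__(self, eligibility_threshold=0.5):
        self.threshold = eligibility_threshold
        self.experiences = []            # list of multimodal dicts

    def observe(self, experience, eligibility):
        """Store the experience only if it is eligible (e.g. salient)."""
        if eligibility >= self.threshold:
            self.experiences.append(experience)

    def recall(self, modality, cue):
        """Cue with one modality; re-experience all stored modalities."""
        for exp in reversed(self.experiences):
            if exp.get(modality) == cue:
                return exp
        return None

mem = ExperienceMemory()
mem.observe({"vision": "red ball", "audio": "bounce"}, eligibility=0.9)
mem.observe({"vision": "wall", "audio": "silence"}, eligibility=0.1)
```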
  • Publication number: 20220222508
    Abstract: Disclosed is a machine-learning model-based chunker (the “Sequencer”) that learns to predict the next element in a sequence and detects the boundary between sequences. At the end of a sequence, a declarative representation of the whole sequence is stored, together with its effect. The effect is measured as the difference between the system states at the end and at the start of the chunk. The Sequencer can be combined with a Planner that works with the Sequencer to recognize what plan a developing incoming sequence can be a part of and thus to predict the next element in that sequence. In embodiments where the effect of a plan is represented by a multi-dimensional vector, with different attentional weights placed on each dimension, the Planner calculates the distance between the desired state and the effects generated by individual plans, weighting its calculation by the attentional foci.
    Type: Application
    Filed: April 30, 2020
    Publication date: July 14, 2022
    Inventors: Martin Takac, Alistair Knott, Mark Sagar
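The Planner's selection step described above (attention-weighted distance between a desired state change and each plan's effect vector) can be sketched directly. The plan names and vectors are invented; only the weighted-distance selection mirrors the abstract.

```python
import numpy as np

# Hypothetical plans, each with an effect vector (end state minus start state).
PLANS = {
    "grasp": np.array([1.0, 0.0, 0.0]),
    "speak": np.array([0.0, 1.0, 0.2]),
}

def select_plan(desired_effect, attention, plans=PLANS):
    """Return the plan minimizing the attention-weighted distance to the goal."""
    def dist(effect):
        return np.sqrt((attention * (effect - desired_effect) ** 2).sum())
    return min(plans, key=lambda name: dist(plans[name]))

best = select_plan(np.array([0.0, 1.0, 0.0]),
                   attention=np.array([1.0, 1.0, 0.1]))
```

With low attentional weight on the third dimension, the residual 0.2 of "speak" barely matters and "speak" is selected.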
  • Publication number: 20220108510
    Abstract: To realistically animate a String (such as a sentence) a hierarchical search algorithm is provided to search for stored examples (Animation Snippets) of sub-strings of the String, in decreasing order of sub-string length, and concatenate retrieved sub-strings to complete the String of speech animation. In one embodiment, real-time generation of speech animation uses model visemes to predict the animation sequences at onsets of visemes and a look-up table based (data-driven) algorithm to predict the dynamics at transitions of visemes. Specifically posed Model Visemes may be blended with speech animation generated using another method at corresponding time points in the animation when the visemes are to be expressed.
    Type: Application
    Filed: January 27, 2020
    Publication date: April 7, 2022
    Inventors: Mark Sagar, Tim Szu-Hsien Wu, Xiani Tan, Xueyuan Zhang
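The hierarchical search can be sketched as a greedy longest-match cover: repeatedly take the longest stored snippet that prefixes the remaining string, and concatenate the results. The snippet inventory below is a toy stand-in for a store of animation snippets.

```python
# Hypothetical snippet store; real entries would be animation clips
# keyed by sub-strings, not the bare strings used here.
SNIPPETS = {"hel", "lo", "he", "l", "o", "wor", "ld", "w"}

def animate_string(text, snippets=SNIPPETS):
    """Greedily cover `text` with the longest available snippets."""
    parts = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):   # longest sub-string first
            chunk = text[i:i + length]
            if chunk in snippets:
                parts.append(chunk)
                i += length
                break
        else:
            raise ValueError(f"no snippet covers {text[i]!r}")
    return parts

parts = animate_string("helloworld")
```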
  • Publication number: 20210390751
    Abstract: A method for creating a model of a virtual object or digital entity is described, the method comprising receiving a plurality of basic shapes for a plurality of models; receiving a plurality of specified modification variables specifying a modification to be made to the basic shapes; and applying the specified modification(s) to the plurality of basic shapes to generate a plurality of modified basic shapes for at least one model.
    Type: Application
    Filed: October 25, 2019
    Publication date: December 16, 2021
    Inventors: Andrew Mark Sagar, Tim Szu-Hsien Wu, Werner Ollewagen, Xiani Tan
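Applying one specified modification across a model's whole set of basic shapes can be sketched as below; the shape names, vertex data, and the widening modification are all hypothetical.

```python
import numpy as np

def modify_basic_shapes(basic_shapes, modification):
    """Apply one modification function to every basic shape of a model."""
    return {name: modification(verts) for name, verts in basic_shapes.items()}

def widen(verts, factor=1.2):
    """Example modification variable: scale x coordinates uniformly."""
    out = verts.copy()
    out[:, 0] *= factor
    return out

shapes = {"neutral": np.array([[1.0, 0.0, 0.0]]),
          "smile": np.array([[2.0, 1.0, 0.0]])}
modified = modify_basic_shapes(shapes, widen)
```

Because the modification is applied to every basic shape, the entire expression set of the model inherits the change consistently.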
  • Patent number: 8279228
    Abstract: A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation.
    Type: Grant
    Filed: January 6, 2011
    Date of Patent: October 2, 2012
    Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
    Inventors: Parag Havaldar, Mark Sagar, Josh Ochoa
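Determining weights whose combination of action units approximates a captured pose is naturally a least-squares problem. The sketch below treats each calibrated action unit as a displacement basis vector over face markers; the specific numbers are illustrative.

```python
import numpy as np

def solve_action_unit_weights(action_units, pose):
    """Least-squares weights so the weighted combination of action units
    approximates the captured pose.

    action_units: (n_units, n_dims); pose: (n_dims,) -> weights (n_units,)
    """
    weights, *_ = np.linalg.lstsq(action_units.T, pose, rcond=None)
    return weights

units = np.array([[1.0, 0.0, 0.0, 0.0],     # e.g. brow raise
                  [0.0, 1.0, 1.0, 0.0]])    # e.g. lip corner pull
pose = np.array([0.5, 2.0, 2.0, 0.0])
w = solve_action_unit_weights(units, pose)
reconstructed = units.T @ w                  # the weighted activation
```

Recalibration, per the abstract, would adjust the unit vectors themselves after a user tweaks the weighted activation.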
  • Publication number: 20110175921
    Abstract: A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation.
    Type: Application
    Filed: January 6, 2011
    Publication date: July 21, 2011
    Applicants: SONY CORPORATION, SONY PICTURES ENTERTAINMENT INC.
    Inventors: Parag Havaldar, Mark Sagar, Josh Ochoa
  • Patent number: 7554549
    Abstract: A motion tracking system enables faithful capture of subtle facial and eye motion using a surface electromyography (EMG) detection method to detect muscle movements and an electrooculogram (EOG) detection method to detect eye movements. An embodiment of the motion tracking animation system comprises a plurality of pairs of EOG electrodes adapted to be affixed to the skin surface of the performer at locations adjacent to the performer's eyes. The EOG data comprises electrical signals corresponding to eye movements of a performer during a performance. Programming instructions further provide processing of the EOG data and mapping of processed EOG data onto an animated character. As a result, the animated character will exhibit the same muscle and eye movements as the performer.
    Type: Grant
    Filed: November 8, 2004
    Date of Patent: June 30, 2009
    Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
    Inventors: Mark Sagar, Remington Scott
  • Publication number: 20060071934
    Abstract: A motion tracking system enables faithful capture of subtle facial and eye motion using a surface electromyography (EMG) detection method to detect muscle movements and an electrooculogram (EOG) detection method to detect eye movements. Signals corresponding to the detected muscle and eye movements are used to control an animated character to exhibit the same movements performed by a performer. An embodiment of the motion tracking animation system comprises a plurality of pairs of EMG electrodes adapted to be affixed to a skin surface of a performer at plural locations corresponding to respective muscles, and a processor operatively coupled to the plurality of pairs of EMG electrodes. The processor includes programming instructions to perform the functions of acquiring EMG data from the plurality of pairs of EMG electrodes. The EMG data comprises electrical signals corresponding to muscle movements of the performer during a performance.
    Type: Application
    Filed: November 8, 2004
    Publication date: April 6, 2006
    Inventors: Mark Sagar, Remington Scott
  • Patent number: 6967658
    Abstract: A method and apparatus is disclosed in which one or more standard faces are transformed into a target face so as to allow expressions corresponding to the standard face(s) to be used as animation vectors by the target face. In particular, a non-linear morphing transformation function is determined between the standard face(s) and the target face. The target face animation vectors are a function of the morphing transformation function and the animation vectors of the standard face(s).
    Type: Grant
    Filed: June 22, 2001
    Date of Patent: November 22, 2005
    Assignee: Auckland UniServices Limited
    Inventors: Peter J. Hunter, Poul F. Nielsen, David Bullivant, Mark Sagar, Paul Charette, Serge LaFontaine
  • Patent number: 6486881
    Abstract: An image processing system in which the vertices of an object contained within an image are analyzed using a singular value decomposition (SVD) method is disclosed. The use of the SVD allows the original vertices data to be reduced through filtering or truncating the singular values associated with the image. In addition, the reduced vertices data and the associated SVD matrices allow for efficient streaming of video data.
    Type: Grant
    Filed: June 15, 2001
    Date of Patent: November 26, 2002
    Assignees: Lifef/x Networks, Inc., Auckland UniServices Limited
    Inventors: Peter J. Hunter, Poul F. Nielsen, David Bullivant, Mark Sagar, Paul Charette, Serge LaFontaine
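The SVD reduction described above can be sketched in a few lines: arrange the vertex data as a matrix, truncate the small singular values, and keep only the reduced factors. The data layout below (rows as one axis of the animation, columns as the other) is an assumption for the sketch.

```python
import numpy as np

def truncate_svd(vertex_data, rank):
    """Keep only the `rank` largest singular values of the vertex matrix."""
    U, s, Vt = np.linalg.svd(vertex_data, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]

def reconstruct(U, s, Vt):
    """Rebuild the (approximate) vertex matrix from the reduced factors."""
    return (U * s) @ Vt

# A synthetic rank-2 vertex matrix is recovered exactly at rank 2;
# real data would be approximated, trading fidelity for bandwidth.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
U, s, Vt = truncate_svd(low_rank, rank=2)
approx = reconstruct(U, s, Vt)
```

Streaming the small factors `U`, `s`, `Vt` instead of the full matrix is what enables the efficient video-data streaming the abstract mentions.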
  • Publication number: 20020041285
    Abstract: A method and apparatus is disclosed in which one or more standard faces are transformed into a target face so as to allow expressions corresponding to the standard face(s) to be used as animation vectors by the target face. In particular, a non-linear morphing transformation function is determined between the standard face(s) and the target face. The target face animation vectors are a function of the morphing transformation function and the animation vectors of the standard face(s).
    Type: Application
    Filed: June 22, 2001
    Publication date: April 11, 2002
    Inventors: Peter J. Hunter, Poul F. Nielsen, David Bullivant, Mark Sagar, Paul Charette, Serge LaFontaine
  • Publication number: 20020039454
    Abstract: An image processing system in which the vertices of an object contained within an image are analyzed using a singular value decomposition (SVD) method is disclosed. The use of the SVD allows the original vertices data to be reduced through filtering or truncating the singular values associated with the image. In addition, the reduced vertices data and the associated SVD matrices allow for efficient streaming of video data.
    Type: Application
    Filed: June 15, 2001
    Publication date: April 4, 2002
    Applicants: LifeF/X Networks, Inc., Auckland UniServices Limited
    Inventors: Peter J. Hunter, Poul F. Nielsen, David Bullivant, Mark Sagar, Paul Charette, Serge LaFontaine
  • Patent number: 6064390
    Abstract: An apparatus and method for representing expression in a tissue-like system, that may include a human face, where the system is particularized to a specified individual. A graphical representation generator implemented in a computer determines a representation, in terms of a finite-element model, of the surface of the tissue of the system, providing a graphic output defining the surface in world coordinates. An expressive detail generator, including a wrinkle generator, modifies the surface determined by the graphical representation generator before the surface has been mapped into world coordinates in accordance with three-dimensional features of the tissue-like system of a particular subject.
    Type: Grant
    Filed: July 25, 1997
    Date of Patent: May 16, 2000
    Assignee: Lifef/x Networks, Inc.
    Inventors: Mark A. Sagar, Ivan Gulas