SYSTEM, METHOD, AND RECORDING MEDIUM FOR CONTROLLING AN OBJECT IN VIRTUAL WORLD
A system and method of controlling characteristics of an avatar in a virtual world may generate avatar control information based on avatar information of the virtual world and a sensor control command expressing a user intent using a sensor-based input device.
This application claims the benefit of Korean Patent Application No. 10-2010-0041736, filed on May 4, 2010 in the Korean Intellectual Property Office, Korean Patent Application No. 10-2009-0101471, filed on Oct. 23, 2009 in the Korean Intellectual Property Office, and Korean Patent Application No. 10-2009-0040476, filed on May 8, 2009 in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entirety by reference.
BACKGROUND
1. Field
One or more embodiments relate to a method of controlling a figure of a user of a real world to be adapted to characteristics of an avatar of a virtual world.
2. Description of the Related Art
Recently, interest in expressing users of the real world as avatars of a virtual world has been increasing greatly. In particular, methods of adapting practical characteristics of the users, such as appearances, motions, and the like, to the avatars of the virtual world so that the avatars may be realistically shown have been actively studied.
Accordingly, there is a desire for a system and method of controlling characteristics of an avatar of a virtual world.
SUMMARY
According to an aspect of one or more embodiments, there may be provided a system of controlling characteristics of an avatar, the system including: a sensor control command receiver to receive a sensor control command indicating a user intent via a sensor-based input device; and an avatar control information generator to generate avatar control information based on the sensor control command.
The avatar information may include, as metadata, an identifier (ID) for identifying the avatar and an attribute of a family indicating morphological information of the avatar.
The avatar information may include, as metadata, a free direction (FreeDirection) of a move element for defining various behaviors of an avatar animation.
The avatar information may include, as metadata for an avatar appearance, an element of a physical condition (PhysicalCondition) for indicating various expressions of behaviors of the avatar, and may include, as sub-elements of the PhysicalCondition, a body flexibility (BodyFlexibility) and a body strength (BodyStrength).
The avatar information may include metadata defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of the avatar.
According to another aspect of one or more embodiments, there may be provided a method of controlling characteristics of an avatar, the method including: receiving a sensor control command indicating a user intent via a sensor-based input device; and generating avatar control information based on the sensor control command.
According to still another aspect of one or more embodiments, there may be provided a non-transitory computer-readable storage medium storing a metadata structure, wherein an avatar face feature point and a body feature point for controlling a facial expression and a motion of an avatar are defined.
According to yet another aspect of one or more embodiments, there may be provided an imaging apparatus including a storage unit to store an animation clip, animation control information, and control control information, the animation control information including information indicating which part of an avatar the animation clip corresponds to and a priority, and the control control information including information indicating which part of the avatar motion data corresponds to and a priority, the motion data being generated by processing a value received from a motion sensor; and a processing unit to compare a priority of animation control information corresponding to a first part of the avatar with a priority of control control information corresponding to the first part of the avatar, and to determine data to be applicable to the first part of the avatar.
According to a further another aspect of one or more embodiments, there may be provided a non-transitory computer-readable storage medium storing a program implemented in a computer system comprising a processor and a memory, the non-transitory computer-readable storage medium including a first set of instructions to store animation control information and control control information, and a second set of instructions to associate an animation clip and motion data generated from a value received from a motion sensor, based on the animation control information corresponding to each part of an avatar and the control control information. The animation control information may include information associated with a corresponding animation clip, and an identifier indicating the corresponding animation clip corresponds to one of a facial expression, a head, an upper body, a middle body, and a lower body of an avatar, and the control control information may include an identifier indicating real-time motion data corresponds to one of the facial expression, the head, the upper body, the middle body, and the lower body of an avatar.
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. Embodiments are described below to explain the present disclosure by referring to the figures.
1. Introduction:
The importance of a virtual environment (VE) in multimedia industries may be gradually increasing. A distinguishing feature of a VE with respect to other multimedia applications may be a visual expression of a user within the VE. The visual expression may be provided in a form of an avatar, that is, a graphic object that serves the following purposes:
- makes the presence of a real-world user visible in the VE,
- characterizes the user within the VE,
- interacts with the VE.
In operation 401, avatar information of the adaptation RV engine may be set. In operation 402, a sensor input may be monitored. When a sensor control command occurs in operation 403, a command of the adaptation RV engine may be recognized in operation 404. In operation 405, avatar control information may be generated. In operation 406, an avatar manipulation may be output.
In general, creating an avatar may be a time consuming task. Even though some elements of the avatar may be associated with the VE (for example, the avatar wearing a medieval suit in a contemporary style VE being inappropriate), there may be a real desire to create the avatar once and import and use the created avatar in other VEs. In addition, the avatar may be controlled from external applications. For example, emotions an avatar exposes in the VE may be obtained by processing the associated user's physiological sensors.
Based on two main requirements below, an eXtensible Markup Language (XML) schema used for expressing the avatar may be proposed:
- Easily create an importer and an exporter from implementations of a variety of VEs,
- Easily control the avatar in the VE.
The proposed schema may deal with metadata and may not include representation of a texture, geometry, or an animation.
The schema may be obtained based on a study of other virtual human related markup languages, together with popular games, tools, and schemas from existing virtual worlds and content authoring packages.
As basic attributes of the avatar, an identifier (ID) for identifying each avatar in a virtual reality (VR) space and a family signifying a type of each avatar may be given. The family may provide information regarding whether the avatar has a form of a human being, a robot, or a specific animal. In this manner, a user may distinguish and manipulate the user's own avatar from an avatar of another user using an ID in the VR space where a plurality of avatars are present, and the family attribute may be applied to various avatars. As optional attributes of the avatar, a name, a gender, and the like may be included.
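For illustration only, such basic attributes might be expressed as in the following sketch; the element and attribute names, as well as the values, are assumptions introduced here for explanation, since the normative names are given by the avatar schema described below.

    <!-- illustrative sketch of basic avatar attributes; names and values are hypothetical -->
    <Avatar id="AVATAR_0001" family="Human" name="MyAvatar" gender="female">
      <!-- appearance, animation, communication skills, personality, and
           control feature elements are described in the following sections -->
    </Avatar>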
Elements of the avatar may be configured as data types below:
- Appearance: may include a high-level description of the appearance, and may refer to media including accurate geometry and texture. Here, ‘PhysicalCondition’ is additionally proposed. The ‘PhysicalCondition’ may include ‘BodyFlexibility’ and ‘BodyStrength’ as its subelements. When defining external characteristics of each avatar, the body flexibility or the body strength may provide information associated with a degree to which the avatar expresses a motion. For example, when an avatar having a high flexibility and an avatar having a low flexibility perform the same dance, for example, a ballet, the motions of the two avatars may vary depending on the degree of flexibility. As for the body strength, an avatar having a relatively great strength may be expressed as performing the same motion more actively. To obtain these effects, the ‘PhysicalCondition’ may be provided as metadata of a subelement of the avatar appearance.
- Animation: may include descriptions of a set of animation sequences that the avatar is able to perform, and may refer to media including accurate animation parameters such as geometric transformations. A free direction (FreeDirection) of a move element may be added to the existing metadata of the avatar animation. An existing manipulation scheme to move the avatar is limited to up, down, left, and right. In this regard, an item that may be readily manipulated in any direction may be added to provide diverse expression information for the moving animation of the avatar.
- Communication skills: may include a set of descriptors providing information on the modalities in which the avatar is able to communicate.
- Personality: may include a set of descriptors defining a personality of the avatar.
- Control features: may include a set of facial expressions of the avatar and motion points. Thus, a user may control facial expressions and full-body motions that are not listed in the descriptors.
Specifically, the appearance may signify a feature of the avatar, and various appearances of the avatar may be defined using appearance information concerning a size, a position, a shape, and the like with respect to eyes, a nose, lips, ears, hair, eyebrows, nails, and the like, of the avatar. The animation may be classified into body gestures of the avatar, such as greeting, dancing, walking, fighting, celebrating, and the like (for example, an angry gesture, an agreement gesture, a tired gesture, etc.), and facial expressions of the avatar, such as smiling, crying, being surprised, and the like. The communication skills may signify the communication capability of the avatar. For example, the communication skills may include communication capability information indicating that the avatar speaks Korean excellently as a native language, speaks English fluently, and can speak a simple greeting in French. The personality may include openness, agreeableness, neuroticism, extraversion, conscientiousness, and the like.
The facial expression and the full body motion among the characteristics of the avatar may be controlled as follows.
When two avatars having different physical conditions are compared, their states while or after conducting the same task may differ from each other.
A body shape, that is, a skeleton, may be configured in a shape of an actual human being based on the bones of a human being existing in the real world. For example, the body shape may include left and right clavicles, left and right scapulae, left and right humeri, left and right radiuses, left and right wrists, left and right hands, left and right thumbs, and the like. Also, the body control expressing movements of the skeleton may reflect movements of the respective bones to express movements of the body, and the movements of the respective bones may be controlled using a joint point of each bone. Since the respective bones are connected with each other, neighboring bones may share a joint point. Thus, with the pelvis as a reference point, the end points of the respective bones that are farther away from the pelvis may be defined as control points of the respective bones, and non-predefined motions of the avatar may be diversely expressed by moving the control points. For example, motions of the humerus may be controlled based on information associated with a three-dimensional (3D) position, a direction, and a length of a joint point with respect to an elbow. Fingers may also be controlled based on information associated with a 3D position, a direction, and a length of an end point of each joint. Movements of each joint may be controlled based on only the position, or based on the direction and the distance.
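For illustration only, a control point of a single bone might carry such information as in the following sketch; the element names, attribute names, and values are hypothetical and are not part of the defined metadata.

    <!-- hypothetical control point for the left humerus, controlled via the elbow joint point -->
    <ControlPoint bone="LeftHumerus">
      <!-- 3D position of the end (joint) point in scene coordinates -->
      <Position x="12.5" y="140.0" z="3.0"/>
      <!-- direction of the bone -->
      <Direction x="0.0" y="-1.0" z="0.1"/>
      <!-- length of the bone -->
      <Length value="30.5"/>
    </ControlPoint>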
In the case of the avatar body control using the above, motions of users of the real world may be recognized using a camera or a motion sensor sensing motions to adapt the recognized motions onto motions of an avatar of the virtual world. The avatar body control may be performed through a process similar to the avatar face control described above.
As described above, according to an embodiment, by means of an avatar feature control signifying characteristics of an avatar, various facial expressions, motions, personalities, and the like of a user may be naturally expressed. For this purpose, a user of a real world may be sensed using a sensing device, for example, a camera, a motion sensor, an infrared light, and the like, to reproduce characteristics of the user in the avatar as they are. Accordingly, various figures of users may be naturally adapted onto the avatar of the virtual world.
An active avatar control may be a general parametric model used to track, recognize, and synthesize common features in a data sequence from the sensing device of the real world. For example, a captured full body motion of the user may be transmitted to a system to control a motion of the avatar. Body motion sensing may use a set of wearable or attachable 3D position and posture sensing devices. Thus, a concept of an avatar body control may be added. The concept may signify enabling a full control of the avatar by employing all sensed motions of the user.
The control is not limited to the avatar and thus may be applicable to all the objects existing in the virtual environment. For this, according to an embodiment, an object controlling system may include a control command receiver to receive a control command with respect to an object of a virtual environment, and an object controller to control the object based on the received control command and object information of the object. The object information may include common characteristics of a virtual world object as metadata for the virtual world object, include avatar information as metadata for an avatar, and virtual object information as metadata for a virtual object.
The object information may include common characteristics of a virtual world object. The common characteristics may include, as metadata, at least one element of an Identification for identifying the virtual world object, a virtual world object sound (VWOSound), a virtual world object scent (VWOScent), a virtual world object control (VWOControl), a virtual world object event (VWOEvent), a virtual world object behavior model (VWOBehaviorModel), and virtual world object haptic properties (VWOHapticProperties).
The Identification may include, as an element, at least one of a user identifier (UserID) for identifying a user associated with the virtual world object, an Ownership of the virtual world object, Rights, and Credits, and may include, as an attribute, at least one of a name of the virtual world object and a family with another virtual world object.
The VWOSound may include, as an element, a sound resource uniform resource locator (URL) including at least one link to a sound file, and may include, as an attribute, at least one of a sound identifier (SoundID) that is a unique identifier of an object sound, an intensity indicating a sound strength, a duration indicating a length of time where the sound lasts, a loop indicating a playing option, and a sound name.
The VWOScent may include, as an element, a scent resource URL including at least one link to a scent file, and may include, as an attribute, at least one of a scent identifier (ScentID) that is a unique identifier of an object scent, an intensity indicating a scent strength, a duration indicating a length of time where the scent lasts, a loop indicating a playing option, and a scent name.
The VWOControl may include, as an element, a motion feature control (MotionFeatureControl) that is a set of elements controlling a position, an orientation, and a scale of the virtual world object, and may include, as an attribute, a control identifier (ControlID) that is a unique identifier of control. In this instance, the MotionFeatureControl may include, as an element, at least one of a position of an object in a scene with a three-dimensional (3D) floating point vector, an orientation of the object in a scene with the 3D floating point vector as an Euler angle, and a scale of the object in a scene expressed as the 3D floating point vector.
The VWOEvent may include, as an element, at least one of a Mouse that is a set of mouse event elements, a Keyboard that is a set of keyboard event elements, and a user defined input (UserDefinedInput), and may include, as an attribute, an event identifier (EventID) that is a unique identifier of an event. The Mouse may include, as an element, at least one of a click, double click (Double_Click), a left button down (LeftBttn_down) that is an event taking place at the moment of holding down a left button of a mouse, a left button up (LeftBttn_up) that is an event taking place at the moment of releasing the left button of the mouse, a right button down (RightBttn_down) that is an event taking place at the moment of pushing a right button of the mouse, a right button up (RightBttn_up) that is an event taking place at the moment of releasing the right button of the mouse, and a move that is an event taking place while changing a position of the mouse. Also, the Keyboard may include, as an element, at least one of a key down (Key_Down) that is an event taking place at the moment of holding down a keyboard button and a key up (Key_Up) that is an event taking place at the moment of releasing the keyboard button.
The VWOBehaviorModel may include, as an element, at least one of a behavior input (BehaviorInput) that is an input event for generating an object behavior and a behavior output (BehaviorOutput) that is an object behavior output according to the input event. In this instance, the BehaviorInput may include an EventID as an attribute, and the BehaviorOutput may include, as an attribute, at least one of a SoundID, a ScentID, and an animation identifier (AnimationID).
The VWOHapticProperties may include, as an attribute, at least one of a material property (MaterialProperty) that contains parameters characterizing haptic properties, a dynamic force effect (DynamicForceEffect) that contains parameters characterizing force effects, and a tactile property (TactileProperty) that contains parameters characterizing tactile properties. In this instance, the MaterialProperty may include, as an attribute, at least one of a Stiffness of the virtual world object, a static friction (StaticFriction) of the virtual world object, a dynamic friction (DynamicFriction) of the virtual world object, a Damping of the virtual world object, a Texture containing a link to a haptic texture file, and a mass of the virtual world object. Also, the DynamicForceEffect may include, as an attribute, at least one of a force field (ForceField) containing a link to a force field vector file and a movement trajectory (MovementTrajectory) containing a link to a force trajectory file. Also, the TactileProperty may include, as an attribute, at least one of a Temperature of the virtual world object, a Vibration of the virtual world object, a Current of the virtual world object, and tactile patterns (TactilePatterns) containing a link to a tactile pattern file.
The object information may include avatar information associated with an avatar of a virtual world, and the avatar information may include, as the metadata, at least one element of an avatar appearance (AvatarAppearance), an avatar animation (AvatarAnimation), avatar communication skills (AvatarCommunicationSkills), an avatar personality (AvatarPersonality), avatar control features (AvatarControlFeatures), and avatar common characteristics (AvatarCC), and may include, as an attribute, a Gender of the avatar.
The AvatarAppearance may include, as an element, at least one of a Body, a Head, Eyes, Ears, a Nose, a mouth lip (MouthLip), a Skin, a facial, a Nail, a body look (BodyLook), a Hair, eye brows (EyeBrows), a facial hair (FacialHair), facial calibration points (FacialCalibrationPoints), a physical condition (PhysicalCondition), Clothes, Shoes, Accessories, and an appearance resource (AppearanceResource).
The AvatarAnimation may include at least one element of an Idle, a Greeting, a Dance, a Walk, a Moves, a Fighting, a Hearing, a Smoke, Congratulations, common actions (Common_Actions), specific actions (Specific_Actions), a facial expression (Facial_Expression), a body expression (Body_Expression), and an animation resource (AnimationResource).
The AvatarCommunicationSkills may include, as an element, at least one of an input verbal communication (InputVerbalCommunication), an input nonverbal communication (InputNonVerbalCommunication), an output verbal communication (OutputVerbalCommunication), and an output nonverbal communication (OutputNonVerbalCommunication), and may include, as an attribute, at least one of a Name and a default language (DefaultLanguage). In this instance, a verbal communication including the InputVerbalCommunication and OutputVerbalCommunication may include a language as the element, and may include, as the attribute, at least one of a voice, a text, and the language. The language may include, as an attribute, at least one of a name that is a character string indicating a name of the language and a preference for using the language in the verbal communication. Also, a communication preference including the preference may include a preference level of a communication of the avatar. The language may be set with a communication preference level (CommunicationPreferenceLevel) including a preference level for each language that the avatar is able to speak or understand. Also, a nonverbal communication including the InputNonVerbalCommunication and the OutputNonVerbalCommunication may include, as an element, at least one of a sign language (SignLanguage) and a cued speech communication (CuedSpeechCommunication), and may include, as an attribute, a complementary gesture (ComplementaryGesture). In this instance, the SignLanguage may include a name of a language as an attribute.
The AvatarPersonality may include, as an element, at least one of an openness, a conscientiousness, an extraversion, an agreeableness, and a neuroticism, and may selectively include a name of a personality.
The AvatarControlFeatures may include, as elements, control body features (ControlBodyFeatures) that is a set of elements controlling moves of a body and control face features (ControlFaceFeatures) that is a set of elements controlling moves of a face, and may selectively include a name of a control configuration as an attribute.
The ControlBodyFeatures may include, as an element, at least one of head bones (headBones), upper body bones (UpperBodyBones), down body bones (DownBodyBones), and middle body bones (MiddleBodyBones). In this instance, the ControlFaceFeatures may include, as an element, at least one of a head outline (HeadOutline), a left eye outline (LeftEyeOutline), a right eye outline (RightEyeOutline), a left eye brow outline (LeftEyeBrowOutline), a right eye brow outline (RightEyeBrowOutline), a left ear outline (LeftEarOutline), a right ear outline (RightEarOutline), a nose outline (NoseOutline), a mouth lip outline (MouthLipOutline), face points (FacePoints), and miscellaneous points (MiscellaneousPoints), and may selectively include, as an attribute, a name of a face control configuration. In this instance, at least one of the elements included in the ControlFaceFeatures may include, as an element, at least one of an outline having four points (Outline4Points), an outline having five points (Outline5Points), an outline having eight points (Outline8Points), and an outline having fourteen points (Outline14Points). Also, at least one of the elements included in the ControlFaceFeatures may include a basic number of points and may selectively further include an additional point.
The object information may include information associated with a virtual object. Information associated with the virtual object may include, as metadata for expressing a virtual object of the virtual environment, at least one element of a virtual object appearance (VOAppearance), a virtual object animation (VOAnimation), and virtual object common characteristics (VOCC).
When at least one link to an appearance file exists, the VOAppearance may include, as an element, a virtual object URL (VirtualObjectURL) that is an element including the at least one link.
The VOAnimation may include, as an element, at least one of a virtual object motion (VOMotion), a virtual object deformation (VODeformation), and a virtual object additional animation (VOAdditionalAnimation), and may include, as an attribute, at least one of an animation identifier (AnimationID), a Duration that is a length of time where an animation lasts, and a Loop that is a playing option.
Metadata that may be included in the object information will be further described later.
When the object is an avatar, the object controller may control the avatar based on the received control command and metadata defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of the avatar. When the object is an avatar of a virtual world, the control command may be generated by sensing a facial expression and a body motion of a user of a real world. The object controller may control the object to map characteristics of the user to the avatar of the virtual world according to the facial expression and the body motion.
An object controlling method according to an embodiment may include receiving a control command with respect to an object of a virtual environment, and controlling the object based on the received control command and object information of the object. The object information used in the object controlling method may be equivalent to object information used in the object controlling system. In this instance, the controlling may include controlling the avatar based on the received control command and metadata defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of an avatar when the object is the avatar. Also, when the object is an avatar of a virtual world, the control command may be generated by sensing a facial expression and a body motion of a user of a real world, and the controlling may include controlling the object to map characteristics of the user to the avatar of the virtual world according to the facial expression and the body motion.
An object controlling system according to an embodiment may include a control command generator to generate a regularized control command based on information received from a real world device, a control command transmitter to transmit the regularized control command to a virtual world server, and an object controller to control a virtual world object based on information associated with the virtual world object received from the virtual world server. In this instance, the object controlling system according to the present embodiment may perform a function of a single terminal, and an object controlling system according to another embodiment, performing a function of a virtual world server, may include an information generator to generate information associated with a corresponding virtual world object by converting a regularized control command received from a terminal according to the virtual world object, and an information transmitter to transmit information associated with the virtual world object to the terminal. The regularized control command may be generated based on information received by the terminal from a real world device.
An object controlling method according to another embodiment may include generating a regularized control command based on information received from a real world device, transmitting the regularized control command to a virtual world server, and controlling a virtual world object based on information associated with the virtual world object received from the virtual world server. In this instance, the object controlling method according to the present embodiment may be performed by a single terminal, and an object controlling method according to still another embodiment may be performed by a virtual world server. Specifically, the object controlling method performed by the virtual world server may include generating information associated with a corresponding virtual world object by converting a regularized control command received from a terminal according to the virtual world object, and transmitting information associated with the virtual world object to the terminal. The regularized control command may be generated based on information received by the terminal from a real world device.
An object controlling system according to still another embodiment may include an information transmitter to transmit, to a virtual world server, information received from a real world device, and an object controller to control a virtual world object based on information associated with the virtual world object that is received from the virtual world server according to the transmitted information. In this instance, the object controlling system according to the present embodiment may perform a function of a single terminal, and an object controlling system according to yet another embodiment, performing a function of a virtual world server, may include a control command generator to generate a regularized control command based on information received from a terminal, an information generator to generate information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object, and an information transmitter to transmit information associated with the virtual world object to the terminal. The received information may include information received by the terminal from a real world device.
An object controlling method according to yet another embodiment may include transmitting, to a virtual world server, information received from a real world device, and controlling a virtual world object based on information associated with the virtual world object that is received from the virtual world server according to the transmitted information. In this instance, the object controlling method according to the present embodiment may be performed by a single terminal, and an object controlling method according to a further another embodiment may be performed by a virtual world server. The object controlling method performed by the virtual world server may include generating a regularized control command based on information received from a terminal, generating information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object, and transmitting information associated with the virtual world object to the terminal. The received information may include information received by the terminal from a real world device.
An object controlling system according to a further another embodiment may include a control command generator to generate a regularized control command based on information received from a real world device, an information generator to generate information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object, and an object controller to control the virtual world object based on information associated with the virtual world object.
An object controlling method according to still another embodiment may include generating a regularized control command based on information received from a real world device, generating information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object, and controlling the virtual world object based on information associated with the virtual world object.
An object controlling system according to still another embodiment may include a control command generator to generate a regularized control command based on information received from a real world device, an information generator to generate information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object, an information exchanging unit to exchange information associated with the virtual world object with information associated with a virtual world object of another object controlling system, and an object controller to control the virtual world object based on information associated with the virtual world object and the exchanged information associated with the virtual world object of the other object controlling system.
An object controlling method according to still another embodiment may include generating a regularized control command based on information received from a real world device, generating information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object, exchanging information associated with the virtual world object with information associated with a virtual world object of another object controlling system, and controlling the virtual world object based on information associated with the virtual world object and the exchanged information associated with the virtual world object of the other object controlling system.
An object controlling system according to still another embodiment may include an information generator to generate information associated with a virtual world object based on information received from a real world device and virtual world information received from a virtual world server, an object controller to control the virtual world object based on information associated with the virtual world object, and a processing result transmitter to transmit, to the virtual world server, a processing result according to controlling of the virtual world object. In this instance, the object controlling system according to the present embodiment may perform a function of a single terminal, and an object controlling system according to still another embodiment, performing a function of a virtual world server, may include an information transmitter to transmit virtual world information to a terminal, and an information update unit to update the virtual world information based on a processing result received from the terminal. The processing result may include a control result of a virtual world object based on information received by the terminal from a real world device, and the virtual world information.
An object controlling method according to still another embodiment may include generating information associated with a virtual world object based on information received from a real world device and virtual world information received from a virtual world server, controlling the virtual world object based on information associated with the virtual world object, and transmitting, to the virtual world server, a processing result according to controlling of the virtual world object. In this instance, the object controlling method according to the present embodiment may be performed by a single terminal, and an object controlling method according to still another embodiment may be performed by a virtual world server. The object controlling method performed by the virtual world server may include transmitting virtual world information to a terminal, and updating the virtual world information based on a processing result received from the terminal. The processing result may include a control result of a virtual world object based on information received by the terminal from a real world device, and the virtual world information.
The object controller according to one or more embodiments may control the virtual world object by generating a control command based on information associated with the virtual world object and transmitting the generated control command to a display.
2. Virtual World Object Metadata
2.1 Types of Metadata
A distinguishing feature of Virtual Environments (VEs) with respect to other multimedia applications may lie in the representation of virtual world objects inside the environment.
The “virtual world object” may be classified into two types: avatars and virtual objects. An avatar may be used as a (visual) representation of the user inside the environment. These virtual world objects serve different purposes:
- characterize various kinds of objects within the VE,
- provide an interaction with the VE.
In general, creating an object is a time consuming task. Even though some components of the object may be related to the VE (for example, the avatar wearing a medieval suit in a contemporary style VE may be inappropriate), there may be a real need to be able to create the object once and import/use it in different VEs. In addition, the object may be controlled from external applications. For example, the emotions one avatar exposes in the VE can be obtained by processing the associated user's physiological sensors.
The current standard proposes an XML Schema, called Virtual World Object Characteristics XSD, for describing an object by considering three main requirements:
- it should be possible to easily create importers and exporters from various VE implementations,
- it should be easy to control an object within a VE,
- it should be possible to modify a local template of the object by using data contained in Virtual World Object Characteristics file.
The proposed schema may deal only with metadata and may not include representation of a geometry, a sound, a scent, an animation, or a texture. To represent the latter, references to media resources are used.
There are common types of attributes and characteristics of the virtual world objects which are shared by both avatars and the virtual objects.
The common associated attributes and characteristics are composed of the following types of data:
- Identity: contains identification descriptors.
- Sound: contains sound resources and the related properties.
- Scent: contains scent resources and the related properties.
- Control: contains a set of descriptors for controlling motion features of an object such as translation, orientation and scaling.
- Event: contains a set of descriptors providing input events from devices such as a mouse and a keyboard.
- Behaviour Model: contains a set of descriptors defining the behavior information of the object according to input events.
- Haptic Properties: contains a set of high level descriptors of the haptic properties.
The common characteristics and attributes are inherited by both the avatar metadata and the virtual object metadata to extend the specific aspects of each type of metadata.
2.2 Virtual World Object Common Characteristics
2.2.1 CommonCharacteristicsType
2.2.1.1 Syntax
2.2.1.2 Semantics
Table 2 below shows semantics of the CommonCharacteristicsType.
2.2.2 IdentificationType
2.2.2.1 Syntax
2.2.2.2 Semantics
Table 4 shows semantics of the IdentificationType.
2.2.3 VWO(Virtual World Object)SoundType
2.2.3.1 Syntax
2.2.3.2 Semantics
Table 6 shows semantics of the VWOSoundType.
2.2.3.3 Examples:
Table 7 shows the description of the sound information associated with an object with the following semantics. The sound resource whose name is “BigAlarm” is saved at “http://sounddb.com/alarmsound_0001.wav”, and the value of its identifier, SoundID, is “3.” The length of the sound is 30 seconds. The sound shall be played repeatedly at an intensity (volume) of 50%.
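Since Table 7 itself is not reproduced here, the following sketch illustrates what a description with these semantics might look like; the element and attribute names, their casing, and the loop value are assumptions based on the semantics above rather than the normative syntax.

    <!-- illustrative sketch of a VWOSound description -->
    <VWOSound soundID="3" name="BigAlarm" intensity="50" duration="30" loop="true">
      <!-- link to the sound resource file -->
      <SoundResourceURL>http://sounddb.com/alarmsound_0001.wav</SoundResourceURL>
    </VWOSound>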
2.2.4 VWOScentType
2.2.4.1 Syntax
2.2.4.2 Semantics
Table 9 shows semantics of the VWOScentType.
2.2.4.3 Examples
Table 10 shows the description of the scent information associated with the object. The scent resource whose name is “rose” is saved at “http://scentdb.com/flower_0001.sct”, and the value of its identifier, ScentID, is “5.” The intensity shall be 20%, with a duration of 20 seconds.
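For illustration, a description with these semantics might look as follows; the element and attribute names are assumptions derived from the VWOScentType semantics above, not the normative syntax.

    <!-- illustrative sketch of a VWOScent description -->
    <VWOScent scentID="5" name="rose" intensity="20" duration="20">
      <!-- link to the scent resource file -->
      <ScentResourceURL>http://scentdb.com/flower_0001.sct</ScentResourceURL>
    </VWOScent>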
2.2.5 VWOControlType
2.2.5.1 Syntax
2.2.5.2 Semantics
Table 12 shows semantics of the VWOControlType.
Note: Levels of control: the entire object, or a part of the object.
Note: If two controllers are associated with the same object but with different parts of the object, and if these parts form a hierarchical structure (a parent-child relationship), then the relative motion of the children should be performed. If the controllers are associated with the same part, the controller performs the scaling or similar effects for the entire object.
2.2.5.3 Examples
Table 13 shows the description of object control information with the following semantics. The motion feature control of changing a position is given, and the value of its identifier, ControlID, is “7.” The object shall be positioned at DistanceX=“122.0”, DistanceY=“150.0”, and DistanceZ=“40.0”.
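For illustration, such a control description might be written as follows; the element and attribute names are assumptions based on the VWOControlType semantics, not the normative syntax.

    <!-- illustrative sketch of a VWOControl description -->
    <VWOControl controlID="7">
      <MotionFeatureControl>
        <!-- position of the object in the scene as a 3D floating point vector -->
        <Position DistanceX="122.0" DistanceY="150.0" DistanceZ="40.0"/>
      </MotionFeatureControl>
    </VWOControl>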
2.2.6 VWOEventType
2.2.6.1 Syntax
2.2.6.2 Semantics
Table 15 shows semantics of the VWOEventType.
2.2.6.3 Examples
Table 16 shows the description of an object event with the following semantics. The mouse as an input device produces a new input value, “click.” For identifying this input, the value of EventID is “3.”
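For illustration, an event description with these semantics might look as follows; the element and attribute names are assumptions based on the VWOEventType semantics, not the normative syntax.

    <!-- illustrative sketch of a VWOEvent description -->
    <VWOEvent eventID="3">
      <Mouse>
        <!-- the mouse input device produced a "click" event -->
        <Click/>
      </Mouse>
    </VWOEvent>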
2.2.7 VWOBehaviourModelType
2.2.7.1 Syntax
2.2.7.2 Semantics
Table 18 shows semantics of the VWOBehaviourModelType.
2.2.7.3 Examples
Table 19 shows the description of a VWO behavior model with the following semantics. If EventID=“1” is given as the BehaviorInput, then the BehaviorOutput associated with SoundID=“5” and AnimationID=“4” shall be executed.
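For illustration, a behavior model with these semantics might be described as follows; the element and attribute names are assumptions based on the VWOBehaviourModelType semantics, not the normative syntax.

    <!-- illustrative sketch of a VWOBehaviorModel description -->
    <VWOBehaviorModel>
      <!-- when the input event with EventID "1" occurs ... -->
      <BehaviorInput eventID="1"/>
      <!-- ... the outputs identified by SoundID "5" and AnimationID "4" are executed -->
      <BehaviorOutput soundID="5" animationID="4"/>
    </VWOBehaviorModel>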
2.2.8 VWOHapticPropertyType
2.2.8.1 Syntax
2.2.8.2 Semantics
Table 21 shows semantics of the VWOHapticPropertyType.
2.2.8.3 MaterialPropertyType
2.2.8.3.1 Syntax
2.2.8.3.2 Semantics
Table 23 shows semantics of the MaterialPropertyType.
2.2.8.3.3 Examples
Table 24 shows the material properties of a virtual world object which has a stiffness of 0.5 N/mm, a static friction coefficient of 0.3, a kinetic friction coefficient of 0.02, a damping coefficient of 0.001, and a mass of 0.7, and whose surface haptic texture is loaded from the given URL.
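For illustration, such material properties might be described as follows; the attribute names are assumptions based on the MaterialPropertyType semantics, and the haptic texture URL is a hypothetical placeholder because the actual URL is not given in the text.

    <!-- illustrative sketch of a MaterialProperty description; texture URL is hypothetical -->
    <MaterialProperty stiffness="0.5" staticFriction="0.3" dynamicFriction="0.02"
                      damping="0.001" mass="0.7"
                      texture="http://hapticdb.com/surface_texture_0001.jpg"/>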
2.2.8.4 DynamicForceEffectType
2.2.8.4.1 Syntax
2.2.8.4.2 Semantics
Table 26 shows semantics of the DynamicForceEffectType.
2.2.8.4.3 Examples:
Table 27 shows the dynamic force effect of an avatar. The force field characteristic of the avatar is determined by the designed force field file at the given URL.
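For illustration, such a dynamic force effect might be described as follows; the attribute name follows the DynamicForceEffectType semantics, and the URL is a hypothetical placeholder because the actual URL is not given in the text.

    <!-- illustrative sketch of a DynamicForceEffect description; force field URL is hypothetical -->
    <DynamicForceEffect forceField="http://hapticdb.com/force_field_0001.dat"/>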
2.2.8.5 TactileType
2.2.8.5.1 Syntax
2.2.8.5.2 Semantics
Table 29 shows semantics of the TactileType.
2.2.8.5.3 Examples
Table 30 shows the tactile properties of an avatar which has a temperature of 15 degrees and a tactile effect based on the tactile information from the following URL (http://www.haptic.kr/avatar/tactile1.avi).
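For illustration, such tactile properties might be described as follows; the attribute names are assumptions based on the TactileType semantics rather than the normative syntax.

    <!-- illustrative sketch of a TactileProperty description -->
    <TactileProperty temperature="15"
                     tactilePatterns="http://www.haptic.kr/avatar/tactile1.avi"/>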
3. Avatar Metadata
3.1 Type of Avatar Metadata
Avatar metadata as a (visual) representation of the user inside the environment serves the following purposes:
- makes visible the presence of a real user in the VE,
- characterizes the user within the VE,
- provides interaction with the VE.
The “Avatar” element may include the following types of data in addition to the common characteristics type of virtual world object:
- Avatar Appearance: contains the high-level description of the appearance and may refer to media containing the exact geometry and texture,
- Avatar Animation: contains the description of a set of animation sequences that the avatar is able to perform and may refer to several media containing the exact (geometric transformation) animation parameters,
- Avatar Communication Skills: contains a set of descriptors providing information on the different modalities with which an avatar is able to communicate,
- Avatar Personality: contains a set of descriptors defining the personality of the avatar,
- Avatar Control Features: contains a set of descriptors defining possible place-holders for sensors on body skeleton and face feature points.
3.2 Avatar Characteristics XSD
3.2.1 AvatarType
3.2.1.1 Syntax
3.2.1.2 Semantics
Table 32 shows semantics of the AvatarType.
3.2.2 AvatarAppearanceType
3.2.2.1. Syntax
3.2.2.2. Semantics
Table 34 shows semantics of the AvatarAppearanceType.
3.2.2.3 PhysicalConditionType
3.2.2.3.1. Syntax
3.2.2.3.2. Semantics
Table 36 shows semantics of the PhysicalConditionType.
3.2.3 AvatarAnimationType
3.2.3.1 Syntax
3.2.3.2 Semantics
Table 38 shows semantics of the AvatarAnimationType.
3.2.3.3 Examples
Table 39 shows the description of avatar animation information with the following semantics. Among all animations, a default idle, a saluting greeting, a bow, a dance, and a salsa dance are given. The animation resources are saved at “http://avatarAnimationdb.com/default_idle.bvh”, “http://avatarAnimationdb.com/salutes.bvh”, “http://avatarAnimationdb.com/bowing.bvh”, “http://avatarAnimationdb.com/dancing.bvh”, and “http://avatarAnimationdb.com/salsa.bvh”.
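Since Table 39 is not reproduced here, the following sketch shows how such animation information might be described; the assignment of each resource to a particular element and the element names themselves are assumptions based on the AvatarAnimationType semantics.

    <!-- illustrative sketch of an AvatarAnimation description -->
    <AvatarAnimation>
      <Idle>
        <AnimationResource>http://avatarAnimationdb.com/default_idle.bvh</AnimationResource>
      </Idle>
      <Greeting>
        <AnimationResource>http://avatarAnimationdb.com/salutes.bvh</AnimationResource>
      </Greeting>
      <Greeting>
        <AnimationResource>http://avatarAnimationdb.com/bowing.bvh</AnimationResource>
      </Greeting>
      <Dance>
        <AnimationResource>http://avatarAnimationdb.com/dancing.bvh</AnimationResource>
      </Dance>
      <Dance>
        <AnimationResource>http://avatarAnimationdb.com/salsa.bvh</AnimationResource>
      </Dance>
    </AvatarAnimation>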
3.2.4 AvatarCommunicationSkillsType
This element defines the communication skills of the avatar in relation to other avatars.
3.2.4.1 Syntax
3.2.4.2 Semantics
Table 40 describes how the virtual world and the avatars can adapt their inputs and outputs to these preferences (balancing them with their own preferences as well). All inputs and outputs will be adapted individually for each avatar.
The communication preferences are defined by means of two input and two output channels that guarantee multimodality. They are verbal and nonverbal recognition as inputs, and verbal and nonverbal performance as outputs. These channels can be specified as “enabled” or “disabled”. Setting all channels to “enabled” implies that an avatar is able to speak, to perform gestures, and to recognize speech and gestures.
In verbal performance and verbal recognition channels the preference for using the channel via text or via voice can be specified.
The nonverbal performance and nonverbal recognition channels specify the types of gesturing: “Nonverbal language”, “sign language” and “cued speech communication”.
All the features dependent on the language (speaking via text or voice, speaking recognition via text or voice, and sign/cued language use/recognition) use a language attribute for defining the concrete language skills.
Table 41 shows semantics of the AvatarCommunicationSkillsType.
The DefaultLanguage attribute specifies the avatar's preferred language for all the communication channels (it will be generally its native language). For each communication channel other languages that override this preference can be specified.
3.2.4.3 VerbalCommunicationType
3.2.4.3.1 Syntax
3.2.4.3.2 Semantics
Table 43 shows semantics of the VerbalCommunicationType.
The above Table 43 specifies the avatar's verbal communication skills. Voice and text can be defined as enabled, disabled or preferred in order to specify what the preferred verbal mode is and the availability of the other.
Optional tag ‘Language’ defines the preferred language for verbal communication. If it is not specified, the value of the attribute DefaultLanguage defined in the CommunicationSkills tag will be applied.
3.2.4.3.3 LanguageType
3.2.4.3.3.1 Syntax
3.2.4.3.3.2 Semantics
Table 45 shows semantics of the LanguageType.
Table 45 defines secondary communication skills for VerbalCommunication. In case it is not possible to use the preferred language (or the default language) defined for communicating with another avatar, these secondary languages will be applied.
3.2.4.3.3.3 CommunicationPreferenceType
3.2.4.3.3.3.1 Syntax
Table 46 shows a syntax of a CommunicationPreferenceType.
3.2.4.3.3.3.2 Semantics
Table 47 shows semantics of the CommunicationPreferenceType.
3.2.4.3.4 CommunicationPreferenceLevelType
3.2.4.3.4.1 Syntax
Table 48 shows a syntax of a Communication PreferenceLevelType.
3.2.4.3.4.2 Semantics
Table 49 shows semantics of Communication PreferenceLevelType.
3.2.4.4 NonVerbalCommunicationType
3.2.4.4.1 Syntax
3.2.4.4.2 Semantics
Table 51 shows semantics of the NonVerbalCommunicationType.
3.2.4.4.3 SignLanguageType
3.2.4.4.3.1 Syntax
3.2.4.4.3.2 Semantics
Table 53 shows semantics of the SignLanguageType.
Table 53 defines secondary communication skills for NonVerbalCommunication (sign or cued communication). In case it is not possible to use the preferred language (or the default language), these secondary languages will be applied.
3.2.5 AvatarPersonalityType
3.2.5.1 Syntax
3.2.5.2 Semantics
This tag defines the personality of the avatar. This definition is based on the OCEAN model, consisting of a set of characteristics of which a personality is composed. A combination of these characteristics constitutes a specific personality. Therefore, an avatar contains a subtag for each attribute defined in the OCEAN model: openness, conscientiousness, extraversion, agreeableness, and neuroticism.
The purpose of this tag is to provide the possibility to define the avatar personality that is desired, and that the architecture of the virtual world can interpret as the inhabitant wishes. It would be able to adapt the avatar's verbal and nonverbal communication to this personality. Moreover, emotions and moods that could be provoked by virtual world events, avatar-avatar communication or the real time flow, will be modulated by this base personality.
Table 55 shows semantics of the AvatarPersonalityType.
3.2.6 AvatarControlFeaturesType
3.2.6.1 Syntax
3.2.6.2 Semantics
Table 57 shows semantics of the AvatarControlFeaturesType.
3.2.6.3 Examples
Table 58 shows the description of controlling body and face features with the following semantics. The features control is given and works as a container.
3.2.6.4 ControlBodyFeaturesType
3.2.6.4.1 Syntax
3.2.6.4.2 Semantics
Table 60 shows semantics of the ControlBodyFeaturesType.
3.2.6.4.3 Examples
Table 61 shows the description of controlling body features with the following semantics. The body features control maps the user defined body feature points to the placeholders. Table 62 shows a set of the feature points that are mapped to the placeholders defined in the semantics.
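Since Tables 61 and 62 are not reproduced here, the following is a purely illustrative sketch of such a mapping; the placeholder names, the feature point identifiers, and the grouping shown below are hypothetical, and the normative placeholders are those defined in the semantics tables.

    <!-- illustrative sketch of a ControlBodyFeatures mapping; all identifiers are hypothetical -->
    <ControlBodyFeatures>
      <UpperBodyBones>
        <LeftHumerus>USER_FEATURE_POINT_07</LeftHumerus>
        <RightHumerus>USER_FEATURE_POINT_08</RightHumerus>
      </UpperBodyBones>
      <MiddleBodyBones>
        <Pelvis>USER_FEATURE_POINT_01</Pelvis>
      </MiddleBodyBones>
    </ControlBodyFeatures>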
3.2.6.5 ControlFaceFeaturesType
3.2.6.5.1 Syntax
3.2.6.5.2 Semantics
Table 64 shows semantics of the Control FaceFeaturesType.
3.2.6.5.3 OutlineType
3.2.6.5.3.1 Syntax
3.2.6.5.3.2 Semantics
Table 66 shows semantics of the OutlineType. The OutlineType contains 5 different types of outline depending upon the number of points forming the outline.
3.2.6.5.3.3 Outline4PointsType
3.2.6.5.3.3.1 Syntax
3.2.6.5.3.3.2 Semantics
Table 68 shows semantics of the Outline4PointsType. The points are numbered from the leftmost point proceeding counter-clockwise. For example, if there are 4 points at the left, top, right, bottom of the outline, they are Point1, Point2, Point3, Point4, respectively.
3.2.6.5.3.4 Outline5PointsType
3.2.6.5.3.4.1 Syntax
3.2.6.5.3.4.2 Semantics
Table 70 shows semantics of the Outline5PointsType. The points are numbered from the leftmost point proceeding counter-clockwise.
3.2.6.5.3.5 Outline8PointsType
3.2.6.5.3.5.1 Syntax
3.2.6.5.3.5.2 Semantics
Table 72 shows semantics of the Outline8PointsType. The points are numbered from the leftmost point proceeding counter-clockwise.
3.2.6.5.3.6 Outline14Points
3.2.6.5.3.6.1 Syntax
3.2.6.5.3.6.2 Semantics
Table 74 shows semantics of the Outline14Points. The points are numbered from the leftmost point proceeding counter-clockwise.
3.2.6.5.4 Examples
Table 75 shows the description of controlling face features with the following semantics. The face features control maps the user defined face feature points to the placeholders. Table 76 shows a set of the feature points that are mapped to the placeholders defined in the semantics.
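Since Tables 75 and 76 are not reproduced here, the following is a purely illustrative sketch; the coordinate values are hypothetical, and the element names follow the outline semantics described in this section rather than the normative syntax.

    <!-- illustrative sketch of a ControlFaceFeatures mapping; coordinate values are hypothetical -->
    <ControlFaceFeatures name="FaceFeatureControl">
      <HeadOutline>
        <Outline4Points>
          <!-- points numbered counter-clockwise from the leftmost point -->
          <Point1 x="30.0" y="100.0" z="0.0"/>
          <Point2 x="60.0" y="60.0" z="0.0"/>
          <Point3 x="90.0" y="100.0" z="0.0"/>
          <Point4 x="60.0" y="140.0" z="0.0"/>
        </Outline4Points>
      </HeadOutline>
      <!-- other outlines (eyes, eyebrows, ears, nose, mouth lip) follow the same pattern -->
    </ControlFaceFeatures>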
4. Virtual Object Metadata
4.1 Type of Virtual Object Metadata
Virtual object metadata as a (visual) representation of virtual objects inside the environment serves the following purposes:
- characterizes various kinds of objects within the VE,
- provides an interaction between virtual object and avatar,
- provides an interaction with the VE.
The “virtual object” element may include the following types of data in addition to the common virtual world object characteristics type:
- VO Appearance: contains the high-level description of the appearance and may refer to media containing the exact geometry, texture, and haptic properties,
- VO Animation: contains the description of a set of animation sequences that the object is able to perform and may refer to several media containing the exact (geometric transformations and deformations) animation parameters.
4.2 XSD
4.2.1 VirtualObjectType
4.2.1.1 Syntax
4.2.1.2 Semantics
Table 78 shows semantics of the VirtualObjectType.
4.2.2 VOAppearanceType
4.2.2.1 Syntax
4.2.2.2 Semantics
Table 80 shows semantics of the VOAppearanceType.
4.2.2.3 Examples
Table 81 shows the resource of a virtual object appearance with the following semantics. The VirtualObjectURL provides location information where the virtual object model is saved. The example shows the case in which the VirtualObjectURL value is “http://3DmodelDb.com/object_0001.3ds”.
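For illustration, such an appearance description might be written as follows; the enclosing element name is an assumption based on the VOAppearanceType semantics, while VirtualObjectURL is the element named in the text.

    <!-- illustrative sketch of a VOAppearance description -->
    <VOAppearance>
      <!-- location of the virtual object model -->
      <VirtualObjectURL>http://3DmodelDb.com/object_0001.3ds</VirtualObjectURL>
    </VOAppearance>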
4.2.3 VOAnimationType
4.2.3.1 Syntax
4.2.3.2 Semantics
Table 83 shows semantics of the VOAnimationType.
4.2.3.3 Examples
Table 84 shows the description of object animation information with the following semantics. Among all animations, a motion type animation of turning 360° is given. The animation resource is saved at “http://voAnimationdb.com/turn_360.bvh”, and the value of its identifier, AnimationID, is “3.” The animation shall be played once with a duration of 30.
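For illustration, such an animation description might look as follows; the attribute names and the resource element name are assumptions based on the VOAnimationType semantics, not the normative syntax.

    <!-- illustrative sketch of a VOAnimation description -->
    <VOAnimation animationID="3" duration="30">
      <VOMotion>
        <!-- motion-type animation of turning 360 degrees -->
        <AnimationResource>http://voAnimationdb.com/turn_360.bvh</AnimationResource>
      </VOMotion>
    </VOAnimation>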
The sensor control command receiver 5110 may receive a sensor control command representing a user intent via a sensor-based input device. The sensor-based input device may correspond to the sensor-based input device 101.
The avatar control information generator 5120 may generate avatar control information based on avatar information of the virtual world and the sensor control command. The avatar control information may include information used to map characteristics of the users onto the avatar of the virtual world according to the sensed facial expressions and body expressions.
The avatar information may include common characteristics of a virtual world object. The common characteristics may include, as metadata, at least one element of an Identification for identifying the virtual world object, a VWOSound, a VWOScent, a VWOControl, a VWOEvent, a VWOBehaviorModel, and VWOHapticProperties.
The Identification may include, as an element, at least one of a UserID for identifying a user associated with the virtual world object, an Ownership of the virtual world object, Rights, and Credits, and may include, as an attribute, at least one of a name of the virtual world object and a family with another virtual world object.
The VWOSound may include, as an element, a sound resource URL including at least one link to a sound file, and may include, as an attribute, at least one of a SoundID that is a unique identifier of an object sound, an intensity indicating a sound strength, a duration indicating a length of time where the sound lasts, a loop indicating a playing option, and a sound name.
The VWOScent may include, as an element, a scent resource URL including at least one link to a scent file, and may include, as an attribute, at least one of a ScentID that is a unique identifier of an object scent, an intensity indicating a scent strength, a duration indicating a length of time where the scent lasts, a loop indicating a playing option, and a scent name.
The VWOControl may include, as an element, a MotionFeatureControl that is a set of elements controlling a position, an orientation, and a scale of the virtual world object, and may include, as an attribute, a ControlID that is a unique identifier of control. In this instance, the MotionFeatureControl may include, as an element, at least one of a position of an object in a scene with a 3D floating point vector, an orientation of the object in a scene with the 3D floating point vector as an Euler angle, and a scale of the object in a scene expressed as the 3D floating point vector.
The VWOEvent may include, as an element, at least one of a Mouse that is a set of mouse event elements, a Keyboard that is a set of keyboard event elements, and a UserDefinedInput, and may include, as an attribute, an EventID that is a unique identifier of an event. The Mouse may include, as an element, at least one of a click, Double_Click, a LeftBttn_down that is an event taking place at the moment of holding down a left button of a mouse, a LeftBttn_up that is an event taking place at the moment of releasing the left button of the mouse, a RightBttn_down that is an event taking place at the moment of pushing a right button of the mouse, a RightBttn_up that is an event taking place at the moment of releasing the right button of the mouse, and a move that is an event taking place while changing a position of the mouse. Also, the Keyboard may include, as an element, at least one of a Key_Down that is an event taking place at the moment of holding down a keyboard button and a Key_Up that is an event taking place at the moment of releasing the keyboard button.
The VWOBehaviorModel may include, as an element, at least one of a BehaviorInput that is an input event for generating an object behavior and a BehaviorOutput that is an object behavior output according to the input event. In this instance, the BehaviorInput may include an EventID as an attribute, and the BehaviorOutput may include, as an attribute, at least one of a SoundID, a ScentID, and an AnimationID.
The VWOHapticProperties may include, as an attribute, at least one of a MaterialProperty that contains parameters characterizing haptic properties, a DynamicForceEffect that contains parameters characterizing force effects, and a TactileProperty that contains parameters characterizing tactile properties. In this instance, the MaterialProperty may include, as an attribute, at least one of a Stiffness of the virtual world object, a StaticFriction of the virtual world object, a DynamicFriction of the virtual world object, a Damping of the virtual world object, a Texture containing a link to a haptic texture file, and a mass of the virtual world object. Also, the DynamicForceEffect may include, as an attribute, at least one of a ForceField containing a link to a force field vector file and a MovementTrajectory containing a link to a force trajectory file. Also, the TactileProperty may include, as an attribute, at least one of a Temperature of the virtual world object, a Vibration of the virtual world object, a Current of the virtual world object, and TactilePatterns containing a link to a tactile pattern file.
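To make the structure of these common characteristics easier to follow, the Python sketch below mirrors them as nested records. This is only an illustrative sketch; the type and field names (VWOSoundSketch, VWOCommonSketch, and the dictionary keys noted in the comments) are hypothetical and do not reproduce the normative metadata schema.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VWOSoundSketch:
    """Hypothetical mirror of the VWOSound element and its attributes."""
    sound_resource_url: List[str]      # element: at least one link to a sound file
    sound_id: Optional[str] = None     # attribute: unique identifier of the object sound
    intensity: Optional[float] = None  # attribute: sound strength
    duration: Optional[float] = None   # attribute: length of time where the sound lasts
    loop: Optional[int] = None         # attribute: playing option
    name: Optional[str] = None         # attribute: sound name

@dataclass
class VWOCommonSketch:
    """Hypothetical container for the common characteristics of a virtual world object."""
    identification: dict                       # UserID, Ownership, Rights, Credits; name, family
    sound: Optional[VWOSoundSketch] = None     # VWOSound
    scent: Optional[dict] = None               # VWOScent: resource URL; ScentID, intensity, duration, loop, name
    control: Optional[dict] = None             # VWOControl: MotionFeatureControl (position, orientation, scale); ControlID
    event: Optional[dict] = None               # VWOEvent: Mouse, Keyboard, UserDefinedInput; EventID
    behavior_model: Optional[dict] = None      # VWOBehaviorModel: BehaviorInput (EventID) -> BehaviorOutput (SoundID, ScentID, AnimationID)
    haptic_properties: Optional[dict] = None   # VWOHapticProperties: MaterialProperty, DynamicForceEffect, TactileProperty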
The object information may include avatar information associated with an avatar of a virtual world, and the avatar information may include, as the metadata, at least one element of an AvatarAppearance, an AvatarAnimation, AvatarCommunicationSkills, an AvatarPersonality, AvatarControlFeatures, and AvatarCC, and may include, as an attribute, a Gender of the avatar.
The AvatarAppearance may include, as an element, at least one of a Body, a Head, Eyes, Ears, a Nose, a mouth lip (MouthLip), a Skin, a facial, a Nail, a BodyLook, a Hair, EyeBrows, a FacialHair, FacialCalibrationPoints, a PhysicalCondition, Clothes, Shoes, Accessories, and an AppearanceResource.
The AvatarAnimation may include at least one element of an Idle, a Greeting, a Dance, a Walk, a Moves, a Fighting, a Hearing, a Smoke, Congratulations, Common_Actions, Specific_Actions, a Facial_Expression, a Body_Expression, and an AnimationResource.
The AvatarCommunicationSkills may include, as an element, at least one of an InputVerbalCommunication, an InputNonVerbalCommunication, an OutputVerbalCommunication, and an OutputNonVerbalCommunication, and may include, as an attribute, at least one of a Name and a DefaultLanguage. In this instance, a verbal communication including the InputVerbalCommunication and OutputVerbalCommunication may include a language as the element, and may include, as the attribute, at least one of a voice, a text, and the language. The language may include, as an attribute, at least one of a name that is a character string indicating a name of the language and a preference for using the language in the verbal communication. Also, a communication preference including the preference may include a preference level of a communication of the avatar. The language may be set with a CommunicationPreferenceLevel including a preference level for each language that the avatar is able to speak or understand. Also, a nonverbal communication including the InputNonVerbalCommunication and the OutputNonVerbalCommunication may include, as an element, at least one of a SignLanguage and a CuedSpeechCommunication, and may include, as an attribute, a ComplementaryGesture. In this instance, the SignLanguage may include a name of a language as an attribute.
The AvatarPersonality may include, as an element, at least one of an openness, a conscientiousness, an extraversion, an agreeableness, and a neuroticism, and may selectively include a name of a personality.
The AvatarControlFeatures may include, as elements, ControlBodyFeatures that is a set of elements controlling moves of a body and ControlFaceFeatures that is a set of elements controlling moves of a face, and may selectively include a name of a control configuration as an attribute.
The ControlBodyFeatures may include, as an element, at least one of headBones, UpperBodyBones, DownBodyBones, and MiddleBodyBones. In this instance, the ControlFaceFeatures may include, as an element, at least one of a HeadOutline, a LeftEyeOutline, a RightEyeOutline, a LeftEyeBrowOutline, a RightEyeBrowOutline, a LeftEarOutline, a RightEarOutline, a NoseOutline, a MouthLipOutline, FacePoints, and MiscellaneousPoints, and may selectively include, as an attribute, a name of a face control configuration. In this instance, at least one of elements included in the ControlFaceFeatures may include, as an element, at least one of an Outline4Points having four points, an Outline5Points having five points, an Outline8Points having eight points, and an Outline14Points having fourteen points. Also, at least one of elements included in the ControlFaceFeatures may include a basic number of points and may selectively further include an additional point.
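As a brief illustration of the outline variants just described, the Python sketch below checks that an outline carries its basic number of points and then appends any optional additional points. The dictionary of point counts and the helper name build_outline are hypothetical; only the four outline variants and their point counts come from the description above.

# Basic point counts of the outline variants described above.
OUTLINE_BASIC_POINTS = {
    "Outline4Points": 4,
    "Outline5Points": 5,
    "Outline8Points": 8,
    "Outline14Points": 14,
}

def build_outline(kind, points, additional_points=()):
    """Return an outline as a list of (x, y, z) points, verifying the basic count."""
    basic = OUTLINE_BASIC_POINTS[kind]
    if len(points) != basic:
        raise ValueError(f"{kind} expects {basic} basic points, got {len(points)}")
    return list(points) + list(additional_points)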
The object information may include information associated with a virtual object. Information associated with the virtual object may include, as metadata for expressing a virtual object of the virtual environment, at least one element of a VOAppearance, a VOAnimation, and a VOCC.
When at least one link to an appearance file exists, the VOAppearance may include, as an element, a VirtualObjectURL that is an element including the at least one link.
The VOAnimation may include, as an element, at least one of a VOMotion, a VODeformation, and a VOAdditionalAnimation, and may include, as an attribute, at least one of an AnimationID, a Duration that is a length of time where an animation lasts, and a Loop that is a playing option.
The above avatar information may refer to descriptions made above with reference to
The avatar control information generator 5120 may generate avatar control information that is used to control characteristics of the users to be mapped onto the avatar of the virtual world based on the avatar information and the sensor control command. The sensor control command may be generated by sensing facial expressions and body motions of the users of the real world. The avatar characteristic controlling system 5100 may directly manipulate the avatar based on the avatar control information, or may transmit the avatar control information to a separate system of manipulating the avatar. When the avatar characteristic controlling system 5100 directly manipulates the avatar, the avatar characteristic controlling system 5100 may further include an avatar manipulation unit 5130.
The avatar manipulation unit 5130 may manipulate the avatar of the virtual world based on the avatar control information. As described above, the avatar control information may be used to control characteristics of the users to be mapped onto the avatar of the virtual world. Therefore, the avatar manipulation unit 5130 may manipulate the user intent of the real world to be adapted to the avatar of the virtual world based on the avatar control information.
In operation 5210, the avatar characteristic controlling system 5100 may receive a sensor control command representing the user intent through a sensor-based input device. The sensor-based input device may correspond to the sensor-based input device 101 of
In operation 5220, the avatar characteristic controlling system 5100 may generate avatar control information based on avatar information of the virtual world and the sensor control command. The avatar control information may include information used to map characteristics of the users onto the avatar of the virtual world according to the facial expressions and the body motions.
The avatar information may include common characteristics of a virtual world object. The common characteristics may include, as metadata, at least one element of an Identification for identifying the virtual world object, a VWOSound, a VWOScent, a VWOControl, a VWOEvent, a VWOBehaviorModel, and VWOHapticProperties.
The Identification may include, as an element, at least one of a UserID for identifying a user associated with the virtual world object, an Ownership of the virtual world object, Rights, and Credits, and may include, as an attribute, at least one of a name of the virtual world object and a family with another virtual world object.
The VWOSound may include, as an element, a sound resource URL including at least one link to a sound file, and may include, as an attribute, at least one of a SoundID that is a unique identifier of an object sound, an intensity indicating a sound strength, a duration indicating a length of time where the sound lasts, a loop indicating a playing option, and a sound name.
The VWOScent may include, as an element, a scent resource URL including at least one link to a scent file, and may include, as an attribute, at least one of a ScentID that is a unique identifier of an object scent, an intensity indicating a scent strength, a duration indicating a length of time where the scent lasts, a loop indicating a playing option, and a scent name.
The VWOControl may include, as an element, a MotionFeatureControl that is a set of elements controlling a position, an orientation, and a scale of the virtual world object, and may include, as an attribute, a ControlID that is a unique identifier of control. In this instance, the MotionFeatureControl may include, as an element, at least one of a position of an object in a scene with a 3D floating point vector, an orientation of the object in a scene with the 3D floating point vector as an Euler angle, and a scale of the object in a scene expressed as the 3D floating point vector.
The VWOEvent may include, as an element, at least one of a Mouse that is a set of mouse event elements, a Keyboard that is a set of keyboard event elements, and a UserDefinedInput, and may include, as an attribute, an EventID that is a unique identifier of an event. The Mouse may include, as an element, at least one of a click, Double_Click, a LeftBttn_down that is an event taking place at the moment of holding down a left button of a mouse, a LeftBttn_up that is an event taking place at the moment of releasing the left button of the mouse, a RightBttn_down that is an event taking place at the moment of pushing a right button of the mouse, a RightBttn_up that is an event taking place at the moment of releasing the right button of the mouse, and a move that is an event taking place while changing a position of the mouse. Also, the Keyboard may include, as an element, at least one of a Key_Down that is an event taking place at the moment of holding down a keyboard button and a Key_Up that is an event taking place at the moment of releasing the keyboard button.
The VWOBehaviorModel may include, as an element, at least one of a BehaviorInput that is an input event for generating an object behavior and a BehaviorOutput that is an object behavior output according to the input event. In this instance, the BehaviorInput may include an EventID as an attribute, and the BehaviorOutput may include, as an attribute, at least one of a SoundID, a ScentID, and an AnimationID.
The VWOHapticProperties may include, as an attribute, at least one of a MaterialProperty that contains parameters characterizing haptic properties, a DynamicForceEffect that contains parameters characterizing force effects, and a TactileProperty that contains parameters characterizing tactile properties. In this instance, the MaterialProperty may include, as an attribute, at least one of a Stiffness of the virtual world object, a StaticFriction of the virtual world object, a DynamicFriction of the virtual world object, a Damping of the virtual world object, a Texture containing a link to a haptic texture file, and a mass of the virtual world object. Also, the DynamicForceEffect may include, as an attribute, at least one of a ForceField containing a link to a force field vector file and a MovementTrajectory containing a link to a force trajectory file. Also, the TactileProperty may include, as an attribute, at least one of a Temperature of the virtual world object, a Vibration of the virtual world object, a Current of the virtual world object, and TactilePatterns containing a link to a tactile pattern file.
The object information may include avatar information associated with an avatar of a virtual world, and the avatar information may include, as the metadata, at least one element of an AvatarAppearance, an AvatarAnimation, AvatarCommunicationSkills, an AvatarPersonality, AvatarControlFeatures, and AvatarCC.
The AvatarAppearance may include, as an element, at least one of a Body, a Head, Eyes, Ears, a Nose, a mouth lip (MouthLip), a Skin, a facial, a Nail, a BodyLook, a Hair, EyeBrows, a FacialHair, FacialCalibrationPoints, a PhysicalCondition, Clothes, Shoes, Accessories, and an AppearanceResource.
The AvatarAnimation may include at least one element of an Idle, a Greeting, a Dance, a Walk, a Moves, a Fighting, a Hearing, a Smoke, Congratulations, Common_Actions, Specific_Actions, a Facial_Expression, a Body_Expression, and an AnimationResource.
The AvatarCommunicationSkills may include, as an element, at least one of an InputVerbalCommunication, an InputNonVerbalCommunication, an OutputVerbalCommunication, and an OutputNonVerbalCommunication, and may include, as an attribute, at least one of a Name and a DefaultLanguage. In this instance, a verbal communication including the InputVerbalCommunication and OutputVerbalCommunication may include a language as the element, and may include, as the attribute, at least one of a voice, a text, and the language. The language may include, as an attribute, at least one of a name that is a character string indicating a name of the language and a preference for using the language in the verbal communication. Also, a communication preference including the preference may include a preference level of a communication of the avatar. The language may be set with a CommunicationPreferenceLevel including a preference level for each language that the avatar is able to speak or understand. Also, a nonverbal communication including the InputNonVerbalCommunication and the OutputNonVerbalCommunication may include, as an element, at least one of a SignLanguage and a CuedSpeechCommunication, and may include, as an attribute, a ComplementaryGesture. In this instance, the SignLanguage may include a name of a language as an attribute.
The AvatarPersonality may include, as an element, at least one of an openness, a conscientiousness, an extraversion, an agreeableness, and a neuroticism, and may selectively include a name of a personality.
The AvatarControlFeatures may include, as elements, ControlBodyFeatures that is a set of elements controlling moves of a body and ControlFaceFeatures that is a set of elements controlling moves of a face, and may selectively include a name of a control configuration as an attribute.
The ControlBodyFeatures may include, as an element, at least one of headBones, UpperBodyBones, DownBodyBones, and MiddleBodyBones. In this instance, the ControlFaceFeatures may include, as an element, at least one of a HeadOutline, a LeftEyeOutline, a RightEyeOutline, a LeftEyeBrowOutline, a RightEyeBrowOutline, a LeftEarOutline, a RightEarOutline, a NoseOutline, a MouthLipOutline, FacePoints, and MiscellaneousPoints, and may selectively include, as an attribute, a name of a face control configuration. In this instance, at least one of elements included in the ControlFaceFeatures may include, as an element, at least one of an Outline4Points having four points, an Outline5Points having five points, an Outline8Points having eight points, and an Outline14Points having fourteen points. Also, at least one of elements included in the ControlFaceFeatures may include a basic number of points and may selectively further include an additional point.
The object information may include information associated with a virtual object. Information associated with the virtual object may include, as metadata for expressing a virtual object of the virtual environment, at least one element of a VOAppearance, a VOAnimation, and a VOCC.
When at least one link to an appearance file exists, the VOAppearance may include, as an element, a VirtualObjectURL that is an element including the at least one link.
The VOAnimation may include, as an element, at least one of a VOMotion, a VODeformation, and a VOAdditionalAnimation, and may include, as an attribute, at least one of an AnimationID, a Duration that is a length of time where an animation lasts, and a Loop that is a playing option.
The above avatar information may refer to descriptions made above with reference to
The avatar characteristic controlling system 5100 may generate avatar control information that is used to control characteristics of the users to be mapped onto the avatar of the virtual world based on the avatar information and the sensor control command. The sensor control command may be generated by sensing facial expressions and body motions of the users of the real world. The avatar characteristic controlling system 5100 may directly manipulate the avatar based on the avatar control information, or may transmit the avatar control information to a separate system of manipulating the avatar. When the avatar characteristic controlling system 5100 directly manipulates the avatar, the avatar characteristic controlling method may further include operation 5230.
In operation 5230, the avatar characteristic controlling system 5100 may manipulate the avatar of the virtual world based on the avatar control information. As described above, the avatar control information may be used to control characteristics of the users to be mapped onto the avatar of the virtual world. Therefore, the avatar characteristic controlling system 5100 may manipulate the user intent of the real world to be adapted to the avatar of the virtual world based on the avatar control information.
As described above, when employing an avatar characteristic controlling system or an avatar characteristic controlling method according to an embodiment, it is possible to effectively control characteristics of an avatar in a virtual world. In addition, it is possible to generate an arbitrary expression that is not definable in an animation by setting feature points for sensing a user face in a real world, and by generating a face of the avatar in the virtual world based on data collected in association with the feature points.
Referring to
The CI may be commands based on values input through the real world device or information relating to the commands. The CI may include sensory input device capabilities (SIDC), user sensory input preferences (USIP), and sensory input device commands (SDICmd).
An adaptation real world to virtual world (hereinafter, referred to as ‘adaptation RV’) may be implemented by a real world to virtual world engine (hereinafter, referred to as ‘RV engine’). The adaptation RV may convert real world information input using the real world device to information to be applicable in the virtual world, using the CI about motion, status, intent, feature, and the like of the user of the real world included in the sensor signal. The above described adaptation process may affect virtual world information (hereinafter, referred to as ‘VWI’).
The VWI may be information associated with the virtual world. For example, the VWI may be information associated with elements constituting the virtual world, such as a virtual object or an avatar. A change with respect to the VWI may be performed in the RV engine through commands of a virtual world effect metadata (VWEM) type, a virtual world preference (VWP) type, and a virtual world capability type.
Table 85 describes configurations described in
Referring to
Also, referring to
A section 5518 may signify a definition of a base element of the avatar control commands 5410. The avatar control commands 5410 may semantically signify commands for controlling an avatar.
A section 5520 may signify a definition of a root element of the avatar control commands 5410. The avatar control commands 5410 may indicate a function of the root element for metadata.
Sections 5519 and 5521 may signify a definition of the avatar control command base type 5411. The avatar control command base type 5411 may extend an avatar control command base type (AvatarCtrlCmdBasetype), and provide a base abstract type for a subset of types defined as part of the avatar control commands metadata types.
The any attributes 5412 may be an additional avatar control command.
According to an embodiment, the avatar control command base type 5411 may include avatar control command base attributes 5413 and any attributes 5414.
A section 5515 may signify a definition of the avatar control command base attributes 5413. The avatar control command base attributes 5413 may be instructions to display a group of attributes for the commands.
The avatar control command base attributes 5413 may include ‘id’, ‘idref’, ‘activate’, and ‘value’.
‘id’ may be identifier (ID) information for identifying individual identities of the avatar control command base type 5411.
‘idref’ may refer to elements that have an instantiated attribute of type id. ‘idref’ may be additional information with respect to ‘id’ for identifying the individual identities of the avatar control command base type 5411.
‘activate’ may signify whether an effect shall be activated. ‘true’ may indicate that the effect is activated, and ‘false’ may indicate that the effect is not activated. As for section 5516, ‘activate’ may have data of a “boolean” type, and may be optionally used.
‘value’ may describe an intensity of the effect in percentage according to a max scale defined within a semantic definition of individual effects. As for section 5517, ‘value’ may have data of “integer” type, and may be optionally used.
The any attributes 5414 may be instructions to provide an extension mechanism for including attributes from a namespace different from the target namespace. The included attributes may be XML streaming commands defined in ISO/IEC 21000-7 for the purpose of identifying process units and associating time information of the process units. For example, ‘si:pts’ may indicate a point at which the associated information is used in an application for processing.
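For illustration, the base attributes described in sections 5515 through 5517 could be mirrored as the simple record below. This is a hypothetical sketch; the class name and the use of Python types stand in for the XSD definitions, and only the attribute names and their meanings are taken from the description above.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AvatarCtrlCmdBaseAttributesSketch:
    """Hypothetical mirror of the avatar control command base attributes."""
    id: Optional[str] = None         # identifies an individual identity of the base type
    idref: Optional[str] = None      # refers to an element that has an instantiated attribute of type id
    activate: Optional[bool] = None  # True: the effect is activated, False: not activated (optional, boolean)
    value: Optional[int] = None      # intensity of the effect in percent of the defined max scale (optional, integer)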
A section 5622 may indicate a definition of an avatar control command appearance type.
According to an embodiment, the avatar control command appearance type may include an appearance control type, an animation control type, a communication skill control type, a personality control type, and a control control type.
A section 5623 may indicate an element of the appearance control type. The appearance control type may be a tool for expressing appearance control commands. Hereinafter, a structure of the appearance control type will be described in detail with reference to
Referring to
According to an embodiment, the elements of the appearance control type 5910 may include body, head, eyes, nose, lip, skin, face, nail, hair, eyebrows, facial hair, appearance resources, physical condition, clothes, shoes, and accessories.
Referring again to
Referring to
According to an embodiment, the elements of the communication skill control type 6010 may include input verbal communication, input nonverbal communication, output verbal communication, and output nonverbal communication.
Referring again to
Referring to
According to an embodiment, the elements of the personality control type 6110 may include openness, agreeableness, neuroticism, extraversion, and conscientiousness.
Referring again to
Referring to
According to an embodiment, the any attributes 6230 may include a motion priority 6231 and a speed 6232.
The motion priority 6231 may determine a priority when generating motions of an avatar by mixing animation and body and/or facial feature control.
The speed 6232 may adjust a speed of an animation. For example, in a case of an animation concerning a walking motion, the walking motion may be classified into a slowly walking motion, a moderately walking motion, and a quickly walking motion according to a walking speed.
The elements of the animation control type 6210 may include idle, greeting, dancing, walking, moving, fighting, hearing, smoking, congratulations, common actions, specific actions, facial expression, body expression, and animation resources.
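As a small illustration of how the speed attribute might select among the walking-motion variants mentioned above, consider the Python sketch below. The function name select_walk_clip, the threshold values, and the clip names are hypothetical; the description above only states that walking may be classified into slow, moderate, and quick variants according to a walking speed.

def select_walk_clip(walking_speed):
    """Pick a walking-animation variant from a speed value (arbitrary units)."""
    if walking_speed < 0.5:
        return "walk_slow"        # slowly walking motion
    if walking_speed < 1.5:
        return "walk_moderate"    # moderately walking motion
    return "walk_fast"            # quickly walking motion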
Referring again to
Referring to
According to an embodiment, the any attributes 6330 may include a motion priority 6331, a frame time 6332, a number of frames 6333, and a frame ID 6334.
The motion priority 6331 may determine a priority when generating motions of an avatar by mixing an animation with body and/or facial feature control.
The frame time 6332 may define a frame interval of motion control data. For example, the frame interval may be expressed in units of seconds.
The number of frames 6333 may optionally define a total number of frames for motion control.
The frame ID 6334 may indicate an order of each frame.
The elements of the control control type 6310 may include a body feature control 6340 and a face feature control 6350.
According to an embodiment, the body feature control 6340 may include a body feature control type. Also, the body feature control type may include elements of head bones, upper body bones, lower body bones, and middle body bones.
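The control control type and its frame-related attributes could be pictured with the Python sketch below: a sequence of frames, each carrying body and face feature values, together with the motion priority, frame time, and optional number of frames. The class and field names are hypothetical and only echo the semantics described above.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MotionFrameSketch:
    """Hypothetical single frame of motion control data."""
    frame_id: int                   # order of the frame
    body_features: Dict[str, list]  # e.g. head, upper body, lower body, and middle body bone values
    face_features: Dict[str, list]  # e.g. outline and face point values

@dataclass
class ControlControlSketch:
    """Hypothetical container mirroring the control control type attributes."""
    motion_priority: int                 # priority when mixing animation with feature control
    frame_time: float                    # frame interval of the motion control data (e.g. seconds)
    num_frames: Optional[int] = None     # optional total number of frames
    frames: List[MotionFrameSketch] = field(default_factory=list)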
Motions of an avatar of a virtual world may be associated with the animation control type and the control control type. The animation control type may include information associated with an order of an animation set, and the control control type may include information associated with motion sensing. To control the motions of the avatar of the virtual world, an animation or a motion sensing device may be used. Accordingly, an imaging apparatus of controlling the motions of the avatar of the virtual world according to an embodiment will be herein described in detail.
Referring to
The storage unit 6410 may include an animation clip, animation control information, and control control information. In this instance, the animation control information may include information indicating a part of an avatar the animation clip corresponds to and a priority. The control control information may include information indicating a part of an avatar motion data corresponds to and a priority. In this instance, the motion data may be generated by processing a value received from a motion sensor.
The animation clip may be moving picture data with respect to the motions of the avatar of the virtual world.
According to an embodiment, the avatar of the virtual world may be divided into each part, and the animation clip and motion data corresponding to each part may be stored. According to embodiments, the avatar of the virtual world may be divided into a facial expression, a head, an upper body, a middle body, and a lower body, which will be described in detail with reference to
Referring to
According to an embodiment, the animation clip and the motion data may be data corresponding to any one of the facial expression 6510, the head 6520, the upper body 6530, the middle body 6540, and the lower body 6550.
Referring again to
According to embodiments, the information indicating the part of the avatar the animation clip corresponds to may be information indicating any one of the facial expression, the head, the upper body, the middle body, and the lower body.
The animation clip corresponding to an arbitrary part of the avatar may have the priority. The priority may be determined by a user in the real world in advance, or may be determined by real-time input. The priority will be further described with reference to
According to embodiments, the animation control information may further include information associated with a speed of the animation clip corresponding to the arbitrary part of the avatar. For example, in a case of data indicating a walking motion as the animation clip corresponding to the lower body of the avatar, the animation clip may be divided into slowly walking motion data, moderately walking motion data, quickly walking motion data, and jumping motion data.
The control control information may include the information indicating the part of the avatar the motion data corresponds to and the priority. In this instance, the motion data may be generated by processing the value received from the motion sensor.
The motion sensor may be a sensor of a real world device for measuring motions, expressions, states, and the like of a user in the real world.
The motion data may be data obtained by receiving a value measured from the motions, the expressions, the states, and the like of the user of the real world, and by processing the received value to be applicable in the avatar of the virtual world.
For example, the motion sensor may measure position information with respect to arms and legs of the user of the real world. The position information may be expressed as ΘXreal, ΘYreal, and ΘZreal, that is, values of angles with respect to an x-axis, a y-axis, and a z-axis, and also as Xreal, Yreal, and Zreal, that is, position values on the x-axis, the y-axis, and the z-axis. Also, the motion data may be data processed to enable the values about the position information to be applicable in the avatar of the virtual world.
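A minimal sketch of that processing step, assuming angles arrive in degrees and positions in sensor units, is shown below in Python. The function name to_motion_data, the scale parameter, and the unit conversions are hypothetical; the description above only states that measured position and angle values are processed to be applicable in the avatar of the virtual world.

import math

def to_motion_data(x_real, y_real, z_real, tx_real, ty_real, tz_real, scale=1.0):
    """Convert raw sensor readings (positions on the x/y/z axes and angles about
    the x/y/z axes) into values applicable to an avatar of the virtual world."""
    position = (x_real * scale, y_real * scale, z_real * scale)  # Xreal, Yreal, Zreal
    orientation = (math.radians(tx_real),                        # ThetaXreal
                   math.radians(ty_real),                        # ThetaYreal
                   math.radians(tz_real))                        # ThetaZreal
    return {"position": position, "orientation": orientation}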
According to an embodiment, the avatar of the virtual world may be divided into each part, and the motion data corresponding to each part may be stored. According to embodiments, the motion data may be information indicating any one of the facial expression, the head, the upper body, the middle body, and the lower body of the avatar.
The motion data corresponding to an arbitrary part of the avatar may have the priority. The priority may be determined by the user of the real world in advance, or may be determined by real-time input. The priority of the motion data will be further described with reference to
The processing unit 6420 may compare the priority of the animation control information corresponding to a first part of an avatar and the priority of the control control information corresponding to the first part of the avatar to thereby determine data to be applicable in the first part of the avatar, which will be described in detail with reference to
Referring to
The animation clip 6610 may be a category of data with respect to motions of an avatar corresponding to an arbitrary part of an avatar of a virtual world. According to embodiments, the animation clip 6610 may be a category with respect to the animation clip corresponding to any one of a facial expression, a head, an upper body, a middle body, and a lower body of an avatar. For example, a first animation clip 6611 may be the animation clip corresponding to the facial expression of the avatar, and may be data concerning a smiling motion. A second animation clip 6612 may be the animation clip corresponding to the head of the avatar, and may be data concerning a motion of shaking the head from side to side. A third animation clip 6613 may be the animation clip corresponding to the upper body of the avatar, and may be data concerning a motion of raising arms up. A fourth animation clip 6614 may be the animation clip corresponding to the middle body of the avatar, and may be data concerning a motion of sticking out a butt. A fifth animation clip 6615 may be the animation clip corresponding to the lower body of the avatar, and may be data concerning a motion of bending one leg and stretching the other leg forward.
The corresponding part 6620 may be a category of data indicating a part of an avatar the animation clip corresponds to. According to embodiments, the corresponding part 6620 may be a category of data indicating any one of the facial expression, the head, the upper body, the middle body, and the lower body of the avatar which the animation clip corresponds to. For example, the first animation clip 6611 may be an animation clip corresponding to the facial expression of the avatar, and a first corresponding part 6621 may be expressed as ‘facial expression’. The second animation clip 6612 may be an animation clip corresponding to the head of the avatar, and a second corresponding part 6622 may be expressed as ‘head’. The third animation clip 6613 may be an animation clip corresponding to the upper body of the avatar, and a third corresponding part 6623 may be expressed as ‘upper body’. The fourth animation clip 6614 may be an animation clip corresponding to the middle body of the avatar, and a fourth corresponding part may be expressed as ‘middle body’. The fifth animation clip 6615 may be an animation clip corresponding to the lower body of the avatar, and a fifth corresponding part 6625 may be expressed as ‘lower body’.
The priority 6630 may be a category of values with respect to the priority of the animation clip. According to embodiments, the priority 6630 may be a category of values with respect to the priority of the animation clip corresponding to any one of the facial expression, the head, the upper body, the middle body, and the lower body of the avatar. For example, the first animation clip 6611 corresponding to the facial expression of the avatar may have a priority value of ‘5’. The second animation clip 6612 corresponding to the head of the avatar may have a priority value of ‘2’. The third animation clip 6613 corresponding to the upper body of the avatar may have a priority value of ‘5’. The fourth animation clip 6614 corresponding to the middle body of the avatar may have a priority value of ‘1’. The fifth animation clip 6615 corresponding to the lower body of the avatar may have a priority value of ‘1’. The priority value with respect to the animation clip may be determined by a user in the real world in advance, or may be determined by a real-time input.
Referring to
The motion data 6710 may be data obtained by processing values received from a motion sensor, and may be a category of the motion data corresponding to an arbitrary part of an avatar of a virtual world. According to embodiments, the motion data 6710 may be a category of the motion data corresponding to any one of a facial expression, a head, an upper body, a middle body, and a lower body of the avatar. For example, first motion data 6711 may be motion data corresponding to the facial expression of the avatar, and may be data concerning a grimacing motion of a user in the real world. In this instance, the data concerning the grimacing motion may be obtained such that the grimacing motion of the user of the real world is measured by the motion sensor, and the measured value is applicable in the facial expression of the avatar. Similarly, second motion data 6712 may be motion data corresponding to the head of the avatar, and may be data concerning a motion of lowering a head of the user of the real world. Third motion data 6713 may be motion data corresponding to the upper body of the avatar, and may be data concerning a motion of lifting arms of the user of the real world from side to side. Fourth motion data 6714 may be motion data corresponding to the middle body of the avatar, and may be data concerning a motion of shaking a butt of the user of the real world back and forth. Fifth motion data 6715 may be motion data corresponding to the lower body of the avatar, and may be data concerning a motion of spreading both legs of the user of the real world from side to side while bending.
The corresponding part 6720 may be a category of data indicating a part of an avatar the motion data corresponds to. According to embodiments, the corresponding part 6720 may be a category of data indicating any one of the facial expression, the head, the upper body, the middle body, and the lower body of the avatar that the motion data corresponds to. For example, since the first motion data 6711 is motion data corresponding to the facial expression of the avatar, a first corresponding part 6721 may be expressed as ‘facial expression’. Since the second motion data 6712 is motion data corresponding to the head of the avatar, a second corresponding part 6722 may be expressed as ‘head’. Since the third motion data 6713 is motion data corresponding to the upper body of the avatar, a third corresponding part 6723 may be expressed as ‘upper body’. Since the fourth motion data 6714 is motion data corresponding to the middle body of the avatar, a fourth corresponding part 6724 may be expressed as ‘middle body’. Since the fifth motion data 6715 is motion data corresponding to the lower body of the avatar, a fifth corresponding part 6725 may be expressed as ‘lower body’.
The priority 6730 may be a category of values with respect to the priority of the motion data. According to embodiments, the priority 6730 may be a category of values with respect to the priority of the motion data corresponding to any one of the facial expression, the head, the upper body, the middle body, and the lower body of the avatar. For example, the first motion data 6711 corresponding to the facial expression may have a priority value of ‘1’. The second motion data 6712 corresponding to the head may have a priority value of ‘5’. The third motion data 6713 corresponding to the upper body may have a priority value of ‘2’. The fourth motion data 6714 corresponding to the middle body may have a priority value of ‘5’. The fifth motion data 6715 corresponding to the lower body may have a priority value of ‘5’. The priority value with respect to the motion data may be determined by the user of the real world in advance, or may be determined by a real-time input.
Referring to
Motion object data may be data concerning motions of an arbitrary part of an avatar. The motion object data may include an animation clip and motion data. The motion object data may be obtained by processing values received from a motion sensor, or by being read from the storage unit of the imaging apparatus. According to embodiments, the motion object data may correspond to any one of a facial expression, a head, an upper body, a middle body, and a lower body of the avatar.
A database 6820 may be a database with respect to the animation clip. Also, the database 6830 may be a database with respect to the motion data.
The processing unit of the imaging apparatus according to an embodiment may compare a priority of animation control information corresponding to a first part of the avatar 6810 with a priority of control control information corresponding to the first part of the avatar 6810 to thereby determine data to be applicable in the first part of the avatar.
According to embodiments, a first animation clip 6821 corresponding to the facial expression 6811 of the avatar 6810 may have a priority value of ‘5’, and first motion data 6831 corresponding to the facial expression 6811 may have a priority value of ‘1’. Since the priority of the first animation clip 6821 is higher than the priority of the first motion data 6831, the processing unit may determine the first animation clip 6821 as the data to be applicable in the facial expression 6811.
Also, a second animation clip 6822 corresponding to the head 6812 may have a priority value of ‘2’, and second motion data 6832 corresponding to the head 6812 may have a priority value of ‘5’. Since the priority of the second motion data 6832 is higher than the priority of the second animation clip 6822, the processing unit may determine the second motion data 6832 as the data to be applicable in the head 6812.
Also, a third animation clip 6823 corresponding to the upper body 6813 may have a priority value of ‘5’, and third motion data 6833 corresponding to the upper body 6813 may have a priority value of ‘2’. Since the priority of the third animation clip 6823 is higher than the priority of the third motion data 6833, the processing unit may determine the third animation clip 6823 as the data to be applicable in the upper body 6813.
Also, a fourth animation clip 6824 corresponding to the middle body 6814 may have a priority value of ‘1’, and fourth motion data 6834 corresponding to the middle body 6814 may have a priority value of ‘5’. Since the priority of the fourth motion data 6834 is higher than the priority of the fourth animation clip 6824, the processing unit may determine the fourth motion data 6834 as the data to be applicable in the middle body 6814.
Also, a fifth animation clip 6825 corresponding to the lower body 6815 may have a priority value of ‘1’, and fifth motion data 6835 corresponding to the lower body 6815 may have a priority value of ‘5’. Since the priority of the fifth motion data 6835 is higher than the priority of the fifth animation clip 6825, the processing unit may determine the fifth motion data 6835 as the data to be applicable in the lower body 6815.
Accordingly, as for the avatar 6810, the facial expression 6811 may have the first animation clip 6821, the head 6812 may have the second motion data 6832, the upper body 6813 may have the third animation clip 6823, the middle body 6814 may have the fourth motion data 6834, and the lower body 6815 may have the fifth motion data 6835.
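The per-part selection just walked through can be summarized by the short Python sketch below, which uses the priority values of the example; choosing the animation clip when the two priorities are equal is an assumption, since the description does not state a tie-breaking rule, and all names here are illustrative.

animation_priority = {"facial expression": 5, "head": 2, "upper body": 5,
                      "middle body": 1, "lower body": 1}
motion_priority = {"facial expression": 1, "head": 5, "upper body": 2,
                   "middle body": 5, "lower body": 5}

def determine_sources(anim_prio, motion_prio):
    """For each avatar part, keep the data whose priority is higher."""
    chosen = {}
    for part in anim_prio:
        chosen[part] = "animation clip" if anim_prio[part] >= motion_prio[part] else "motion data"
    return chosen

# Yields: facial expression and upper body -> animation clip;
# head, middle body, and lower body -> motion data, matching the example above.
print(determine_sources(animation_priority, motion_priority))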
Data corresponding to an arbitrary part of the avatar 6810 may have a plurality of animation clips and a plurality of pieces of motion data. When a plurality of pieces of the data corresponding to the arbitrary part of the avatar 6810 is present, a method of determining data to be applicable in the arbitrary part of the avatar 6810 will be described in detail with reference to
Referring to
When the motion object data corresponding to a first part of the avatar is absent, the imaging apparatus may determine new motion object data obtained by being newly read or by being newly processed, as data to be applicable in the first part.
In operation 6920, when the motion object data corresponding to the first part is present, the processing unit may compare a priority of an existing motion object data and a priority of the new motion object data.
In operation 6930, when the priority of the new motion object data is higher than the priority of the existing motion object data, the imaging apparatus may determine the new motion object data as the data to be applicable in the first part of the avatar.
However, when the priority of the existing motion object data is higher than the priority of the new motion object data, the imaging apparatus may determine the existing motion object data as the data to be applicable in the first part.
In operation 6940, the imaging apparatus may determine whether all motion object data is determined.
When motion object data not yet verified is present, the imaging apparatus may repeatedly perform operations 6910 through 6940 with respect to all of the motion object data not yet determined.
In operation 6950, when all of the motion object data are determined, the imaging apparatus may associate the data having the highest priority from among the motion object data corresponding to each part of the avatar to thereby generate a moving picture of the avatar.
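Operations 6910 through 6950 amount to keeping, for every part of the avatar, the motion object data with the highest priority and then associating the winners. The Python sketch below follows that flow; keeping the existing data when the priorities are equal is an assumption, the association step itself is omitted, and the function and variable names are hypothetical.

def merge_motion_object_data(existing, new_items):
    """existing and new_items map an avatar part to a (data, priority) pair."""
    for part, (data, priority) in new_items.items():
        # Operations 6920 and 6930: the new data wins only when its priority is higher.
        if part not in existing or priority > existing[part][1]:
            existing[part] = (data, priority)
    # Operation 6950: the highest-priority data of every part would then be
    # associated into a moving picture of the avatar (association omitted here).
    return {part: data for part, (data, priority) in existing.items()}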
The processing unit of the imaging apparatus according to an embodiment may compare a priority of animation control information corresponding to each part of the avatar with a priority of control control information corresponding to each part of the avatar to thereby determine data to be applicable in each part of the avatar, and may associate the determined data to thereby generate a moving picture of the avatar. A process of determining the data to be applicable in each part of the avatar has been described in detail in
Referring to
In operation 7020, the imaging apparatus may extract information associated with a connection axis from motion object data corresponding to the part of the avatar. The motion object data may include an animation clip and motion data. The motion object data may include information associated with the connection axis.
In operation 7030, the imaging apparatus may verify whether motion object data not being associated is present.
When no motion object data remains to be associated, all pieces of data corresponding to each part of the avatar have been associated, and thus the process of generating the moving picture of the avatar is terminated.
In operation 7040, when the motion object data not being associated is present, the imaging apparatus may change, to a relative direction angle, a joint direction angle included in the connection axis extracted from the motion object data. According to embodiments, the joint direction angle included in the information associated with the connection axis may already be the relative direction angle. In this case, the imaging apparatus may proceed to operation 7050, omitting operation 7040.
Hereinafter, a method of changing the joint direction angle to the relative direction angle when the joint direction angle is an absolute direction angle will be described in detail according to an embodiment. Also, a case where an avatar of a virtual world is divided into a facial expression, a head, an upper body, a middle body, and a lower body will be described herein in detail.
According to embodiments, motion object data corresponding to the middle body of the avatar may include body center coordinates. The joint direction angle of the absolute direction angle may be changed to the relative direction angle based on a connection portion of the middle body including the body center coordinates.
The imaging apparatus may extract the information associated with the connection axis stored in the motion object data corresponding to the middle body of the avatar. The information associated with the connection axis may include a joint direction angle between a thoracic vertebra corresponding to a connection portion of the upper body of the avatar and a cervical vertebra corresponding to a connection portion of the head, a joint direction angle between the thoracic vertebra and a left clavicle, a joint direction angle between the thoracic vertebra and a right clavicle, a joint direction angle between a pelvis corresponding to a connection portion of the middle body and a left femur corresponding to a connection portion of the lower body, and a joint direction angle between the pelvis and the right femur.
For example, the joint direction angle between the pelvis and the right femur may be expressed as the following Equation 1.
A(ΘRightFemur) = RRightFemur_Pelvis·A(ΘPelvis)  [Equation 1]
where a function A(.) denotes a direction cosine matrix, RRightFemur_Pelvis denotes a rotational matrix with respect to the direction angle between the pelvis and the right femur, ΘRightFemur denotes a joint direction angle in the right femur of the lower body of the avatar, and ΘPelvis denotes a joint direction angle between the pelvis and the right femur.
Using Equation 1, the rotational matrix may be calculated as illustrated in the following Equation 2.
RRightFemur_Pelvis = A(ΘRightFemur)·A(ΘPelvis)^-1  [Equation 2]
The joint direction angle of the absolute direction angle may be changed to the relative direction angle based on the connection portion of the middle body of the avatar including the body center coordinates. For example, using the rotational matrix of Equation 2, a joint direction angle, that is, an absolute direction angle included in information associated with a connection axis, which is stored in the motion object data corresponding to the lower body of the avatar, may be changed to a relative direction angle as illustrated in the following Equation 3.
A(Θ′) = RRightFemur_Pelvis·A(Θ)  [Equation 3]
where Θ denotes the absolute joint direction angle stored in the motion object data and Θ′ denotes the corresponding relative direction angle.
Similarly, a joint direction angle, that is, an absolute direction angle included in information associated with a connection axis, which is stored in the motion object data corresponding to the head and upper body of the avatar, may be changed to a relative direction angle.
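A numerical sketch of the reconstructed Equations 1 through 3 is given below in Python with NumPy. The direction cosine matrix is built from Euler angles in a fixed z-y-x order, which is an assumption since the description does not specify a rotation order, and the function names dcm and absolute_to_relative are hypothetical.

import numpy as np

def dcm(rx, ry, rz):
    """Direction cosine matrix A(.) from Euler angles (radians) about x, y, z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def absolute_to_relative(theta_right_femur, theta_pelvis, theta_absolute):
    """Derive RRightFemur_Pelvis from the pelvis and right-femur angles (Equation 2),
    then map an absolute joint direction angle to a relative one (Equation 3)."""
    R = dcm(*theta_right_femur) @ np.linalg.inv(dcm(*theta_pelvis))  # Equation 2
    return R @ dcm(*theta_absolute)                                  # Equation 3: A(Theta') = R * A(Theta)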
In operation 7050, when the joint direction angle has been changed to the relative direction angle through the above-described method, the imaging apparatus may associate the motion object data corresponding to each part of the avatar, using the information associated with the connection axis stored in the motion object data corresponding to each part.
The imaging apparatus may return to operation 7030, and may verify whether the motion object data not being associated is present.
When no motion object data remains to be associated, all pieces of data corresponding to each part of the avatar have been associated, and thus the process of generating the moving picture of the avatar is terminated.
Referring to
The motion object data 7110 corresponding to the first part may be any one of an animation clip and motion data. Similarly, the motion object data 7120 corresponding to the second part may be any one of an animation clip and motion data.
According to an embodiment, the storage unit of the imaging apparatus may further store information associated with a connection axis 7101 of the animation clip, and the processing unit may associate the animation clip and the motion data based on the information associated with the connection axis 7101. Also, the processing unit may associate the animation clip and another animation clip based on the information associated with the connection axis 7101 of the animation clip.
According to embodiments, the processing unit may extract the information associated with the connection axis from the motion data, and enable the connection axis 7101 of the animation clip and a connection axis of the motion data to correspond to each other to thereby associate the animation clip and the motion data. Also, the processing unit may associate the motion data and another motion data based on the information associated with the connection axis extracted from the motion data. The information associated with the connection axis was described in detail in
Hereinafter, an example of the imaging apparatus adapting a face of a user in a real world onto a face of an avatar of a virtual world will be described.
The imaging apparatus may sense the face of the user of the real world using a real world device, for example, an image sensor, and adapt the sensed face onto the face of the avatar of the virtual world. When the avatar of the virtual world is divided into a facial expression, a head, an upper body, a middle body, and a lower body, the imaging apparatus may sense the face of the user of the real world to thereby adapt the sensed face of the real world onto the facial expression and the head of the avatar of the virtual world.
According to embodiments, the imaging apparatus may sense feature points of the face of the user of the real world to collect data about the feature points, and may generate the face of the avatar of the virtual world using the data about the feature points.
As described above, when an imaging apparatus according to an embodiment is used, animation control information used for controlling an avatar of a virtual world and control metadata with respect to a structure of motion data may be provided. A motion of the avatar in which an animation clip corresponding to a part of the avatar of the virtual world is associated with motion data obtained by sensing a motion of a user of a real world may be generated by comparing a priority of the animation clip with a priority of the motion data, and by determining data corresponding to the part of the avatar.
Referring to
The virtual world server 7230 may receive the regularized control command from the terminal 7210. In this example, a virtual world engine 7231 included in the virtual world server 7230 may generate information associated with a virtual world object by converting the regularized control command according to the virtual world object corresponding to the regularized control command. The virtual world server 7230 may transmit information associated with the virtual world object back to the terminal 7210 (7232). The virtual world object may include an avatar and a virtual object. In this example, in the virtual world object, the avatar may indicate an object in which a user appearance is reflected, and the virtual object may indicate a remaining object other than the avatar.
The terminal 7210 may control the virtual world object based on information associated with the virtual world object. For example, the terminal 7210 may control the virtual world object by generating the control command based on information associated with the virtual world object, and by transmitting the control command to a display 7240 (7213). That is, the display 7240 may display information associated with the virtual world based on the transmitted control command (7213).
Even though, in the aforementioned embodiment, the adaptation engine 7211 included in the terminal 7210 generates the regularized control command based on the information 7221 received from the real world device 7220, this is only an example. According to another embodiment, the terminal 7210 may directly transmit the received information 7221 to the virtual world server 7230 without directly generating the regularized control command. Alternatively, the terminal 7210 may only regularize the received information 7221 and then transmit it to the virtual world server 7230 (7212). For example, the terminal 7210 may transmit the received information 7221 to the virtual world server 7230 by converting the control input to be suitable for the virtual world and by regularizing the sensor input. In this example, the virtual world server 7230 may generate information associated with the virtual world object by generating the regularized control command based on the transmitted information 7212, and by converting the regularized control command according to the virtual world object corresponding to the regularized control command. The virtual world server 7230 may transmit information associated with the generated virtual world object to the terminal 7210 (7232). That is, the virtual world server 7230 may process all of the processes of generating information associated with the virtual world object based on the information 7221 received from the real world device 7220.
The virtual world server 7230 may be employed so that content processed in each of a plurality of terminals may be played back alike in a display of each of the terminals, through communication with the plurality of terminals.
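The exchange described above can be pictured with the Python sketch below: a server object converts regularized control commands into virtual world object information and returns it to every registered terminal so that their displays stay consistent. The class and method names (VirtualWorldServerSketch, TerminalSketch, handle_control_command, receive_object_info) are hypothetical and only illustrate the flow of information.

class TerminalSketch:
    """Hypothetical terminal keeping a local view of the virtual world objects."""
    def __init__(self, name):
        self.name = name
        self.local_view = {}

    def receive_object_info(self, object_id, control):
        # A real terminal would generate a display control command from this information.
        self.local_view[object_id] = control

class VirtualWorldServerSketch:
    """Hypothetical server converting regularized control commands into object information."""
    def __init__(self):
        self.terminals = []
        self.object_state = {}

    def register(self, terminal):
        self.terminals.append(terminal)

    def handle_control_command(self, command):
        # Convert the regularized control command according to the target virtual world object.
        self.object_state[command["object_id"]] = command["control"]
        # Transmit the resulting virtual world object information back to all terminals.
        for terminal in self.terminals:
            terminal.receive_object_info(command["object_id"], command["control"])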
Compared to the terminal 7210, the terminal 7310 may further include a virtual world engine 7312. That is, instead of communicating with the virtual world server 7230, described with reference to
A first terminal 7410 may receive information from a real world device 7420, and may generate information associated with the virtual world object based on information received from an adaptation engine 7411 and a virtual world engine 7412. Also, the first terminal 7410 may control the virtual world object by generating a control command based on information associated with the virtual world object and by transmitting the control command to a first display 7430.
Similarly, a second terminal 7440 may receive information from a real world device 7450, and may generate information associated with a virtual world object based on the received information using an adaptation engine 7441 and a virtual world engine 7442. Also, the second terminal 7440 may control the virtual world object by generating a control command based on the information associated with the virtual world object and by transmitting the control command to a second display 7460.
In this example, the first terminal 7410 and the second terminal 7440 may exchange information associated with the virtual world object between the virtual world engines 7412 and 7442 (7470). For example, when a plurality of users control avatars in a single virtual world, information associated with the virtual world object may need to be exchanged between the first terminal 7410 and the second terminal 7440 (7470) so that content processed in each of the first terminal 7410 and the second terminal 7440 is applied identically to the single virtual world.
Although only two terminals are described herein for ease of description, the same exchange of information associated with the virtual world object may be performed among three or more terminals.
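A minimal sketch of the exchange between the virtual world engines of two terminals is given below; the class names (VirtualWorldEngine, PeerTerminal) and the dictionary-based state are hypothetical, and the exchange 7470 is modeled as a direct method call rather than a network transfer.

```python
# Illustrative sketch only: names and the direct call standing in for the
# exchange between terminals are assumptions.

class VirtualWorldEngine:
    """Generates and applies information associated with virtual world objects."""

    def __init__(self):
        self.world_state = {}  # local copy of the shared virtual world

    def generate(self, regularized_command: dict) -> dict:
        return {"object": "avatar", "state": regularized_command["command"]}

    def apply(self, object_info: dict) -> None:
        self.world_state[object_info["object"]] = object_info["state"]


class PeerTerminal:
    """Terminal holding its own virtual world engine and a link to a peer terminal."""

    def __init__(self, engine: VirtualWorldEngine):
        self.engine = engine
        self.peer = None  # the other terminal participating in the same virtual world

    def connect(self, other: "PeerTerminal") -> None:
        self.peer, other.peer = other, self

    def handle(self, regularized_command: dict) -> None:
        object_info = self.engine.generate(regularized_command)
        self.engine.apply(object_info)           # apply to the local world state
        if self.peer is not None:
            self.peer.engine.apply(object_info)  # exchange so both worlds stay identical
```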
A terminal 7510 may communicate with a virtual world server 7530 and may further include a virtual world sub-engine 7512. That is, an adaptation engine 7511 included in the terminal 7510 may generate a regularized control command based on information received from a real world device 7520, and the terminal 7510 may generate information associated with a virtual world object based on the regularized control command. The terminal 7510 may then control the virtual world object based on the information associated with the virtual world object, for example, by generating a control command based on the information associated with the virtual world object and by transmitting the control command to a display 7540. In this example, the terminal 7510 may receive virtual world information from the virtual world server 7530, generate the control command based on the virtual world information and the information associated with the virtual world object, and transmit the control command to the display 7540 to display overall information of the virtual world. For example, because avatar information may be processed in the virtual world by the terminal 7510 itself, the virtual world server 7530 may transmit only the virtual world information required by the terminal 7510, for example, information associated with a virtual object or another avatar.
In this example, the terminal 7510 may transmit, to the virtual world server 7530, a processing result obtained by controlling the virtual world object, and the virtual world server 7530 may update the virtual world information based on the processing result. Because the virtual world server 7530 updates the virtual world information based on the processing result of the terminal 7510, virtual world information reflecting the processing result may be provided to other terminals. The virtual world server 7530 may process the virtual world information using a virtual world engine 7531.
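The configuration in which the terminal processes its own avatar with a virtual world sub-engine and the server merges the processing results may be sketched as follows; the names (VirtualWorldServer, SubEngineTerminal, virtual_world_info_for, update) and the data shapes are assumptions for illustration only.

```python
# Illustrative sketch only: names and data shapes are assumptions.

class VirtualWorldServer:
    """Keeps overall virtual world information and merges terminal processing results."""

    def __init__(self):
        self.world_info = {}  # virtual world information, keyed by terminal

    def virtual_world_info_for(self, terminal_id: str) -> dict:
        # Send only what the terminal needs, e.g. other avatars and virtual objects.
        return {k: v for k, v in self.world_info.items() if k != terminal_id}

    def update(self, terminal_id: str, processing_result: dict) -> None:
        # Fold the terminal's processing result into the shared world information.
        self.world_info[terminal_id] = processing_result


class SubEngineTerminal:
    """Terminal that processes its own avatar with a virtual world sub-engine."""

    def __init__(self, terminal_id: str, server: VirtualWorldServer):
        self.terminal_id = terminal_id
        self.server = server

    def step(self, regularized_command: dict) -> dict:
        local_result = {"avatar_state": regularized_command["command"]}    # sub-engine output
        remote_info = self.server.virtual_world_info_for(self.terminal_id)
        self.server.update(self.terminal_id, local_result)                 # report processing result
        # Control command combining local object information and server-side world information.
        return {"display_command": {"local": local_result, "remote": remote_info}}
```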
The methods according to the above-described example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of the example embodiments, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM discs and DVDs; magneto-optical media; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like.
Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa. Any one or more of the software modules described herein may be executed by a dedicated processor unique to that module or by a processor common to one or more of the modules. The described methods may be executed on a general-purpose computer or processor, or may be executed on a particular machine such as the image processing apparatus described herein.
For example, a metadata structure defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of an avatar may be recorded in a non-transitory computer-readable storage medium. In this instance, at least one of a HeadOutline, a LeftEyeOutline, a RightEyeOutline, a LeftEyeBrowOutline, a RightEyeBrowOutline, a LeftEarOutline, a RightEarOutline, a NoseOutline, a MouthLipOutline, FacePoints, and MiscellaneousPoints may be represented based on the avatar face feature point. A non-transitory computer-readable storage medium according to another embodiment may include a first set of instructions to store animation control information and control control information, and a second set of instructions to associate an animation clip and motion data generated from a value received from a motion sensor, based on the animation control information corresponding to each part of an avatar and the control control information. The animation control information and the control control information are described above.
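As an illustration of the association between the animation clip and the motion data, the following sketch compares, for each avatar part, the priority recorded in the animation control information with the priority recorded in the control control information and keeps the higher-priority data; the names (AnimationControl, MotionControl, select_per_part) and the tie-breaking rule favoring the animation clip when priorities are equal are assumptions.

```python
# Illustrative sketch only: the names and the tie-breaking rule (animation clip
# wins on equal priority) are assumptions.

from dataclasses import dataclass
from typing import Any, Dict

AVATAR_PARTS = ("facial_expression", "head", "upper_body", "middle_body", "lower_body")


@dataclass
class AnimationControl:
    clip: Any       # reference to the animation clip
    part: str       # avatar part the clip corresponds to
    priority: int


@dataclass
class MotionControl:
    motion_data: Any  # motion data generated from motion sensor values
    part: str         # avatar part the motion data corresponds to
    priority: int


def select_per_part(animation: Dict[str, AnimationControl],
                    control: Dict[str, MotionControl]) -> Dict[str, Any]:
    """For each avatar part, keep the data whose control information has the higher priority."""
    selected = {}
    for part in AVATAR_PARTS:
        anim, ctrl = animation.get(part), control.get(part)
        if anim and ctrl:
            selected[part] = anim.clip if anim.priority >= ctrl.priority else ctrl.motion_data
        elif anim:
            selected[part] = anim.clip
        elif ctrl:
            selected[part] = ctrl.motion_data
    # The data selected per part may then be associated (e.g., via connection axes)
    # into a single motion picture of the avatar.
    return selected
```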
Although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the disclosure, the scope of which is defined by the claims and their equivalents.
Claims
1. An object controlling system, comprising:
- a control command receiver to receive a control command with respect to an object of a virtual environment; and
- an object controller to control the object based on the received control command and object information of the object.
2. The object controlling system of claim 1, wherein:
- the object information comprises common characteristics of a virtual world object, and
- the common characteristics comprises, as metadata, at least one element of an Identification for identifying the virtual world object, a virtual world object sound (VWOSound), a virtual world object scent (VWOScent), a virtual world object control (VWOControl), a virtual world object event (VWOEvent), a virtual world object behavior model (VWOBehaviorModel), and virtual world object haptic properties (VWOHapticProperties).
3. The object controlling system of claim 2, wherein the Identification comprises, as an element, at least one of a user identifier (UserID) for identifying a user associated with the virtual world object, an Ownership of the virtual world object, Rights, and Credits, and comprises, as an attribute, at least one of a name of the virtual world object and a family with another virtual world object.
4. The object controlling system of claim 2, wherein:
- the VWOSound comprises, as an element, a sound resource uniform resource locator (URL) including at least one link to a sound file, and comprises, as an attribute, at least one of a sound identifier (SoundID) that is a unique identifier of an object sound, an intensity indicating a sound strength, a duration indicating a length of time where the sound lasts, a loop indicating a playing option, and a sound name.
5. The object controlling system of claim 2, wherein:
- the VWOScent comprises, as an element, a scent resource URL including at least one link to a scent file, and comprises, as an attribute, at least one of a scent identifier (ScentID) that is a unique identifier of an object scent, an intensity indicating a scent strength, a duration indicating a length of time where the scent lasts, a loop indicating a playing option, and a scent name.
6. The object controlling system of claim 2, wherein:
- the VWOControl comprises, as an element, a motion feature control (MotionFeatureControl) that is a set of elements controlling a position, an orientation, and a scale of the virtual world object, and comprises, as an attribute, a control identifier (ControlID) that is a unique identifier of control.
7. The object controlling system of claim 6, wherein:
- the MotionFeatureControl comprises, as an element, at least one of a position of an object in a scene with a three-dimensional (3D) floating point vector, an orientation of the object in a scene with the 3D floating point vector as an Euler angle, and a scale of the object in a scene expressed as the 3D floating point vector.
8. The object controlling system of claim 2, wherein:
- the VWOEvent comprises, as an element, at least one of a Mouse that is a set of mouse event elements, a Keyboard that is a set of keyboard event elements, and a user defined input (UserDefinedInput), and comprises, as an attribute, an event identifier (EventID) that is a unique identifier of an event.
9. The object controlling system of claim 8, wherein:
- the Mouse comprises, as an element, at least one of a click, a double click (Double_Click), a left button down (LeftBttn_down) that is an event taking place at the moment of holding down a left button of a mouse, a left button up (LeftBttn_up) that is an event taking place at the moment of releasing the left button of the mouse, a right button down (RightBttn_down) that is an event taking place at the moment of pushing a right button of the mouse, a right button up (RightBttn_up) that is an event taking place at the moment of releasing the right button of the mouse, and a move that is an event taking place while changing a position of the mouse.
10. The object controlling system of claim 8, wherein:
- the Keyboard comprises, as an element, at least one of a key down (Key_Down) that is an event taking place at the moment of holding down a keyboard button and a key up (Key_Up) that is an event taking place at the moment of releasing the keyboard button.
11. The object controlling system of claim 2, wherein:
- the VWOBehaviorModel comprises, as an element, at least one of a behavior input (BehaviorInput) that is an input event for generating an object behavior and a behavior output (BehaviorOutput) that is an object behavior output according to the input event.
12. The object controlling system of claim 11, wherein:
- the BehaviorInput comprises an EventID as an attribute, and
- the BehaviorOutput comprises, as an attribute, at least one of a SoundID, a ScentID, and an animation identifier (AnimationID).
13. The object controlling system of claim 2, wherein:
- the VWOHapticProperties comprises, as an attribute, at least one of a material property (MaterialProperty) that contains parameters characterizing haptic properties, a dynamic force effect (DynamicForceEffect) that contains parameters characterizing force effects, and a tactile property (TactileProperty) that contains parameters characterizing tactile properties.
14. The object controlling system of claim 13, wherein:
- the MaterialProperty comprises, as an attribute, at least one of a Stiffness of the virtual world object, a static friction (StaticFriction) of the virtual world object, a dynamic friction (DynamicFriction) of the virtual world object, a Damping of the virtual world object, a Texture containing a link to a haptic texture file, and a Mass of the virtual world object.
15. The object controlling system of claim 13, wherein:
- the DynamicForceEffect comprises, as an attribute, at least one of a force field (ForceField) containing a link to a force field vector file and a movement trajectory (MovementTrajectory) containing a link to a force trajectory file.
16. The object controlling system of claim 13, wherein:
- the TactileProperty comprises, as an attribute, at least one of a Temperature of the virtual world object, a Vibration of the virtual world object, a Current of the virtual world object, and tactile patterns (TactilePatterns) containing a link to a tactile pattern file.
17. The object controlling system of claim 1, wherein:
- the object information comprises avatar information associated with an avatar of a virtual world, and
- the avatar information comprises, as the metadata, at least one element of an avatar appearance (AvatarAppearance), an avatar animation (AvatarAnimation), avatar communication skills (AvatarCommunicationSkills), an avatar personality (AvatarPersonality), avatar control features (AvatarControlFeatures), and avatar common characteristics (AvatarCC), and comprises, as an attribute, a Gender of the avatar.
18. The object controlling system of claim 17, wherein:
- the AvatarAppearance comprises, as an element, at least one of a Body, a Head, Eyes, Ears, a Nose, a mouth lip (MouthLip), a Skin, a Facial, a Nail, a body look (BodyLook), a Hair, eye brows (EyeBrows), a facial hair (FacialHair), facial calibration points (FacialCalibrationPoints), a physical condition (PhysicalCondition), Clothes, Shoes, Accessories, and an appearance resource (AppearanceResource).
19. The object controlling system of claim 18, wherein:
- the PhysicalCondition comprises, as an element, at least one of a body strength (BodyStrength) and a body flexibility (BodyFlexibility).
20. The object controlling system of claim 17, wherein:
- the AvatarAnimation comprises at least one element of an Idle, a Greeting, a Dance, a Walk, a Moves, a Fighting, a Hearing, a Smoke, Congratulations, common action (Common_Actions), specific actions (Specific_Actions), a facial expression (Facial_Expression), a body expression (Body_Expression), and an animation resource (AnimationResource).
21. The object controlling system of claim 17, wherein:
- the AvatarCommunicationSkills comprises, as an element, at least one of an input verbal communication (InputVerbalCommunication), an input nonverbal communication (InputNonVerbalCommunication), an output verbal communication (OutputVerbalCommunication), and an output nonverbal communication (OutputNonVerbalCommunication), and comprises, as an attribute, at least one of a Name and a default language (DefaultLanguage).
22. The object controlling system of claim 21, wherein:
- a verbal communication comprising the InputVerbalCommunication and OutputVerbalCommunication comprises a language as the element, and comprises, as the attribute, at least one of a voice, a text, and the language.
23. The object controlling system of claim 22, wherein:
- the language comprises, as an attribute, at least one of a name that is a character string indicating a name of the language and a preference for using the language in the verbal communication.
24. The object controlling system of claim 23, wherein a communication preference including the preference comprises a preference level of a communication of the avatar.
25. The object controlling system of claim 22, wherein the language is set with a communication preference level (CommunicationPreferenceLevel) including a preference level for each language that the avatar is able to speak or understand.
26. The object controlling system of claim 21, wherein a nonverbal communication comprising the InputNonVerbalCommunication and the OutputNonVerbalCommunication comprises, as an element, at least one of a sign language (SignLanguage) and a cued speech communication (CuedSpeechCommunication), and comprises, as an attribute, a complementary gesture (ComplementaryGesture).
27. The object controlling system of claim 26, wherein the SignLanguage comprises a name of a language as an attribute.
28. The object controlling system of claim 17, wherein the AvatarPersonality comprises, as an element, at least one of an openness, a conscientiousness, an extraversion, an agreeableness, and a neuroticism, and selectively comprises a name of a personality.
29. The object controlling system of claim 17, wherein the AvatarControlFeatures comprises, as elements, control body features (ControlBodyFeatures) that is a set of elements controlling moves of a body and control face features (ControlFaceFeatures) that is a set of elements controlling moves of a face, and selectively comprises a name of a control configuration as an attribute.
30. The object controlling system of claim 29, wherein the ControlBodyFeatures comprises, as an element, at least one of head bones (headBones), upper body bones (UpperBodyBones), down body bones (DownBodyBones), and middle body bones (MiddleBodyBones).
31. The object controlling system of claim 29, wherein the ControlFaceFeatures comprises, as an element, at least one of a head outline (HeadOutline), a left eye outline (LeftEyeOutline), a right eye outline (RightEyeOutline), a left eye brow outline (LeftEyeBrowOutline), a right eye brow outline (RightEyeBrowOutline), a left ear outline (LeftEarOutline), a right ear outline (RightEarOutline), a nose outline (NoseOutline), a mouth lip outline (MouthLipOutline), face points (FacePoints), and miscellaneous points (MiscellaneousPoints), and selectively comprises, as an attribute, a name of a face control configuration.
32. The object controlling system of claim 31, wherein at least one of elements comprised in the ControlFaceFeatures comprises, as an element, at least one of an outline (Outline4Points) having four points, an outline (Outline5Points) having five points, an outline (Outline8Points) having eight points, and an outline (Outline14Points) having fourteen points.
33. The object controlling system of claim 31, wherein at least one of elements comprised in the ControlFaceFeatures comprises a basic number of points and selectively further comprises an additional point.
34. The object controlling system of claim 1, wherein:
- the object information comprises information associated with a virtual object, and
- information associated with the virtual object comprises, as metadata for expressing a virtual object of the virtual environment, at least one element of a virtual object appearance (VOAppearance), a virtual object animation (VOAnimation), and virtual object common characteristics (VOCC).
35. The object controlling system of claim 34, wherein when at least one link to an appearance file exists, the VOAppearance comprises, as an element, a virtual object URL (VirtualObjectURL) that is an element including the at least one link.
36. The object controlling system of claim 34, wherein the VOAnimation comprises, as an element, at least one of a virtual object motion (VOMotion), a virtual object deformation (VODeformation), and a virtual object additional animation (VOAdditionalAnimation), and comprises, as an attribute, at least one of an animation identifier (AnimationID), a Duration that is a length of time where an animation lasts, and a Loop that is a playing option.
37. The object controlling system of claim 1, wherein when the object is an avatar, the object controller controls the avatar based on the received control command and metadata defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of the avatar.
38. The object controlling system of claim 1, wherein:
- when the object is an avatar of a virtual world, the control command is generated by sensing a facial expression and a body motion of a user of a real world, and
- the object controller controls the object to map characteristics of the user to the avatar of the virtual world according to the facial expression and the body motion.
39. An object controlling system, comprising:
- a controller to control a virtual world object of a virtual world using a real world device,
- wherein the virtual world object comprises an avatar and a virtual object, and comprises, as metadata, common characteristics of the avatar and the virtual object, and
- the common characteristics comprises at least one element of an Identification for identifying the virtual world object, a virtual world object sound (VWOSound), a virtual world object scent (VWOScent), a virtual world object control (VWOControl), a virtual world object event (VWOEvent), a virtual world object behavior model (VWOBehaviorModel), and virtual world object haptic properties (VWOHapticProperties).
40. An object controlling system, comprising:
- a controller to control a virtual world object of a virtual world using a real world device,
- wherein the virtual world object comprises an avatar and a virtual object, and comprises avatar information associated with the avatar, and
- the avatar information comprises at least one element of an avatar appearance (AvatarAppearance), an avatar animation (AvatarAnimation), avatar communication skills (AvatarCommunicationSkills), an avatar personality (AvatarPersonality), avatar control features (AvatarControlFeatures), and avatar common characteristics (AvatarCC), and comprises, as an attribute, a Gender of the avatar.
41. An object controlling system, comprising:
- a controller to control a virtual world object of a virtual world using a real world device,
- wherein the virtual world object comprises an avatar and a virtual object, and comprises, as metadata for expressing the virtual object of a virtual environment, information associated with the virtual object, and
- information associated with the virtual object comprises at least one element of a virtual object appearance (VOAppearance), a virtual object animation (VOAnimation), and virtual object common characteristics (VOCC).
42. An object controlling system, comprising:
- a control command generator to generate a regularized control command based on information received from a real world device;
- a control command transmitter to transmit the regularized control command to a virtual world server; and
- an object controller to control a virtual world object based on information associated with the virtual world object received from the virtual world server.
43. An object controlling system, comprising:
- an information generator to generate information associated with a corresponding virtual world object by converting a regularized control command received from a terminal according to the virtual world object; and
- an information transmitter to transmit information associated with the virtual world object to the terminal,
- wherein the regularized control command is generated based on information received by the terminal from a real world device.
44. An object controlling system, comprising:
- an information transmitter to transmit, to a virtual world server, information received from a real world device; and
- an object controller to control a virtual world object based on information associated with the virtual world object that is received from the virtual world server according to the transmitted information.
45. An object controlling system, comprising:
- a control command generator to generate a regularized control command based on information received from a terminal;
- an information generator to generate information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object; and
- an information transmitter to transmit information associated with the virtual world object to the terminal,
- wherein the received information comprises information received by the terminal from a real world device.
46. An object controlling system, comprising:
- a control command generator to generate a regularized control command based on information received from a real world device;
- an information generator to generate information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object; and
- an object controller to control the virtual world object based on information associated with the virtual world object.
47. An object controlling system, comprising:
- a control command generator to generate a regularized control command based on information received from a real world device;
- an information generator to generate information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object;
- an information exchanging unit to exchange information associated with the virtual world object with information associated with a virtual world object of another object controlling system; and
- an object controller to control the virtual world object based on information associated with the virtual world object and the exchanged information associated with the virtual world object of the other object controlling system.
48. An object controlling system, comprising:
- an information generator to generate information associated with a virtual world object based on information received from a real world device and virtual world information received from a virtual world server;
- an object controller to control the virtual world object based on information associated with the virtual world object; and
- a processing result transmitter to transmit, to the virtual world server, a processing result according to controlling of the virtual world object.
49. An object controlling system, comprising:
- an information transmitter to transmit virtual world information to a terminal; and
- an information update unit to update the virtual world information based on a processing result received from the terminal,
- wherein the processing result comprises a control result of a virtual world object based on information received by the terminal from a real world device, and the virtual world information.
50. The object controlling system of claim 42, wherein the object controller controls the virtual world object by generating a control command based on information associated with the virtual world object and transmitting the generated control command to a display.
51. A method of controlling an object in an object controlling system, the method comprising:
- receiving a control command with respect to an object of a virtual environment; and
- controlling the object based on the received control command and object information of the object.
52. The method of claim 51, wherein:
- the object information comprises common characteristics of a virtual world object, and
- the common characteristics comprises, as metadata, at least one element of an Identification for identifying the virtual world object, a virtual world object sound (VWOSound), a virtual world object scent (VWOScent), a virtual world object control (VWOControl), a virtual world object event (VWOEvent), a virtual world object behavior model (VWOBehaviorModel), and virtual world object haptic properties (VWOHapticProperties).
53. The method of claim 51, wherein:
- the object information comprises avatar information associated with an avatar of a virtual world, and
- the avatar information comprises, as the metadata, at least one element of an avatar appearance (AvatarAppearance), an avatar animation (AvatarAnimation), avatar communication skills (AvatarCommunicationSkills), an avatar personality (AvatarPersonality), avatar control features (AvatarControlFeatures), and avatar common characteristics (AvatarCC), and comprises, as an attribute, a Gender of the avatar.
54. The method of claim 51, wherein:
- the object information comprises information associated with a virtual object, and
- information associated with the virtual object comprises, as metadata for expressing a virtual object of the virtual environment, at least one element of a virtual object appearance (VOAppearance), a virtual object animation (VOAnimation), and virtual object common characteristics (VOCC).
55. The method of claim 51, wherein, when the object is an avatar, the controlling comprises controlling the avatar based on the received control command and metadata defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of the avatar.
56. The method of claim 51, wherein:
- when the object is an avatar of a virtual world, the control command is generated by sensing a facial expression and a body motion of a user of a real world, and
- the controlling comprises controlling the object to map characteristics of the user to the avatar of the virtual world according to the facial expression and the body motion.
57. An object controlling method, comprising:
- controlling a virtual world object of a virtual world using a real world device,
- wherein the virtual world object comprises an avatar and a virtual object, and comprises, as metadata, common characteristics of the avatar and the virtual object, and
- the common characteristics comprises at least one element of an Identification for identifying the virtual world object, a virtual world object sound (VWOSound), a virtual world object scent (VWOScent), a virtual world object control (VWOControl), a virtual world object event (VWOEvent), a virtual world object behavior model (VWOBehaviorModel), and virtual world object haptic properties (VWOHapticProperties).
58. An object controlling method, comprising:
- controlling a virtual world object of a virtual world using a real world device,
- wherein the virtual world object comprises an avatar and a virtual object, and comprises avatar information associated with the avatar, and
- the avatar information comprises at least one element of an avatar appearance (AvatarAppearance), an avatar animation (AvatarAnimation), avatar communication skills (AvatarCommunicationSkills), an avatar personality (AvatarPersonality), avatar control features (AvatarControlFeatures), and avatar common characteristics (AvatarCC), and comprises, as an attribute, a Gender of the avatar.
59. An object controlling method, comprising:
- controlling a virtual world object of a virtual world using a real world device,
- wherein the virtual world object comprises an avatar and a virtual object, and comprises, as metadata for expressing the virtual object of a virtual environment, information associated with the virtual object, and
- information associated with the virtual object comprises at least one element of a virtual object appearance (VOAppearance), a virtual object animation (VOAnimation), and virtual object common characteristics (VOCC).
60. An object controlling method, comprising:
- generating a regularized control command based on information received from a real world device;
- transmitting the regularized control command to a virtual world server; and
- controlling a virtual world object based on information associated with the virtual world object received from the virtual world server.
61. An object controlling method, comprising:
- generating information associated with a corresponding virtual world object by converting a regularized control command received from a terminal according to the virtual world object; and
- transmitting information associated with the virtual world object to the terminal,
- wherein the regularized control command is generated based on information received by the terminal from a real world device.
62. An object controlling method, comprising:
- transmitting, to a virtual world server, information received from a real world device; and
- controlling a virtual world object based on information associated with the virtual world object that is received from the virtual world server according to the transmitted information.
63. An object controlling method, comprising:
- generating a regularized control command based on information received from a terminal;
- generating information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object; and
- transmitting information associated with the virtual world object to the terminal,
- wherein the received information comprises information received by the terminal from a real world device.
64. An object controlling method, comprising:
- generating a regularized control command based on information received from a real world device;
- generating information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object; and
- controlling the virtual world object based on information associated with the virtual world object.
65. An object controlling method, comprising:
- generating a regularized control command based on information received from a real world device;
- generating information associated with a corresponding virtual world object by converting the regularized control command according to the virtual world object;
- exchanging information associated with the virtual world object with information associated with a virtual world object of another object controlling system; and
- controlling the virtual world object based on information associated with the virtual world object and the exchanged information associated with the virtual world object of the other object controlling system.
66. An object controlling method, comprising:
- generating information associated with a virtual world object based on information received from a real world device and virtual world information received from a virtual world server;
- controlling the virtual world object based on information associated with the virtual world object; and
- transmitting, to the virtual world server, a processing result according to controlling of the virtual world object.
67. An object controlling method, comprising:
- transmitting virtual world information to a terminal; and
- updating the virtual world information based on a processing result received from the terminal,
- wherein the processing result comprises a control result of a virtual world object based on information received by the terminal from a real world device, and the virtual world information.
68. The object controlling method according to any one of claims 60, 62, and 64 through 66, wherein the controlling of the virtual world object comprises controlling the virtual world object by generating a control command based on information associated with the virtual world object and transmitting the generated control command to a display.
69. A non-transitory computer-readable storage medium storing a program to implement the method according to any one of claims 51 through 68.
70. A non-transitory computer-readable storage medium storing a metadata structure, wherein an avatar face feature point and a body feature point for controlling a facial expression and a motion of an avatar are defined.
71. The non-transitory computer-readable storage medium of claim 70, wherein at least one of a head outline (HeadOutline), a left eye outline (LeftEyeOutline), a right eye outline (RightEyeOutline), a left eye brow outline (LeftEyeBrowOutline), a right eye brow outline (RightEyeBrowOutline), a left ear outline (LeftEarOutline), a right ear outline (RightEarOutline), a nose outline (NoseOutline), a mouth lip outline (MouthLipOutline), face points (FacePoints), and miscellaneous points (MiscellaneousPoints) is expressed based on the avatar face feature point.
72. An imaging apparatus comprising:
- a storage unit to store an animation clip, animation control information, and control control information, the animation control information including information indicating a part of an avatar the animation clip corresponds to and a priority, and the control control information including information indicating a part of an avatar motion data corresponds to and a priority, the motion data being generated by processing a value received from a motion sensor; and
- a processing unit to compare a priority of animation control information corresponding to a first part of the avatar with a priority of control control information corresponding to the first part of the avatar, and to determine data to be applicable to the first part of the avatar.
73. The imaging apparatus of claim 72, wherein the processing unit compares the priority of the animation control information corresponding to each part of the avatar with the priority of the control control information corresponding to each part of the avatar, to determine data to be applicable to each part of the avatar, and associates the determined data to generate a motion picture of the avatar.
74. The imaging apparatus of claim 72, wherein:
- information associated with a part of an avatar that each of the animation clip and the motion data corresponds to is information indicating that each of the animation clip and the motion data corresponds to one of a facial expression, a head, an upper body, a middle body, and a lower body of the avatar.
75. The imaging apparatus of claim 72, wherein the animation control information further comprises information associated with a speed of an animation of the avatar.
76. The imaging apparatus of claim 72, wherein:
- the storage unit further stores information associated with a connection axis of the animation clip, and
- the processing unit associates the animation clip with the motion data based on information associated with the connection axis of the animation clip.
77. The imaging apparatus of claim 76, wherein the processing unit extracts information associated with a connection axis from the motion data, and associates the animation clip and the motion data by enabling the connection axis of the animation clip to correspond to the connection axis of the motion data.
78. A non-transitory computer-readable storage medium storing a program implemented in a computer system comprising a processor and a memory, the non-transitory computer-readable storage medium comprising:
- a first set of instructions to store animation control information and control control information; and
- a second set of instructions to associate an animation clip and motion data generated from a value received from a motion sensor, based on the animation control information corresponding to each part of an avatar and the control control information,
- wherein the animation control information comprises information associated with a corresponding animation clip, and an identifier indicating the corresponding animation clip corresponds to one of a facial expression, a head, an upper body, a middle body, and a lower body of an avatar, and
- the control control information comprises an identifier indicating real-time motion data corresponds to one of the facial expression, the head, the upper body, the middle body, and the lower body of an avatar.
79. The non-transitory computer-readable storage medium of claim 78, wherein:
- the animation control information further comprises a priority, and
- the control control information further comprises a priority.
80. The non-transitory computer-readable storage medium of claim 79, wherein the second set of instructions compares a priority of animation control information corresponding to a first part of an avatar with a priority of control control information corresponding to the first part of the avatar, to determine data to be applicable to the first part of the avatar.
81. The non-transitory computer-readable storage medium of claim 78, wherein the animation control information further comprises information associated with a speed of an animation of the avatar.
82. The non-transitory computer-readable storage medium of claim 78, wherein the second set of instructions extracts information associated with a connection axis from the motion data, and associates the animation clip and the motion data by enabling the connection axis of the animation clip to correspond to the connection axis of the motion data.
83. An object controlling system, comprising:
- a control command receiver to receive a control command with respect to an object of a virtual environment; and
- an object controller to control the object based on the received control command and object information of the object, the object information comprising:
- common characteristics of a virtual world object comprising, as metadata, at least one element of an Identification for identifying the virtual world object, a virtual world object sound (VWOSound), a virtual world object scent (VWOScent), a virtual world object control (VWOControl), a virtual world object event (VWOEvent), a virtual world object behavior model (VWOBehaviorModel), and virtual world object haptic properties (VWOHapticProperties); and
- avatar information associated with an avatar of a virtual world comprising, as metadata, at least one element of an avatar appearance (AvatarAppearance), an avatar animation (AvatarAnimation), avatar communication skills (AvatarCommunicationSkills), an avatar personality (AvatarPersonality), avatar control features (AvatarControlFeatures), and avatar common characteristics (AvatarCC), and comprising, as an attribute, a Gender of the avatar.
84. The object controlling system of claim 83, wherein:
- the object information comprises information associated with a virtual object, and
- information associated with the virtual object comprises, as metadata for expressing a virtual object of the virtual environment, at least one element of a virtual object appearance (VOAppearance), a virtual object animation (VOAnimation), and virtual object common characteristics (VOCC).
85. The object controlling system of claim 83, wherein when the object is an avatar, the object controller controls the avatar based on the received control command and metadata defining an avatar face feature point and a body feature point for controlling a facial expression and a motion of the avatar.
86. The object controlling system of claim 83, wherein:
- when the object is an avatar of a virtual world, the control command is generated by sensing a facial expression and a body motion of a user of a real world, and
- the object controller controls the object to map characteristics of the user to the avatar of the virtual world according to the facial expression and the body motion.
Type: Application
Filed: May 8, 2010
Publication Date: Feb 14, 2013
Applicant: SAMSUNG ELECTRONICS CO., LTD. (Suwon-si)
Inventors: Seung Ju Han (Yongin-si), Jae Joon Han (Yongin-si), Jeong Hwan Ahn (Yongin-si), Hyun Jeong Lee (Yongin-si), Wong Chul Bang (Yongin-si), Joon Ah Park (Yongin-si)
Application Number: 13/319,456
International Classification: G06T 13/40 (20110101); G06F 3/033 (20060101); G06T 15/00 (20110101); G06F 3/01 (20060101);