AUGMENTED REALITY SYSTEM INDEXED IN THREE DIMENSIONS

A portable computerized device is configured to display a three dimensional instruction graphic. The device includes a camera device capturing an image, wherein the image includes location data related to a token. The portable computerized device is configured to display the three dimensional instruction graphic based upon the location data.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This disclosure claims the benefit of U.S. Provisional Application No. 61/701,039, filed on Sep. 14, 2012, and U.S. Provisional Application No. 61/752,010, filed on Jan. 14, 2013, which are hereby incorporated by reference.

FIELD OF THE DISCLOSURE

The present disclosure relates generally to an augmented or virtual reality system indexed in three dimensions. In particular, examples of the present disclosure are related to use of an augmented reality system indexed in three dimensions presenting a virtual instructor.

BACKGROUND

The statements in this section merely provide background information related to the present disclosure. Accordingly, such statements are not intended to constitute an admission of prior art.

Augmented reality includes methods wherein computerized images are displayed to augment a view or experience in the real world. In one example, a computer generated image is presented upon or superimposed over a series of images captured by a camera associated with the view screen. Virtual reality, on the other hand, presents images in an entirely separate environment, with all elements upon the display being computer generated. Examples of augmented reality displays include heads-up displays (HUDs), lines projected on a television image of a football field showing a first down mark on the field, and a glow projected on a television image of a hockey puck.

Smart-phones, tablet computers, and other similar portable computerized devices utilize camera devices to interact with their environment. In one exemplary embodiment, a smart-phone can capture an image of a quick response code (QR code), and an instruction can be provided to the phone based upon the image. In one example, the phone can be instructed to access a particular webpage over a communications network based upon the information provided by the QR code. Similarly, a two-dimensional barcode can be used by a portable computerized device to provide an input to the device. A handheld unit in a store can permit a user to enter items onto a gift registry operated by the store based upon a scanned input.

SUMMARY

A portable computerized device is configured to display a three dimensional instruction graphic. The device includes a camera device capturing an image, wherein the image includes location data related to a token. The portable computerized device is configured to display the three dimensional instruction graphic based upon the location data.

BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments of the present disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.

FIG. 1 is an illustration of a portable computerized device including a camera feature reading information from a token placed upon the ground, in accordance with the present disclosure;

FIG. 2 is an illustration of a plurality of positions from which portable computerized devices can be located and view a token, in accordance with the present disclosure;

FIG. 3 is an illustration of an exemplary yoga mat utilized as a token, with two exemplary portable computerized devices illustrating a virtual yoga instructor with an orientation based upon the location of the portable computerized device with respect to the token, in accordance with the present disclosure;

FIG. 4 illustrates operation of an exemplary three dimensional model instruction program operating a first aid program, in accordance with the present disclosure;

FIG. 5 illustrates operation of an exemplary three dimensional model instruction program operating a martial arts program, in accordance with the present disclosure;

FIG. 6 illustrates an exemplary three dimensional model instruction program illustrating instructions to install a cable to a computer, in accordance with the present disclosure;

FIG. 7 is a schematic illustrating an exemplary portable computerized device in communication with an exemplary three dimensional model instruction server, in accordance with the present disclosure;

FIG. 8 is a schematic illustrating an exemplary three dimensional model instruction server, in accordance with the present disclosure; and

FIG. 9 is a schematic illustrating an exemplary portable computerized device configured to implement processes disclosed herein, in accordance with the present disclosure.

Corresponding reference characters indicate corresponding components throughout the several views of the drawings. Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present disclosure. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present disclosure.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one having ordinary skill in the art that the specific details need not be employed to practice the present disclosure. In other instances, well-known materials or methods have not been described in detail in order to avoid obscuring the present disclosure.

Reference throughout this specification to “one embodiment”, “an embodiment”, “one example” or “an example” means that a particular feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, “one example” or “an example” in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures or characteristics may be combined in any suitable combinations and/or sub-combinations in one or more embodiments or examples. In addition, it is appreciated that the figures provided herewith are for explanation purposes to persons ordinarily skilled in the art and that the drawings are not necessarily drawn to scale.

Computerized devices and servers available over the Internet operate three dimensional computer models. Such models are known in the art and will not be described in detail herein. Such models can be manipulated to generate a changing display, e.g., changing a perspective upon the three dimensionally modeled object, by an input that is used to determine a point of view for the generated display. For example, a program can be created showing a three dimensional model of a car, and a user can manipulate a point of view by providing an input to a slider graphic. By moving a button with an exemplary mouse device, the graphic representation of the motor vehicle can be rotated in a horizontal plane through 360 degrees based upon the user's input to the mouse device. Similarly, the vehicle can be rotated to view a top or a bottom of the car through a second slider or, for example, by monitoring motion of the mouse device in both X and Y axes of input.
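The mapping from such a slider or mouse input to a viewing angle can be expressed compactly. The following Python sketch is illustrative only and is not taken from the disclosure; the function name, the 0-to-1 slider range, and the placeholder geometry are assumptions made for the example.

```python
# A minimal sketch of mapping a slider or mouse input to a viewing rotation
# for a 3D model, using plain rotation matrices (illustrative only).
import numpy as np

def yaw_pitch_to_rotation(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Build a rotation matrix from horizontal (yaw) and vertical (pitch) input."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    r_yaw = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(yaw), 0.0, np.cos(yaw)]])
    r_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(pitch), -np.sin(pitch)],
                        [0.0, np.sin(pitch), np.cos(pitch)]])
    return r_yaw @ r_pitch

# Example: a slider position of 0..1 mapped onto a full 360-degree horizontal turn.
slider = 0.25
view_rotation = yaw_pitch_to_rotation(yaw_deg=360.0 * slider, pitch_deg=0.0)
model_vertices = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])  # placeholder geometry
rotated = model_vertices @ view_rotation.T                     # vertices seen from the new viewpoint
```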

Augmented reality and virtual reality programs can monitor a captured image and use information from the captured image as an input. Image recognition software and related programming is known in the art and will not be described in detail herein. According to one embodiment, a spatial relationship of a known object can be determined based upon how the object appears in a captured image. Such a known object can be defined as a token for use by an augmented or virtual reality program. For example, if a one dollar bill, a well-known pattern that can be programmed for identification within a computerized program, is laid upon a table and a program is utilized to analyze the dimensions of the image of the dollar bill, a determination can be made of a distance and an orientation of the dollar bill relative to the device capturing the image. This distance and/or orientation of the known object or token in a captured image can be used by an augmented or virtual reality program as an input for manipulating a three dimensional model.
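One way such a distance and orientation could be recovered is a standard perspective-n-point solution over the known corner geometry of the token. The Python sketch below assumes OpenCV and a calibrated camera; the corner pixel coordinates, camera intrinsics, and bill-sized dimensions are illustrative placeholders, and the detection step that would supply the corners is outside the disclosure.

```python
# A hedged sketch of recovering a token's distance and orientation from one image,
# assuming the token's real-world corner coordinates are known (a bill-sized rectangle here).
import cv2
import numpy as np

# Real-world token corners in meters, on the z = 0 plane (token lying flat).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.156, 0.0, 0.0],
                          [0.156, 0.066, 0.0],
                          [0.0, 0.066, 0.0]], dtype=np.float32)

# Matching corner locations detected in the captured image (pixels) -- illustrative values.
image_points = np.array([[410.0, 620.0], [690.0, 605.0],
                         [705.0, 760.0], [420.0, 785.0]], dtype=np.float32)

# Intrinsics of the device camera, assumed known from calibration.
camera_matrix = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 360.0],
                          [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    distance_m = float(np.linalg.norm(tvec))      # how far the token is from the camera
    rotation_matrix, _ = cv2.Rodrigues(rvec)      # how the token is oriented relative to the camera
    print(f"token distance: {distance_m:.2f} m")
```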

Augmented or virtual reality can be used to help a user improve their experience or acquire more detailed information about a topic. For example, a program can be operated on a computerized device to assist a user in learning about a skill or activity. A process for displaying a three dimensional computer generated image of an instructor upon a portable computerized device is disclosed.

Some users of portable computerized devices utilize such devices to research and gather information on activities that they are doing. As used herein, the term “portable computerized device” can refer to a number of computerized devices, such as smart-phones, laptop computers, tablet computers, and eye glasses configured with a processor and capable of displaying graphics within a view of the user. A user may wish to utilize a portable computerized device to watch a demonstration of or instruction on an activity the user wishes to learn. Such a demonstration would be enhanced by the ability of the user to utilize a portable computerized device to view the demonstration or instruction as it would appear in the user's actual environment through an augmented reality system.

A three dimensional model of an instructor can be displayed upon a portable computerized device, permitting a user to view the instruction from a number of different points of view. A wide variety of instruction topics can be programmed for display. In one exemplary embodiment, a yoga instruction program can be operated, whereby a yoga position or a series of yoga exercises can be displayed to the user. The user can change a point of view for the program, enabling the user to see from a number of different angles what movements the instructor is exemplifying. A progression of yoga exercises can be made available, with varying levels of difficulty ranging from novice to expert, such that the program can be used and marketed as a comprehensive training program. In one embodiment, a virtual reality program can be provided, whereby the user can manipulate a point of view for the instructor through a slider input displayed upon a screen of a smart-phone device. In another embodiment, a yoga mat with a token imprinted thereupon can be provided, and an augmented reality program can utilize an image captured of the yoga mat as a token to manipulate a point of view of the displayed yoga instructor. In one embodiment, an image of a token captured by a camera device can be utilized as a first input to determine a point of view for a three dimensional program, and a user input to the device can be utilized as a second input to determine the point of view. For example, a user viewing a token-oriented model from a front facing point of view can tap an option displayed upon the screen of an exemplary tablet device, and the displayed model can be changed to a left facing point of view.
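Combining the two inputs described above can be as simple as composing the token-derived orientation with an offset accumulated from user taps. The Python sketch below is a minimal illustration of that idea; the class, its 90-degree step, and the identity placeholder standing in for the token rotation are assumptions for the example, not elements of the disclosure.

```python
# A minimal sketch of combining a token-derived viewpoint (first input) with
# accumulated user taps (second input); illustrative only.
import numpy as np

def yaw_matrix(deg: float) -> np.ndarray:
    a = np.radians(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

class ViewpointController:
    def __init__(self):
        self.user_offset_deg = 0.0                      # second input: accumulated tap offsets

    def on_tap(self, step_deg: float = 90.0):
        self.user_offset_deg = (self.user_offset_deg + step_deg) % 360.0

    def model_rotation(self, token_rotation: np.ndarray) -> np.ndarray:
        # First input: orientation estimated from the token in the camera image.
        return token_rotation @ yaw_matrix(self.user_offset_deg)

controller = ViewpointController()
controller.on_tap()                                     # one tap: front view becomes a left-facing view
rotation = controller.model_rotation(np.eye(3))         # identity used as a placeholder token pose
```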

In another exemplary embodiment, a first aid instruction program can be operated. For example, an instructor can be displayed going through sequential instructions on how to perform cardiopulmonary resuscitation (CPR). In such an instance, an instructor and a virtual patient can be displayed for view by the user, with an audio message describing important aspects of the displayed procedure at each step. An exemplary user can pause the displayed sequence during chest compressions and slowly repeat the display through a chest compression, while changing point of view as necessary, to visualize proper hand placement and compression depth. Other options can be changed by the user as desired, for example, changing an age, gender, or medical condition of the virtual patient. In one example, a CPR mannequin for use by the user can be utilized as a token for an augmented reality program, such that the user can examine the instructor in place over the mannequin performing CPR just prior to the user actually practicing on the mannequin. In one embodiment, a sensor or sensors monitoring movement of the user can be monitored, and a subsequent display can be used to compare the monitored movement to the movement of the modeled instructor. Such sensors used to monitor movement of a person are known in the art and will not be described in detail herein. Other first aid procedures that can be displayed include proper use of a defibrillator, proper application of a splint to a broken limb, and treatment of a cut.

In another exemplary embodiment, a martial arts instruction program can be operated. In one exemplary program, an actual instructor, using a first portable computerized device, can control parameters of an augmented reality program superimposed upon a workout mat, wherein a token is imprinted or otherwise placed upon the mat, and a plurality of students can view operation of the program, under the control of the instructor, from various points of view controlled according to inputs disclosed herein. A pair of virtual participants can be displayed interacting with each other.

A number of other exemplary instruction programs can be operated. For example, a program providing instruction on how to assemble components of a computer can be operated, enabling a user to quickly unpack a new computer, make the required cable connections, and take advantage of various features of the computer. A program can be operated to show a tennis player a correct form or various popular forms for hitting a back-hand shot. A program can be operated to show how to chip a golf ball onto a green. A program can be operated to show a number of different sport techniques, including but not limited to a soccer kick; a football throw or kick; a baseball swinging, throwing, or catching technique; a jump shot or dribble in basketball; a swimming technique; a water rescue technique; a hockey technique; a figure skating routine; proper technique for lifting weights; an aerobics workout routine; downhill or cross country skiing; snowboarding or snow blading; roller blading; water skiing; parachuting technique; boxing; table tennis; water polo; lacrosse; wrestling; archery; target shooting; and rock climbing. A program can be operated to show automotive repair techniques, e.g., how to change oil for a particular model and year of car. A program can be operated to show a popular dance technique. A program can be operated to show a class of students a technique used for a medical, dental, or surgical procedure. A program can be operated to show a student proper technique in playing a guitar. In such an embodiment, a virtual guitar player and guitar can be displayed. In another example, a virtual violin, saxophone, or piano can be displayed. In another embodiment, only a guitar and a pair of virtual hands positioned upon the guitar can be displayed. In one embodiment intended to instruct children, an animated cartoon character can be displayed. A number of exemplary uses and instruction programs are envisioned, and the disclosure is not intended to be limited to the exemplary embodiments disclosed herein.

A token can be a two dimensional image printed upon a flat object. A token can be a three dimensional object or images printed upon a three dimensional object. A token can be a simple design, for example, printable upon a single sheet of paper. A token can be imprinted upon a decorative object. In an exemplary embodiment including a yoga mat, the mat can include a token for operating a three dimensional model instruction program as disclosed herein. Such an exemplary mat can further include exemplary images showing a user a number of yoga positions. An object printed with a token can be a sellable object, for example, wherein information on the object can enable a user to download and/or initiate a corresponding instruction program. In one exemplary embodiment, a token recognized by a user's device can act similarly to a QR code, automatically instructing the device to go to a particular webpage whereat an executable program can be executed or downloaded. End user license information for a program can be contained upon a printed object also acting as a token.
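A token that behaves like a QR code implies a lookup from a recognized token to a program and its download location. The following Python sketch is purely illustrative; the token identifiers, program names, and URLs are invented placeholders and not part of the disclosure.

```python
# A hedged sketch of a token acting like a QR code: once a token is recognized,
# the device looks up the associated instruction program and its download URL.
from typing import Optional

TOKEN_REGISTRY = {                                   # invented placeholder entries
    "yoga_mat_v1": {
        "program": "yoga_instruction",
        "download_url": "https://example.com/programs/yoga_instruction",
    },
    "cpr_mannequin_v1": {
        "program": "first_aid_cpr",
        "download_url": "https://example.com/programs/first_aid_cpr",
    },
}

def resolve_token(token_id: str) -> Optional[dict]:
    """Return the program entry for a recognized token, or None if unknown."""
    return TOKEN_REGISTRY.get(token_id)

entry = resolve_token("yoga_mat_v1")
if entry is not None:
    print(f"launching {entry['program']} from {entry['download_url']}")
```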

In accordance with various embodiments of the present disclosure, techniques are disclosed for presenting instructions or demonstrations of activities utilizing an augmented or virtual reality system indexed in three dimensions which allows a user to view an instruction or demonstration as it would appear in the user's actual environment. Instructions, characters, avatars, cartoon graphics, and other graphic displays are embodiments of a three dimensional instruction graphic as disclosed herein.

Referring now to the drawings, wherein the showings are for the purpose of illustrating certain exemplary embodiments only and not for the purpose of limiting the same, FIG. 1 illustrates a portable computerized device including a camera feature reading information from a token placed upon a surface. Portable computerized device 10 includes view-screen 15. An exemplary portable computerized device 10 further includes a processor, RAM memory, and storage memory in the form of a hard drive, flash memory, or other similar storage devices known in the art. Portable computerized device 10 can further be connected to a wireless communication network through cellular connection, wireless LAN connection, or other communication methods known in the art. Portable computerized device 10 further includes software, such as an operating system and applications, configured to monitor inputs, for example, in the form of touch inputs to a touch screen device, inputs to the camera device, and audio inputs, and to control outputs, for example, in the form of graphics, sounds, and communication signals over the wireless connection. View-screen 15 can include a touch screen input. Other exemplary devices use button inputs, trackball and button inputs, eye focus location sensors, or other methods known in the art. The camera device of portable computerized device 10 can include a lens and optical sensor according to digital cameras or smart-phones known in the art capable of capturing a visual image and translating it into a stored digital representation of the image. View 30 of the camera is illustrated. Any number of portable computerized devices can be utilized according to the methods disclosed herein, and the disclosure is not intended to be limited to the particular examples provided.

Token 20 includes a graphic design, symbol, word, or pattern 25 that can be identified by a portable computerized device 10 through its camera device input. An application upon the portable computerized device 10 interprets images captured by the camera device and identifies the token within the image. Based upon the size and orientation of the token and the graphics or symbols thereupon in the image, a spatial relationship of the token to the device capturing the image can be determined.

Graphical images displayed upon view-screen 15 can be based upon a three-dimensional model. Such models are well known in the art and can include simple geometric shapes or more complex models, for example, modeling the human form.

The application can include programming to create a graphical image upon view-screen 15 based upon manipulation of a model associated with token 20. Token 20 provides a reference point at which to anchor the graphical image and the orientation of the model upon which the image is based. According to one embodiment, the graphic based upon the model can be displayed upon the view-screen 15. In another embodiment, the image or a stream of images from the camera device can be displayed upon the view-screen 15, and the graphic based upon the model can, with the orientation based upon the sensed token, be superimposed over the images from the camera. In one embodiment, the graphic based upon the model can be located to partially or fully cover the token in the image.
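Anchoring the graphic to the token and superimposing it over the camera feed amounts to compositing a rendered image at the token's location in the frame. The Python sketch below is a minimal illustration under several assumptions: the instructor graphic has already been rendered with an alpha channel at the orientation implied by the token, the token's pixel location is known, and the graphic fits within the frame. None of these helpers come from the disclosure.

```python
# A minimal compositing sketch: alpha-blend a pre-rendered RGBA graphic onto an RGB
# camera frame at the token's location so the graphic covers the token (illustrative only).
import numpy as np

def superimpose(camera_frame: np.ndarray, rendered_rgba: np.ndarray,
                anchor_xy: tuple) -> np.ndarray:
    """Blend rendered_rgba onto camera_frame with its top-left corner at anchor_xy."""
    out = camera_frame.copy()
    h, w = rendered_rgba.shape[:2]
    x, y = anchor_xy                                   # assumed to keep the graphic inside the frame
    alpha = rendered_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * rendered_rgba[:, :, :3] + (1.0 - alpha) * region
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out

frame = np.zeros((720, 1280, 3), dtype=np.uint8)       # stand-in camera image
graphic = np.zeros((200, 100, 4), dtype=np.uint8)      # stand-in rendered instructor graphic
composited = superimpose(frame, graphic, anchor_xy=(600, 400))
```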

FIG. 2 illustrates a plurality of positions from which portable computerized devices can be located and view a token. Three portable computerized devices 110A, 110B, and 110C are illustrated at different locations with respect to token 100, and the camera devices of the portable computerized devices include respective views 115A, 115B, and 115C. Based upon identifying the token 100 and the graphic 105 thereupon, applications running upon each of portable computerized devices 110A, 110B, 110C can interpret a location of the token with respect to the portable computerized device and manipulate a point of view of a programmed model associated with the token to represent a virtual object or character oriented based upon how the portable computerized device is located with respect to the token. Graphic 105 is illustrated as an arrow, a graphic that can clearly indicate a direction in which the token is oriented. Further, based upon the perspective of the graphic as seen by the portable computerized device viewing the graphic, an inclination of the token with respect to the portable computerized device can be determined. Further, based upon a size of the graphic within the image, a distance of the token from the portable computerized device viewing the graphic can be determined. The graphic is illustrated as an arrow for clarity of example, but any graphic with a distinguishable orientation can be utilized upon token 100. Vivid colors and bright contrast in the graphic can be used to aid identification of the token and the graphic thereupon by the portable computerized device.
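Both estimates described above follow from simple image geometry. The Python sketch below shows, under a pinhole-camera assumption, a distance estimate from the apparent size of the graphic and an in-plane orientation estimate from the arrow's endpoints; the focal length, token width, and pixel coordinates are illustrative values, not values from the disclosure.

```python
# A hedged sketch of distance-from-size and orientation-from-arrow estimates
# under a pinhole-camera assumption; numbers are illustrative only.
import math

def distance_from_apparent_size(focal_length_px: float,
                                real_width_m: float,
                                apparent_width_px: float) -> float:
    """Pinhole model: distance = focal length * real size / size in the image."""
    return focal_length_px * real_width_m / apparent_width_px

def arrow_orientation_deg(tail_xy: tuple, tip_xy: tuple) -> float:
    """In-plane angle of the arrow graphic in the image, in degrees."""
    dx, dy = tip_xy[0] - tail_xy[0], tip_xy[1] - tail_xy[1]
    return math.degrees(math.atan2(dy, dx))

print(distance_from_apparent_size(focal_length_px=1000.0,
                                  real_width_m=0.60,
                                  apparent_width_px=240.0))    # -> 2.5 meters
print(arrow_orientation_deg((400.0, 500.0), (600.0, 450.0)))   # arrow pointing up and to the right
```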

FIG. 3 illustrates an exemplary yoga mat utilized as a token, with two exemplary portable computerized devices illustrating a virtual yoga instructor with an orientation based upon the location of the portable computerized device with respect to the token. Yoga mat 200 includes graphics 205, which indicate an orientation of the token 200. Illustrative graphics 206 can also be included upon the mat, in addition to the graphics needed to provide orientation and identification of the token or as part of the graphics providing orientation and identification of the token. According to one embodiment, each of the illustrative graphics can include a border with a particular shape, such as a pentagon, which aids a portable computerized device in quickly and robustly identifying and tracking the graphic upon the token. Portable computerized devices 210A and 210B are illustrated with different locations and orientations with respect to token 200. Portable computerized device 210A illustrates a virtual yoga instructor 220A located upon an image of token 200, represented upon the view-screen as image 215A. Portable computerized device 210B illustrates a virtual yoga instructor 220B located upon an image of token 200, represented upon the view-screen as image 215B. Based upon the different locations of the portable computerized devices and a common model operating upon both devices, the yoga instructor 220A on portable computerized device 210A is projected facing to the left, while the yoga instructor 220B on portable computerized device 210B is illustrated facing the viewer. Each of the portable computerized devices can be moved about the token 200 as indicated by arrow 225, and the orientation of the virtual instructor upon the view-screen will change with the changed location of the portable computerized device.

Portable computerized devices 210A and 210B are illustrated displaying the same yoga instructor based upon a same model with a same anchored orientation with respect to the token 200. However, it will be appreciated that the user of a particular portable computerized device can change any of a number of parameters. For example, a gender of the instructor illustrated can be changed, a size of the instructor graphic can be changed, or a baseline rotation of the instructor can be changed based upon the preferences of a particular viewer. According to one embodiment, a number of quick select buttons can be presented along an edge of the view-screen of the portable computerized device for easy manipulation of the projected graphics. In one example, an orientation of the illustrated character can be toggled 180 degrees, so that the person watching the instructor can quickly change whether the front or the back of the character is being viewed. FIG. 3 illustrates a yoga instructor that can be viewed in three dimensions based upon an orientation of a token to the device of the viewer. Other embodiments are envisioned, such as martial arts instruction or first aid instruction, and the disclosure is not intended to be limited to the particular examples provided herein.

FIG. 4 illustrates operation of an exemplary three dimensional model instruction program operating a first aid program. Device 310 includes display 320 and a camera device capturing view 315 including a token as disclosed herein. Based upon input received from view 315, device 310 displays a first virtual character 330 providing first aid to second virtual character 340. In the exemplary embodiment of FIG. 4, character 330 is applying a splint 335 to the arm of character 340. A number of first aid instructions are envisioned, and the disclosure is not intended to be limited to the exemplary embodiments disclosed herein.

FIG. 5 illustrates operation of an exemplary three dimensional model instruction program operating a martial arts program. Device 410 includes display 420. Character 430 representing an instructor is illustrated. Character 440 is illustrated showing user motions captured through a motion capture sensor known in the art. Device 410 can show the instructor's and the user's motions in slow motion or paused, permitting the user to compare the graphics and learn from the comparison. Input graphics permitting a user to interact with a touch-screen device are illustrated, including button 450 prompting the user to play the graphic motions, button 452 prompting the user to pause the graphic motions, button 454 prompting the user to request to see the instructor go through the instruction again, and button 456 prompting the user to record another attempt at the instructed motion. In another embodiment, a virtual character executing a block of the illustrated motion could be displayed. A number of teaching methods and interactive controls are envisioned for use with the instructions disclosed herein, and the disclosure is not intended to be limited to the examples provided herein.

FIG. 6 illustrates an exemplary three dimensional model instruction program illustrating instructions to install a cable to a computer. Device 510 is illustrated including display 520 and a camera device capturing view 525. Laptop computer 530 is illustrated proximate to and within the view of device 510. In one exemplary embodiment, the model of computer 530 can be entered, and an image of computer 530 can be referenced in a database, such that computer 530 can act as a token for a program operated on device 510. Either a graphic representing computer 530 in a virtual reality program or an image of computer 530 can be displayed as graphic 540. Virtual hand 550 is illustrated inserting a printer cable into a particular port located upon the computer. A number of instructional programs showing a user how to accomplish physical tasks are envisioned, and the disclosure is not intended to be limited to the examples provided herein.

FIG. 7 is a schematic illustrating an exemplary portable computerized device in communication with an exemplary three dimensional model instruction server. Portable computerized device 610 is illustrated, including message 630 displayed upon a graphical user interface 620 of device 610. Device 610 can include a camera device, and an image or a series of images creating a video feed can be displayed including an object displaying a token image. Device 610 is an exemplary portable computerized device including input devices configured to gather information and a processor configured to make determinations regarding data from the input devices. Server 650 is illustrated including a remote computerized system with modules operating to process information gathered from device 610 and enable operation of a three dimensional model. Server 650 and device 610 are in communication through exemplary wireless communications network 640. Message 630 illustrates an embodiment whereby an image capturing a token initiates a sequence for downloading the instruction program to the device.

FIG. 8 is a schematic illustrating an exemplary three dimensional model instruction server. In the illustrated embodiment, the server 650 may include a processing device 720, a communication device 710, and a memory device 730.

The processing device 720 can include memory, e.g., read only memory (ROM) and random access memory (RAM), storing processor-executable instructions and one or more processors that execute the processor-executable instructions. In embodiments where the processing device 720 includes two or more processors, the processors can operate in a parallel or distributed manner. In the illustrative embodiment, the processing device 720 executes one or more of a video input module 740, a 3D model rendering module 750, and an instruction module 760.

The communication device 710 is a device that allows the server 650 to communicate with another device, e.g., a portable computerized device 610 through a wireless communication network connection. The communication device 710 can include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication.

The memory device 730 is a device that stores data generated or received by the server 650. The memory device 730 can include, but is not limited to, a hard disc drive, an optical disc drive, and/or a flash memory drive. Further, the memory device 730 may be distributed and located at multiple locations. The memory device 730 is accessible to the processing device 720. In some embodiments, the memory device 730 includes a graphics database 780 and an instruction database 790.

Graphics database 780 can include files, libraries, and other tools facilitating operation of a 3D model. Instruction database 790 can include information enabling operation of an instruction program, for example, including data enabling operation of twenty yoga exercises.

The video input module 740 can monitor information provided by device 610 over network 640, for example, including a series of images showing a token within a view of the device. Module 740 can include programming to process the images, recognize the token within the images, determine a distance to and orientation of the token, and process the information as an input value or values to a 3D model.

Instruction module 760 includes programming to execute an instruction program, including operation of rules, routines, lesson plans, and other relevant information required to display an instruction program. Module 760 can access instruction database 790 to enable use of information stored on the database.

3D model rendering module 750 receives data from modules 740 and 760 and database 780, and module 750 provides graphics, images, or instructions enabling display of a three dimensional model instruction display upon device 610.
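How modules 740, 750, and 760 might exchange data can be sketched as below. This is a schematic illustration only, reusing the pose structure assumed in the earlier sketches; the lesson content, class names, and rendering output format are invented placeholders rather than elements of the disclosure.

```python
# A hedged sketch of data flow among a video input result, an instruction module,
# and a rendering module on the server side (illustrative only).
from dataclasses import dataclass

@dataclass
class TokenPose:                              # output of a video-input step like module 740
    distance_m: float
    yaw_deg: float
    pitch_deg: float

class InstructionModule:                      # role of module 760: which lesson step to show
    def __init__(self, steps):
        self.steps, self.index = steps, 0

    def current_step(self) -> str:
        return self.steps[self.index]

    def advance(self):
        self.index = min(self.index + 1, len(self.steps) - 1)

class ModelRenderingModule:                   # role of module 750: turn pose + step into display data
    def render(self, pose: TokenPose, step: str) -> dict:
        # A real implementation would drive a 3D engine; this just describes the frame.
        return {"step": step,
                "viewpoint_yaw_deg": pose.yaw_deg,
                "scale": 1.0 / pose.distance_m}

lesson = InstructionModule(["mountain pose", "downward dog"])
renderer = ModelRenderingModule()
frame_description = renderer.render(TokenPose(2.0, 45.0, 10.0), lesson.current_step())
```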

FIG. 9 is a schematic illustrating an exemplary portable computerized device configured to implement processes disclosed herein. Device 610 includes a processing device 810, a user interface 820, a communication device 860, a camera 830, and a memory device 840.

The processing device 810 can include memory, e.g., read only memory (ROM) and random access memory (RAM), storing processor-executable instructions and one or more processors that execute the processor-executable instructions. In embodiments where the processing device 810 includes two or more processors, the processors can operate in a parallel or distributed manner. In the illustrative embodiment, the processing device 810 can execute the operating system of the portable computerized device. In the illustrative embodiment, the processing device 810 also executes a video input module 850, a user input module 870, and a graphics module 880, which are described in greater detail below.

The user interface 820 is a device that allows a user to interact with the portable computerized device. While one user interface 820 is shown, the term “user interface” can include, but is not limited to, a touch screen, a physical keyboard, a mouse, a microphone, and/or a speaker. The communication device 860 is a device that allows the portable computerized device to communicate with another device, e.g., server 650. The communication device 860 can include one or more wireless transceivers for performing wireless communication and/or one or more communication ports for performing wired communication. The memory device 840 is a device that stores data generated or received by the portable computerized device. The memory device 840 can include, but is not limited to, a hard disc drive, an optical disc drive, and/or a flash memory drive.

The camera 830 is a digital camera that captures a digital photograph. The camera 830 receives an instruction to capture an image and captures an image of a view proximate to the camera. The digital photograph can be a bitmap, a JPEG, a GIF, or any other suitably formatted file. The camera 830 can receive the instruction to capture the image from the processing device 810 and can output the digital photograph to the processing device 810.

Video input module 850 monitors data from camera device 830, which can include an image of a token. User input module 870 can monitor input from the user related to manipulation of the three dimensional model being operated. Graphics module 880 can receive data from server 650 and provide a display upon device 610 related to operation of the model and the related instruction program.
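The device-side flow among modules 850, 870, and 880 can be sketched similarly. The following Python fragment is illustrative only: the controller class, the callable standing in for the server round trip, and the returned dictionary are placeholders, since the disclosure does not specify a wire protocol.

```python
# A minimal sketch of the device-side flow: forward camera frames and user viewpoint
# adjustments to the server, then hand the result to a graphics step (illustrative only).
from typing import Callable

class DeviceController:
    def __init__(self, send_to_server: Callable[[bytes, float], dict]):
        self.send_to_server = send_to_server          # placeholder for the device/server round trip
        self.user_yaw_offset_deg = 0.0                # user input module state

    def on_user_rotate(self, delta_deg: float):
        self.user_yaw_offset_deg = (self.user_yaw_offset_deg + delta_deg) % 360.0

    def on_camera_frame(self, frame: bytes) -> dict:
        # Video input step: forward the frame plus the current user adjustment, then
        # return the server's rendering data for the graphics step to display.
        return self.send_to_server(frame, self.user_yaw_offset_deg)

controller = DeviceController(lambda frame, yaw: {"viewpoint_yaw_deg": yaw})
controller.on_user_rotate(90.0)
display_data = controller.on_camera_frame(b"raw image bytes")
```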

Embodiments in accordance with the present disclosure may be embodied as a device, process, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.

Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages.

Embodiments may also be implemented in cloud computing environments. In this description and the following claims, “cloud computing” may be defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned via virtualization and released with minimal management effort or service provider interaction, and then scaled accordingly. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”), etc.), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).

FIGS. 7-9 illustrate an exemplary embodiment whereby a process to monitor location data related to a token is used to generate a three dimensional model instruction graphic. Location data can include a distance, an orientation, and other information that can be determined based upon an image of a token. Tasks are split between the exemplary device and server according to one embodiment of the disclosure. However, other embodiments are envisioned. A portable computerized device can perform all of the programming necessary to operate the processes disclosed herein, and the 3D model can be operated on either the server or the device.

A computerized system to display a three dimensional yoga instruction graphic to a plurality of users employing processes disclosed herein is also disclosed. The system includes a token and, for each of the plurality of users, a portable computerized device configured to display the three dimensional yoga instruction graphic. Each portable computerized device includes a camera device capturing an image, the image comprising location data related to the token. The portable computerized devices each operate a three dimensional yoga instruction model based upon the location data, and display the three dimensional yoga instruction graphic based upon the three dimensional yoga instruction model. In an alternate embodiment, a remote server could perform some tasks for each of the devices, such as operating the three dimensional model, and the devices can each display individual outputs based upon communication with the server.

Throughout the disclosure, a camera device is disclosed as a device for localizing a portable computerized device to a token. Other devices and processes for providing location data to the device are envisioned, for example, utilizing radio frequency identification (RFID) chips, a 3D map device, or inertial sensors within the device as spatial inputs to a model. Operation of these devices is known in the art and will not be disclosed in detail herein. Any of these alternative devices can be used in isolation or in cooperation with each other or with a camera device to provide or improve upon a spatial input to a three dimensional model.
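One simple way camera-based and inertial inputs could cooperate is a complementary blend: integrate an inertial yaw rate between camera frames and correct toward the camera estimate whenever the token is visible. The Python sketch below is a deliberately naive illustration (for brevity it ignores wrap-around when blending angles) and is not a fusion method specified by the disclosure.

```python
# A hedged sketch of cooperating spatial inputs: inertial updates between camera frames,
# corrected toward the camera's token-derived yaw when the token is visible.
class SpatialInputFusion:
    def __init__(self, blend: float = 0.9):
        self.yaw_deg = 0.0
        self.blend = blend                          # weight given to the camera estimate

    def on_inertial(self, yaw_rate_deg_s: float, dt_s: float):
        # Integrate a gyroscope-style yaw rate between camera updates.
        self.yaw_deg = (self.yaw_deg + yaw_rate_deg_s * dt_s) % 360.0

    def on_camera(self, camera_yaw_deg: float):
        # Naive blend toward the absolute camera estimate (ignores 0/360 wrap-around).
        self.yaw_deg = (self.blend * camera_yaw_deg
                        + (1.0 - self.blend) * self.yaw_deg) % 360.0

fusion = SpatialInputFusion()
fusion.on_inertial(yaw_rate_deg_s=30.0, dt_s=0.1)   # drift accumulated between camera frames
fusion.on_camera(camera_yaw_deg=5.0)                # token visible: snap toward the camera estimate
```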

The above description of illustrated examples of the present disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific embodiments of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible without departing from the broader spirit and scope of the present disclosure. Indeed, it is appreciated that the specific example values, times, etc., are provided for explanation purposes and that other values may also be employed in other embodiments and examples in accordance with the teachings of the present disclosure.

Claims

1. A portable computerized device configured to display a three dimensional instruction graphic, the device comprising:

a camera device capturing an image, the image comprising location data related to a token; and
wherein the portable computerized device is configured to display the three dimensional instruction graphic based upon the location data.

2. The device of claim 1, wherein the three dimensional instruction graphic comprises a yoga instruction program.

3. The device of claim 2, wherein the token comprises a yoga mat.

4. The device of claim 3, wherein the yoga mat comprises illustrations of yoga exercises.

5. The device of claim 1, wherein the three dimensional instruction graphic comprises a first aid instruction program.

6. The device of claim 5, wherein the first aid instruction program comprises a cardiopulmonary resuscitation instruction program.

7. The device of claim 1, wherein the three dimensional instruction graphic comprises a martial arts instruction program.

8. The device of claim 1, wherein the three dimensional instruction graphic comprises an assembly instruction program, illustrating to a user a process to install a newly purchased device.

9. The device of claim 1, wherein the three dimensional instruction graphic comprises a sports technique instruction program.

10. The device of claim 9, wherein the sports technique instruction program comprises a tennis instruction program.

11. The device of claim 9, wherein the sports technique instruction program comprises a golf instruction program.

12. The device of claim 1, wherein the three dimensional instruction graphic comprises an automotive repair instruction program.

13. The device of claim 1, wherein the three dimensional instruction graphic comprises a dance instruction program.

14. The device of claim 1, wherein the three dimensional instruction graphic comprises a music instruction program.

15. The device of claim 1, wherein the three dimensional instruction graphic comprises a medical procedure instruction program.

16. A computerized system to display a three dimensional yoga instruction graphic to a plurality of users, the system comprising:

a token; and
for each of the plurality of users, a portable computerized device: comprising a device providing spatial inputs based upon a location of the token; operating a three dimensional yoga instruction model based upon the spatial inputs; and displaying the three dimensional yoga instruction graphic based upon the three dimensional yoga instruction model.

17. The system of claim 16, wherein the device providing spatial inputs comprises a camera device capturing an image, the image comprising location data related to the token.

18. The system of claim 16, wherein the device providing spatial inputs comprises a location device selected from a radio frequency chip device, a three dimensional map device, and an inertial sensor device.

19. A computerized process for displaying a three dimensional instruction graphic, the process comprising:

within a computerized processor: operating a three dimensional instruction model; monitoring an image providing location data for a token; and utilizing the location data as spatial inputs to the three dimensional instruction model; and
upon a computerized display device, displaying the three dimensional instruction graphic based upon the three dimensional instruction model.

20. The computerized process of claim 19, further comprising capturing the image with the computerized display device comprising a portable computerized device; and

wherein the three dimensional instruction graphic changes as the portable computerized device is moved relative to the token.
Patent History
Publication number: 20140078137
Type: Application
Filed: Sep 13, 2013
Publication Date: Mar 20, 2014
Inventors: Nagabhushanam Peddi (Ann Arbor, MI), Stephen Phillip Alvey (Ann Arbor, MI)
Application Number: 14/026,870
Classifications
Current U.S. Class: Three-dimension (345/419)
International Classification: G06T 15/00 (20060101);