Procedurally expressing graphic objects for web pages

- Apple

A graphics object can be expressed using procedural language embedded in a markup language document. In an embodiment, a drawing space is specified in markup language. A drawing command to arbitrarily draw a graphics object into the drawing space is specified in procedural language. Interpretation of the markup and procedural language commands results in the rendering of the arbitrary graphics object. In another embodiment, there is a browser comprising a rendering engine, an interpreter, and a parser. The rendering engine is configured to interpret a markup language instruction that specifies a drawing space as well as drawing commands in procedural language for drawing an arbitrary graphical object into the drawing space. The parser can then parse the drawing commands and convert them into an execution tree of tree objects.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 13/529,579 filed Jun. 21, 2012, which is a continuation of U.S. patent application Ser. No. 11/144,384 filed on Jun. 2, 2005, now issued as U.S. Pat. No. 8,239,749, which claims the benefit of U.S. Provisional Patent Application No. 60/583,125 filed on Jun. 25, 2004, each of which is hereby incorporated by reference in its entirety.

BACKGROUND

Field of the Invention

The present disclosure relates in general to computer graphics and in particular to procedurally expressing arbitrary graphic objects in markup language documents.

Background of the Invention

Web pages are created using markup languages such as HTML (HyperText Markup Language), XHTML (Extensible HyperText Markup Language), and SGML (Standard Generalized Markup Language). Designed to be interpreted by different browsers, markup languages allow for a diversity of content to be expressed in a relatively simple and static code structure. While powerful, markup languages are often not well-suited for supporting dynamic, scalable, and complex graphics. As a result, most website images comprise rasterized graphic objects using such formats as .GIF or .JPEG.

Graphic formats such as vector graphics offer a number of advantages over rasterized graphics. Vector graphic images are generated by interpreting a series of vectors, or path descriptions, and stroking or filling those paths. The resulting images are fully resolution-independent and scalable and therefore, unlike rasterized images, can be scaled up or enlarged while maintaining the same quality. Formats for three-dimensional graphics like OpenGL and Direct3D as well as other formats currently offered and under development are similarly procedural in nature, and thus are not naturally described in markup language. In addition to being scalable, vector graphics and related graphic formats also allow for dynamic rendering. This capability allows for interactivity and also permits equivalent files to be more compact since graphical images and scenes are generated just prior to their display.

These and other benefits make vector graphics, OpenGL, and other formats well-suited for use in web pages. However, existing approaches to providing such arbitrary formats on the web have significant drawbacks. Flash vector graphic files, for instance, are bulky and typically cannot be accessed unless the user downloads a plug-in containing a rendering engine equipped with special capabilities. Previous attempts to create a 3D markup language, notably VRML (Virtual Reality Modeling Language), have as yet been unsuccessful. In addition, many graphics concepts such as iteratively drawing paths are more naturally described in procedural language than through a markup interface such as that used by VRML or SVG. Although adding procedural commands, scripted for instance in JavaScript, to web pages may enable the dynamic manipulation of images, it still does not allow for the drawing of arbitrary images into a web page or confer the other advantages associated with arbitrary graphic formats. Thus, what is needed is a way to leverage existing graphics and rendering capabilities using a procedural interface to create graphics objects for use in websites.

SUMMARY OF THE INVENTION

The present invention relates to a novel approach to creating graphics objects for website applications. As used throughout this disclosure, the term “arbitrary graphics object” or AGO refers to any graphical output rendered procedurally, including, but not limited to, a two- or three-dimensional image or scene, produced based on the execution of procedural commands. The execution of the commands may be carried out in a graphics context that supports vector graphics, Scalable Vector Graphics, OpenGL, or other types of existing and emerging graphics platforms, or may also utilize more conventional graphics formats such as PostScript, TIFF, PDF, PICT, BMP, WMF, GIF, JPEG, PNG, and EPS.

In an embodiment, a drawing area into which anything can be drawn using drawing commands is described in a markup language. The AGO is then expressed in the form of arbitrary drawing commands, such as those provided in vector graphics, to draw into the drawing area. According to one embodiment of the invention, a markup language, such as HTML, is used to specify a graphical element, referred to throughout the disclosure as a “canvas.” A procedural language such as JavaScript is used to draw into that graphical element. Also created is a context object that can render into the canvas using a paintbrush-like metaphor. Any graphics language can be used to specify the graphical content to be drawn within the element or canvas; such language can include vector graphics commands such as pathing, stroking, and filling. The canvas itself may also be manipulated in terms of other markup constructs such as Cascading Style Sheets (CSS). During an event loop, the procedural commands are translated into graphics code, which is executed to dynamically generate the graphics object. The object is then composited for display. This series of steps can be used to arbitrarily render scenes and images on the fly using graphics concepts such as masking, pathing, and transparency. The resulting arbitrary graphics object may be resolution-independent and fully scalable, often consumes less space than conventional graphics elements, and can utilize existing and emerging graphics and rendering capabilities.

Although reference throughout this disclosure is made to particular operating platforms, graphics, web browsers, and such technologies, the methods and systems of this disclosure may be advantageously implemented using a variety of existing and emerging graphics, browser, and related technologies in a variety of different operating environments.

In an embodiment, an arbitrary graphics object is expressed in computer code. A drawing space is specified in a markup language, and a drawing command is specified in a procedural language to draw the arbitrary graphics object into the drawing space. In another embodiment, there is a computer program product comprising instructions for specifying a graphics object. The instructions include a command in markup language for defining a drawing space, and a command in scripting language for drawing the arbitrary graphic object. In an embodiment, there is also an instruction for specifying a command in the procedural language to retrieve the drawing space.

In another embodiment, a graphics object can be expressed using an interactive user interface. In response to input from the user, a markup language command that specifies a height dimension and a width dimension of a drawing space is coded. In addition, scripting language commands are coded for arbitrarily drawing the graphics object in the drawing space, responsive to user input representing the object.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a high-level view of the operating environment in which, and the elements with which, a graphics object can be expressed in accordance with an embodiment of the invention.

FIG. 2 depicts a flow chart of steps to code a sample AGO into a markup page.

FIG. 3 illustrates the steps performed by a browser to create an AGO in a website during the process of rendering a markup language page.

FIG. 4 depicts a sample vector graphics image generated using the techniques described herein.

FIG. 5 is a screen shot of a user interface that could be used to create a graphics object.

FIG. 6 is a flow chart of the steps for painting an image.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Operating Environment

FIG. 1 depicts a high-level view of the operating environment in which an arbitrary graphics object can be procedurally expressed in accordance with an embodiment of the invention. Shown in FIG. 1 are a browser 110, an arbitrary graphics library (AGL) 120, a markup language page 130, and a webpage 140. The browser 110 is a conventional or emerging browser such as a Safari, Netscape, Internet Explorer, or Mozilla browser, and contains a rendering engine 112, an interpreter 116, and a parser 118. The AGL 120 is a library of commands associated with an arbitrary graphics (AG) format such as vector graphics, OpenGL, or another graphics library exposed through an application programming interface. The markup language page 130, to be interpreted by the browser 110, contains a description of an arbitrary graphics object (AGO) and is written in any conventional or emerging markup language such as HTML, XHTML, or XML (Extensible Markup Language).

Contained in the page is a markup language tag identifying the AGO and commands written in a procedural language (PL) such as JavaScript, Visual Basic, or Python, that describe how the AGO is to be generated. The page may also contain formatting or other commands in CSS or other markup language construct to describe how the AGO is to be rendered. The browser 110 executes the markup language page, using in part calls from the AGL 120, and produces the web page containing the AGO. In an embodiment, the AGO comprises additional content described in markup language. The content may comprise any text, web content, a graphics element, or non-graphical content. This content may be described in markup or procedural language, as discussed below in reference to FIG. 2. The browser 110 executes the language describing the content as part of the markup page, retrieving or passing in the content as needed.

As one of skill in the art would know, one or more elements of FIG. 1, including the browser 110 and markup language page 130, may be displayed, coded, created, or processed on one or more hardware elements. Similarly, one or more of the steps and methods described in this specification may be carried out using such elements. Hardware components such as a display device, a processor, and an input device such as a keyboard or mouse, including their operation and interactions with one another and with a central processing unit of the personal computer, are well known in the art of computer systems and therefore are not depicted here. In addition, although the methods described herein are primarily disclosed in the context of a browser, in various alternatives they may be carried out by various computer or other applications, including an application for a desktop, laptop, or handheld computer or handheld device, a game application, or a graphics application. In another embodiment, an application that can interpret one or more markup languages such as HTML, XHTML, and XML may be used.

As described above, the browser 110 includes a rendering engine 112, an interpreter 116, and a parser 118. The rendering engine 112 is built on conventional or emerging rendering and layout engine technology, such as that used by the Gecko engine of Mozilla or WebKit on Mac OS X, and interprets and renders the markup page. The rendering engine 112 includes an interpreter 116 for interpreting the PL and for interpreting the markup instructions contained on the markup page 130 into an intermediate form for execution. Specifically, the interpreter 116 can translate the PL code describing the AGO into AG commands from the AG library 120 in order to create and render the AGO. The interpreter 116 may accomplish this translation with reference to a mapping that correlates PL to AG code. The browser 110 also contains a parser 118 that will parse the markup source and create an element for the AGO in an object tree. The rendering engine 112 interfaces with the AGL 120 in order to output the AGO. The AGL 120 is a graphics library that contains graphics calls for generating 2-D or 3-D graphics images. For instance, in the Mac OS X environment, the AGL could comprise the CoreGraphics library. On the other hand, if a Mozilla browser is being used, the AGL could comprise a library of calls to an abstract, platform-independent, vector graphics language for doing low-level rendering operations. Other exemplary graphics languages include GDI on Windows. The AGL 120 may reside in the operating environment of the browser and/or may be connected to the browser through an abstraction layer of the browser. The AGL 120 supplies calls to the browser that can then be interpreted by the interpreter to generate a 2-D or 3-D image, scene, or other graphics object. How the browser 110 and AGL execute the markup language page 130 to generate the web page output containing the AGO is described in greater detail with reference to FIGS. 3 and 4 below.

Expression of an AGO

As described above, the AGO can be expressed in both markup and procedural language. FIG. 2 depicts a flow chart of steps to code a sample AGO into a markup page. At a high level, there are four steps in this process. The first is to specify 210 a markup tag for the AGO that defines a two- or three-dimensional graphical space for the AGO, referred to herein as a canvas. In an embodiment, the canvas could potentially be any graphical shape of any dimensions. It could also comprise a bitmap or mask. The markup tag describes the width, height, and position in the markup language document of the canvas element. The second is to create a command in procedural language to retrieve 220 the canvas element. From the canvas element a drawing object, known as a context object, is retrieved 230 to perform the drawing functions associated with the AGO. Once creation of the canvas and retrieval of a context object have been specified, the last step is to code 240 drawing commands to create the AGO. In the case of an exemplary 2-D image, for instance, the script specifies a color to be used to draw the outline of the image, then adds lines or curves associated with the image, and then strokes and fills to generate the image.
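At the code level, these four steps might look like the following minimal sketch (the element id and the particular drawing commands are illustrative; the getContext parameter matches the 2D context examples given later in this disclosure):

    <canvas id="mycanvas" width="100" height="100"></canvas>
    <script>
        var canvas = document.getElementById("mycanvas"); // step 220: retrieve the canvas element
        var context = canvas.getContext("2d");            // step 230: retrieve the context object
        context.strokeStyle = "#000000";                  // step 240: drawing commands for the AGO
        context.beginPath();
        context.moveTo(10, 10);
        context.lineTo(90, 90);
        context.stroke();
    </script>

The canvas tag itself is the markup tag of step 210.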

In an embodiment, the resulting AGO comprises additional graphical, textual, web, or other content described in markup language. The method described above can be modified in order to include this content in at least two ways. In one embodiment, markup language can be embedded inside the canvas element. The step of specifying the canvas element could include specifying child attributes of the canvas element that are associated with the additional content. This could be accomplished using code resembling:

<canvas id='mycanvas' width='100' height='100'>
  <div>additional content</div>
</canvas>

In another embodiment, the additional content is added using procedural commands. A method for passing a DOM document object, document fragment, or other content object, for instance, to be rendered inside of the canvas could be defined. A command such as:
    • document.getElementById('mycanvas').getContext("2d").drawDocumentFragment(some_frag, x, y, width, height)
      could be used. As one of skill in the art would know, additional content may also be contained in the markup page separately from the canvas element or the drawing commands associated therewith; however, coding this content in the canvas element has several advantages. These include the ability to procedurally call the content and to define the AGO as a discrete series of canvas commands.

The steps described above could be implemented by directly coding the commands into a scripting language using any markup or text editor. Alternatively, these steps could also be accomplished through use of a graphics toolkit editor. A toolkit editor including a programming or coding engine could translate user inputs into a markup tag specifying the canvas of a certain size and dimension. It could also automatically code retrieval of a context object to carry out the drawing commands. The toolkit could also include a set of pre-generated arbitrary graphic image objects that could be added to the canvas using drag-and-drop functionality. When the pre-generated objects were added, for instance, the toolkit could specify procedural commands to represent the objects. A toolkit could also include various interfaces to represent controls for the management of various parameters.

For instance, a user could use a graphical interface to designate a drawing space with a width and height dimension using any conventional method, for instance by selecting an image of or mathematically defining a shape such as a rectangle, square, or circle. This input would result in the coding of a markup language command for specifying the drawing space. Once a user then specifies an object to be put onto the drawing space, for instance by pre-selecting a dynamic object such as, for example, a clock or billowing clouds, scripting language commands for arbitrarily drawing the graphics object in the drawing space are coded, in an embodiment, by retrieving a drawing object to draw the graphics object.

A screenshot of one tool for managing several parameters to draw an arbitrary graphics object is illustrated in FIG. 5. Using the interface of FIG. 5, keyboard and mouse commands entered by the user can be used to change the background colors, control the distance of the offset and the angle of the shadow in the images, alter the blur radius and global alpha channel, move the canvas or drawing space, and animate the images.

Each of the steps of FIG. 2 is described in greater detail below with reference to an embodiment of the invention. As shown, the first step is to specify 210 the canvas element. The canvas element represents a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly. When authors use the canvas element, they also provide content that, when presented to the user, conveys essentially the same function or purpose as the bitmap canvas. This content may be placed as content of the canvas element.

The canvas element may be defined by way of a markup language tag that is included in a markup language page to specify where the user wants drawing to occur. Height and width attributes are defined to control the size of the coordinate space, and in the case of a three-dimensional space, a length dimension is also specified. The values can be expressed either as a fixed number of pixels or as a percentage of the window height. An additional id attribute may also be included that specifies a unique value of a canvas object identifier. In an embodiment, the tag may be placed anywhere in the markup language page. More than one canvas may be included in a single web page or widget as long as the id attribute of each is unique. For example, to define a canvas, code such as the following could be used:

<body>
  <canvas id="MyCanvas" width="100" height="100" style="position:absolute; left:0px; top:0px; z-index:1"/>
</body>

Once the canvas element has been specified, a command in procedural language is coded to retrieve 220 the canvas element. The canvas attribute returns the canvas element that the context paints on. To draw on the canvas element, in an embodiment, authors first obtain a reference to a context using a getContext method of the canvas element, described in greater detail below. Any of a number of two- and three-dimensional contexts may be defined and used with a getContext method. When the getContext method of a canvas element is invoked, a drawing object known as a context object is returned 230.

In an embodiment, a getContext method may be used to retrieve a 2D context object. In an embodiment, the procedural language is JavaScript, and the 2D context object manages the graphics state information for the canvas and exposes a set of methods that can be called from JavaScript code to draw onto the canvas. To obtain an instance of the 2D context object for a particular canvas, the getContext method of the canvas object is called with the string "2d" as a parameter. The following example shows part of a JavaScript function to handle the drawing for a canvas. The function uses the Document Object Model (DOM) to obtain the canvas object and then calls the getContext method to get the 2D context object for the canvas.

function MyJavaScriptFunction() {
    var canvas = document.getElementById("MyCanvas");
    var context = canvas.getContext("2d");
    // Draw content here ...
}

In this example, the body of the web page would include a canvas tag whose id attribute was set to “MyCanvas”. A separate 2D context object can be obtained for each of multiple canvases on a webpage.
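For instance, assuming a page containing two canvas tags with the hypothetical ids "clock" and "chart", each canvas would be drawn through its own context object (a sketch):

    var clockContext = document.getElementById("clock").getContext("2d");
    var chartContext = document.getElementById("chart").getContext("2d");
    clockContext.fillStyle = "#0000FF"; // affects only the "clock" canvas
    chartContext.fillStyle = "#00FF00"; // the "chart" canvas keeps its own graphics state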

In an embodiment, each canvas maintains a stack of graphics states. A save method may be defined that saves the current graphics state to the top of the graphics state stack. In an embodiment, the following graphics state parameters are saved when the method is called: the transformation matrix, the current clip region, and the current values of several attributes, including stroke style (strokeStyle), fill style (fillStyle), alpha value (globalAlpha), the line width (lineWidth), the line cap (lineCap), the line join (lineJoin), the miter limit (miterLimit), and shadow values (shadowOffsetX, shadowOffsetY, shadowBlur, shadowColor). To restore a drawing environment to a previously saved state, a restore method may be specified. When this method is called, the canvas removes the most recently saved graphics state from the top of the stack and uses that state's saved settings for the current graphics state.

Using these methods, the following exemplary set of steps could be used to paint a blue shape, then a green shape, then a blue shape, by saving and restoring the graphics state.

    • 1. Modify the graphics state by changing the fill color to blue.
    • 2. Save the graphics state.
    • 3. Fill a shape—the shape is painted with blue.
    • 4. Set the fill color to green.
    • 5. Fill a shape—the shape is painted with green.
    • 6. Restore the graphics state.
    • 7. Fill a shape—because the graphics state has been restored to the state at the time it was previously saved, the shape is painted blue.
      In the embodiment described, not all aspects of the current drawing environment are elements of the saved graphics state. For example, the current path is not saved when the save method is called.
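Expressed in JavaScript against a 2D context object, the seven steps above might read as follows (a sketch; the fillRect method, described below, fills a rectangle, and the coordinates are arbitrary):

    context.fillStyle = "#0000FF";     // 1. change the fill color to blue
    context.save();                    // 2. save the graphics state
    context.fillRect(10, 10, 40, 40);  // 3. shape is painted blue
    context.fillStyle = "#00FF00";     // 4. set the fill color to green
    context.fillRect(60, 10, 40, 40);  // 5. shape is painted green
    context.restore();                 // 6. restore the saved graphics state
    context.fillRect(110, 10, 40, 40); // 7. shape is painted blue again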

According to an embodiment of the invention, objects drawn can be transformed using various methods. The current transformation matrix (CTM) specifies the mapping from device-independent user space coordinates to a device space. By modifying the current transformation matrix, objects may be modified, for instance scaled, translated, or rotated. In an embodiment, in order to transform an object in a graphics context, the coordinate space of the context must be transformed by calling a method prior to drawing the object. For example, to rotate an image, a rotate method is called to rotate the coordinate space of the context before drawing the image. The magnitude and direction of the rotation can be set by specifying an angle of adjustment parameter in radians. When the image is drawn, the canvas draws to the window using the rotated coordinate system. To restore the previous coordinate space, the graphics state is saved before modifying the CTM and restored after drawing. A scale method may also be defined comprising two parameters: an sx parameter containing a float value with the x-axis scale factor and an sy parameter containing a float value with the y-axis scale factor. In addition, a translate method can be used to change the origin of the canvas coordinate system. A tx parameter contains a float value with the x-axis translation value and a ty parameter contains a float value with the y-axis translation value.
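For example, to draw through a translated, rotated, and scaled coordinate space and then return to the original space, code along these lines could be used (a sketch; the values are arbitrary):

    context.save();                     // preserve the current CTM with the graphics state
    context.translate(86, 86);          // tx, ty: move the origin
    context.rotate(Math.PI / 6);        // angle of adjustment in radians (30 degrees)
    context.scale(2.0, 2.0);            // sx, sy scale factors
    context.fillRect(-10, -10, 20, 20); // drawn through the modified CTM
    context.restore();                  // return to the previous coordinate space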

Compositing and fill attributes may be used to specify various characteristics of the graphics object. In an embodiment, a fillStyle attribute is defined which specifies the color or style the canvas applies when filling paths. If the fill style comprises a color, it may be set forth in several different ways depending on the color space intended to be used. For web-safe colors, a web color specification string of the form “#RRGGBB”, which represents an RGB color using hexadecimal numbers, may be used. To specify an alpha, a CSS rgba(r, g, b, alpha) functional-notation style may be used. Float values between 0 and 255 can be specified for the r, g, and b parameters, and a float value between 0.0 and 1.0, indicating the alpha channel value, determines the opacity of the color. Using methods described in further detail below, in an embodiment, a fill style may also comprise a gradient or pattern.
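A short sketch of the fill-color forms described above (the color values are arbitrary):

    context.fillStyle = "#3366CC";                 // web color specification string
    context.fillStyle = "rgba(51, 102, 204, 0.5)"; // rgba() form; an alpha of 0.5 gives 50% opacity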

A globalCompositeOperation attribute may be defined which determines how the canvas is displayed relative to any background content. A string parameter identifies the desired compositing mode. If this value is not set explicitly, the canvas uses a default compositing mode. Table 1 lists some exemplary compositing operators. When used with this property, the source image refers to the canvas and the destination image refers to the web view.

TABLE 1

copy: Displays the source image instead of the destination image.
darker: Displays the sum of the source image and destination image, with color values approaching 0 as a limit.
destination-atop: Displays the destination image wherever both images are opaque. Displays the source image wherever the source image is opaque but the destination image is transparent.
destination-in: Displays the destination image wherever both the destination image and source image are opaque. Displays transparency elsewhere.
destination-out: Displays the destination image wherever the destination image is opaque and the source image is transparent. Displays transparency elsewhere.
destination-over: Displays the destination image wherever the destination image is opaque. Displays the source image elsewhere.
lighter: Displays the sum of the source image and destination image, with color values approaching 1 as a limit.
source-atop: Displays the source image wherever both images are opaque. Displays the destination image wherever the destination image is opaque but the source image is transparent. Displays transparency elsewhere.
source-in: Displays the source image wherever both the source image and destination image are opaque. Displays transparency elsewhere.
source-out: Displays the source image wherever the source image is opaque and the destination image is transparent. Displays transparency elsewhere.
source-over: Displays the source image wherever the source image is opaque. Displays the destination image elsewhere.
xor: Displays the exclusive OR of the source and destination images. Works only with black-and-white images and is not recommended for color images.
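For example, to paint new content underneath whatever is already on the canvas, one of the operators from Table 1 could be set before drawing (a sketch):

    context.globalCompositeOperation = "destination-over";
    context.fillStyle = "#00FF00";
    context.fillRect(25, 25, 100, 100); // appears beneath previously painted content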

Colors or styles may be applied by the canvas when stroking paths. A strokeStyle property may be defined that sets the stroke style parameter of the graphics state. Colors can be set in any of a variety of different ways depending on the color space to be used. For web-safe colors, a web color specification string of the form “#RRGGBB”, which represents an RGB color using hexadecimal numbers, may be used. As described above, alpha, gradient, or pattern values may also be specified. A globalAlpha property may also be used to indicate the alpha channel value representing the opacity of content drawn on the canvas. The range of values could be between 0.0 (fully transparent) and 1.0 (no additional transparency). The canvas uses the alpha value in the current graphics state to determine how to composite newly painted objects.

Various line attributes may also be specified. For instance, a lineWidth attribute, defined as a positive, nonzero float value, indicates the line width for drawing operations. The width of lines and curves drawn by the canvas may be controlled by modifying the line width property of the graphics state. The line width is the total width of the line, expressed in units of the user space. The line surrounds the center of the path, with half of the total width on either side. A lineCap attribute may also be specified to determine the end style used when drawing a line. In an embodiment, the string “butt” represents a flat edge that is perpendicular to the line itself, the string “round” indicates round endpoints, and the string “square” indicates square endpoints. Similarly, a lineJoin attribute determines the join style between lines, whether, for example, round, beveled, or mitered. In an embodiment, a miterLimit attribute provides a new miter limit to specify how the canvas draws the juncture between connected line segments. If the line join is set to “miter”, the canvas uses the miter limit to determine whether the lines should be joined with a bevel instead of a miter. The canvas divides the length of the miter by the line width. If the result is greater than the miter limit, the style is converted to a bevel.
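A sketch setting these line attributes before painting an outline (strokeRect is described below; the values are arbitrary):

    context.lineWidth = 6;      // total width in user-space units, centered on the path
    context.lineCap = "round";  // round endpoints
    context.lineJoin = "miter"; // mitered joins between connected segments
    context.miterLimit = 4;     // joins sharper than this ratio fall back to a bevel
    context.strokeRect(20, 20, 100, 60);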

In an embodiment, the canvas may display a shadow, which can be defined through various attributes. For example, a shadowColor attribute designates the color the canvas applies when displaying a shadow. Setting this property results in the canvas setting the shadow color parameter of the graphics state. The shadow color can be set in several different ways depending on factors such as whether or not the shadow has an opacity. A shadowOffsetX attribute defines the distance, in coordinate space units, that a shadow should be offset in the positive horizontal direction, and a shadowOffsetY attribute defines the distance, in coordinate space units, that a shadow should be offset in the positive vertical direction. A shadowBlur attribute may be defined that represents the width, in coordinate space units, that a shadow should cover.
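A sketch of the shadow attributes described above (the values are arbitrary):

    context.shadowColor = "rgba(0, 0, 0, 0.4)"; // a partially transparent black shadow
    context.shadowOffsetX = 5;                  // coordinate-space units, positive horizontal
    context.shadowOffsetY = 5;                  // coordinate-space units, positive vertical
    context.shadowBlur = 3;                     // width the shadow should cover
    context.fillRect(20, 20, 80, 80);           // painted with the shadow applied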

Methods may also be defined to draw shapes such as rectangles, squares, and circles to the drawing context. In an embodiment, a clearRect method paints a transparent rectangle. When this method is called, the canvas effectively “erases” the contents of the specified rectangle. The parameters of this method all contain float values. A fillRect method paints the area within the specified rectangle. This method uses the current fill color to paint the area of the specified rectangle. The parameters of this method all contain float values. As a side effect of calling this method, the canvas clears the current path. Finally, a strokeRect method paints an outline of a rectangle. This method uses the current stroke color to paint the path represented by the specified rectangle. The parameters of this method all contain float values. Altering the appearance of the painted outline can be accomplished by modifying attributes of the graphics state including the line width, the line join, the miter limit, the line dash pattern, the stroke color space, and the stroke color.
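The three rectangle methods might be exercised as follows (a sketch; the coordinates are arbitrary):

    context.fillRect(10, 10, 120, 60);   // paint the rectangle with the current fill color
    context.strokeRect(10, 10, 120, 60); // outline the same rectangle with the current stroke color
    context.clearRect(40, 25, 60, 30);   // "erase" a region back to transparency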

In an embodiment, a current path is always associated with the context. A path is composed of a set of subpaths, each of which is a list of one or more segments, either straight lines or curves. A canvas has only a single path in use at any time. Therefore, if the specified context already contains a current path when a new path is created, the canvas replaces the previous current path with the new path. Paths may be used to draw both simple shapes (for example, lines, circles, or rectangles) and complex shapes (such as the silhouette of a mountain range) in a canvas. A path can be used to both draw the outline of a shape and fill the inside of a shape. In an embodiment, before painting a shape, the shape is created using the current path.

Several exemplary path methods may be defined. For instance, a beginPath method creates a new empty path in the canvas. A moveTo method begins a new subpath at a point specified with the x and y parameters. The point is defined to be the “current” point, and it defines the starting point of the next line segment. The canvas may set the current point explicitly, when the method is called to begin a new subpath at a given point. Alternatively, the current point may be set implicitly, when a new curve or straight line segment is added to the subpath. After adding the segment, the current point is reset from the beginning of the new segment to the endpoint of that segment. A closePath method closes and terminates an open subpath. When a subpath is open and this method is called, the canvas closes the subpath (draws a straight line that connects the current point to the starting point) and terminates the subpath (the current point is no longer defined). A lineTo method appends a straight line segment from the current point to the point specified.
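For example, a triangle could be built from these path methods and then stroked (a sketch):

    context.beginPath();    // new empty path
    context.moveTo(50, 10); // new subpath; (50, 10) becomes the current point
    context.lineTo(90, 80); // straight segment; the current point moves to (90, 80)
    context.lineTo(10, 80);
    context.closePath();    // straight line back to (50, 10); the subpath is terminated
    context.stroke();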

Straight line segments, cubic and quadratic Bézier curve segments, and rectangles can be used to specify a path. A single straight line segment can be appended to the current subpath using the lineTo method. After adding the line segment, the current point is reset from the beginning of the new line segment to the endpoint of that line segment, as specified by the x and y parameters. A quadraticCurveTo method appends a quadratic Bézier curve to the current path. A quadratic curve segment has a start point, one control point, and an endpoint. The start point is the current point of the canvas. The cpx and cpy parameters specify the control point. The x and y parameters specify the new endpoint. After adding the segment, the current point is reset from the beginning of the new segment to the endpoint of that segment. A bezierCurveTo method can be used to append a cubic Bézier curve to the current path. A cubic curve segment has a start point, two control points, and an endpoint. The start point is the current endpoint of the open path. The cp1x, cp1y, cp2x, and cp2y parameters specify the two control points for the path. The x and y parameters specify the new endpoint for the path. After adding the segment, the current point is reset from the beginning of the new segment to the endpoint of that segment.
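A sketch of the two curve methods (the control points and endpoints are arbitrary):

    context.beginPath();
    context.moveTo(10, 60);                           // start point of the curve segments
    context.quadraticCurveTo(50, 10, 90, 60);         // control point (cpx, cpy); endpoint (x, y)
    context.bezierCurveTo(110, 90, 150, 30, 170, 60); // control points (cp1x, cp1y) and (cp2x, cp2y); endpoint (x, y)
    context.stroke();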

In an embodiment, an arcTo method adds an arc of a circle to the current subpath, using a radius and tangent points. This method draws an arc that is tangent to the line from the current point to (x1, y1) and to the line from (x1, y1) to (x2, y2). The start and end points of the arc are located on the first and second tangent lines, respectively. The start and end points of the arc are also the “tangent points” of the lines. If the current point and the first tangent point of the arc (the starting point) are not equal, the canvas appends a straight line segment from the current point to the first tangent point. After adding the arc, the current point is reset to the endpoint of the arc (the second tangent point). An arc method adds an arc of a circle to the current subpath. The arc is built based on the circle whose origin and radius are specified by the x, y, and radius parameters. The startAngle parameter specifies the angle of the starting point of the arc, measured in radians from the positive x-axis. The endAngle parameter specifies the angle of the endpoint of the arc, measured in radians from the positive x-axis. If the current path already contains a subpath, the canvas appends a straight line segment from the current point to the starting point of the arc. If the current path is empty, the canvas creates a new subpath for the arc and does not add an initial line segment. After adding the arc, the current point is set to the endpoint of the arc.
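The two arc methods might be used as follows (a sketch; the geometry is arbitrary):

    context.beginPath();
    context.arc(86, 86, 40, 0, 2 * Math.PI); // full circle of radius 40 centered at (86, 86)
    context.stroke();

    context.beginPath();
    context.moveTo(10, 10);
    context.arcTo(110, 10, 110, 110, 30);    // corner tangent to both lines, with radius 30
    context.lineTo(110, 110);                // continue along the second tangent line
    context.stroke();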

A rect method adds a new subpath, consisting of a single rectangle, to the canvas. The parameters for this method all contain float values. A fill method paints the area within the current path, using the nonzero winding-number fill rule. The fill color is an attribute of the graphics state. When the current path is filled, the canvas fills each subpath independently. Any subpath that has not been explicitly closed is closed implicitly by the fill routines. The fill rule used by this method is called the nonzero winding number rule, described in greater detail in Appendix A. A stroke method paints a line along the current path. To modify the behavior of this method, any of a variety of graphics state properties may be changed including line width, line join, line cap, miter limit, line dash pattern, stroke color space, or stroke color. A clip method sets the current clipping path, using the nonzero winding number rule. This method uses the nonzero winding number rule to calculate the intersection of the current path with the current clipping path. The canvas then uses the path resulting from the intersection as the new current clipping path for subsequent painting operations. After determining the new clipping path, the method resets the current path to an empty path.
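A sketch combining the rect, clip, and fill methods (the region is arbitrary):

    context.beginPath();
    context.rect(20, 20, 100, 100);   // a subpath consisting of a single rectangle
    context.clip();                   // intersect it with the current clipping path
    context.fillStyle = "#CC0000";
    context.fillRect(0, 0, 172, 172); // only the clipped region is actually painted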

A drawImage method may be defined to draw images onto the canvas. This method is overloaded with three variants, used to draw the contents of a JavaScript Image object into the context. The first of these, drawImage(image, x, y), draws the image at the x and y coordinates within the context. The image is sized as it is in the object. The second, drawImageFromRect(image, x, y, width, height), is where x, y, width, and height parameters contain integer values representing the bounding rectangle for the image. These values are specified in the coordinate system of the canvas and should always lie wholly within the canvas bounds. If they lie outside the canvas bounds, the image will be clipped. The third method, context.drawImageFromRect(image, sx, sy, swidth, sheight, dx, dy, dwidth, dheight), draws the portion of the image specified by the source rectangle (sx, sy, swidth, and sheight) onto the canvas at the specified destination rectangle (dx, dy, dwidth, dheight). The source rectangle is specified in the image coordinate space and the destination rectangle is specified in the canvas coordinate space. Rectangle coordinates preferably are expressed as integer values. The image parameter must contain a valid JavaScript Image object.
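A sketch of the first two variants, using a JavaScript Image object and waiting for its load event as discussed later in this disclosure (the image file name is hypothetical):

    var image = new Image();
    image.onload = function() {
        context.drawImage(image, 10, 10);                 // drawn at its natural size at (10, 10)
        context.drawImageFromRect(image, 70, 10, 50, 50); // fit to a 50-by-50 bounding rectangle
    };
    image.src = "hourhand.png";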

Based on the above, in an embodiment of the invention, when a drawing operation is performed or an image is painted, the sequence depicted in FIG. 6 takes place. The current transformation matrix is applied 610 to the present coordinates, for example, a translation or rotation. The painting operations and/or images are rendered 620 to an intermediate image. Shadow is then applied 630 to the intermediate image, creating a resulting image. The resulting image is then modified 640 according to the globalAlpha (i.e., opacity) value. Finally, taking into account the current clip region, the resulting image is composited 650 into the current bitmap or other page using the specified composite operator.

Expressing the World Clock Gadget

One example of creating a vector graphics object, a World Clock Gadget, several examples of which are illustrated in FIG. 4, is now described in detail. Throughout this disclosure the term “gadget” is used interchangeably with the word “widget.” The image generated represents a gadget for use, for example, in a dashboard. An exemplary dashboard and its functionality are described in the commonly owned and co-pending U.S. Patent Application entitled “Unified Interest Layer For User Interface”, to Chaudhri et al., filed Jun. 25, 2004, incorporated by reference in its entirety in this application. As described above, the first step is to set up a drawing region or canvas. The world clock object expresses this region with the following code:

    • <canvas id=‘canvas’ width=‘172’ height=‘172’></canvas>
      The attributes of the canvas specified are id, width, and height. The id attribute is an arbitrary identifier used to target this particular canvas when drawing. The width and height attributes specify the size of the canvas region. A style attribute, when included, specifies the position of the canvas within the context of the gadget.

Next, the code obtains the canvas and its drawing context. The context handles the actual rendering of the content. The World Clock gadget does this in its drawHands( ) function:

function drawHands(hoursAngle, minutesAngle, secondsAngle) {
    var canvas = document.getElementById("canvas");
    var context = canvas.getContext("context-2d");

This function draws the hour, minute, and second hands on the face of the World Clock. As parameters, it takes the angles at which the three hands should be rotated as passed in by its caller. After that, it queries the environment for the previously created canvas. It does this using the unique identifier supplied in the id attribute in the <canvas> tag.

Once the canvas has been acquired, its context is obtained via the canvas.getContext(“context-2d”) method and assigned to the context variable. From this point on, all operations intended for the canvas will be called on context.

    • context.clearRect(0, 0, 172, 172);
      The first operation performed on the canvas clears it off. As the drawHands( ) function is called every second, it is important to clear it off each time, so that the previously drawn configuration doesn't conflict with the new configuration. The entire region, as defined in the <canvas> tag, is cleared.
    • context.save();
    • context.translate(172/2, 172/2);

Next, the state of the original context space is saved so that it can be restored later. In the original context, the origin (the 0,0 coordinate) of the canvas is in the lower left corner. Upon completion of the upcoming drawing code, the user may want to return to this context, so it should be saved. The origin of the context space is then translated to the center of the canvas. This is done since the hands of the clock rotate around this point, and to facilitate the drawing commands.

    • context.save();
    • context.rotate(hoursAngle);
    • context.drawImage(hourhand, -4, -28, 9, 25, "source-over");
    • context.restore();
      This exemplary code draws the hour hand on the face of the clock. First, a copy of the current context (with the origin at the center of the clock face) is saved, so that it can be restored later. The entire context is then rotated, so that the y-axis aligns itself with the angle that the hour hand should point towards. Then, the hour hand image is drawn. The method drawImage() has six parameters: the image to be drawn, the x and y coordinates for the bottom left-hand corner of the image, the width and height of the image, and finally, the compositing operation to be used when rendering the image. While the image is drawn as going straight up within the graphics context, the context itself has been rotated to be at the correct angle for the hour hand. While the code shown reflects that a compositing mode parameter is used to implement the drawImage method, as known to one of skill in the art, a user may alternatively set the global compositing property as part of a two- or three-dimensional context.

Once the hand has been drawn, the last saved context is restored. This means that the context that was saved four lines prior, with its origin at the center of the canvas, but not yet rotated, will be the active context again.

    • context.save();
    • context.rotate(minutesAngle);
    • context.drawImage(minhand, -8, -44, 18, 53, "source-over");
    • context.restore();
      A similar procedure is used to draw the minute hand on the face of the clock. The differences this time are in the angle the context is rotated to and the size of the minute hand. The context is again saved, rotated, and then restored to its previous state, so that the next element to be drawn can work independent of the rotation needed for the minute hand.
    • context.rotate(secondsAngle);
    • context.drawImage(sechand, -4, -52, 8, 57, "source-over");
      Finally, the second hand will be drawn. The context does not need to be saved and restored. This is because this is the last time anything will be drawn in this particular context (with the origin at the center of the canvas), so the memory needed to save and restore the context can be conserved. Expressing the clock through these commands makes the size of the resulting file smaller than, for instance, if the clock were expressed in conventional frame animation techniques and/or through individual images that represented each moment in time.
Rendering an AGO

In an embodiment, an AGO with several distinct elements, such as visual images, timing, and animation effects, has been coded into a markup language page. FIG. 3 illustrates the steps of rendering the AGO in the resulting web page. The page can be interpreted by any application with the ability to interpret markup language. The application may comprise a browser, a graphics application, a game application, a desktop application, or other application. In an embodiment, the markup language page is interpreted by a unified interest layer or dashboard application such as described in U.S. patent application Ser. No. 10/877,968, entitled “Unified Interest Layer For User Interface”, to Chaudhri et al., filed Jun. 25, 2004. The process begins with loading 310 the markup language page and beginning 320 the event loop. The markup page is parsed 330 and each tag is represented as a DOM (Document Object Model) element in a tree data structure. Each element in the tree is assigned a corresponding rendering object. The parser recognizes the HTML tag for the canvas elements and the scripting commands. The scripting commands associated with code in a script element are passed 340 to the interpreter. The interpreter parses the commands and converts them into an execution tree 350.

Although specifics vary by implementation, the rendering engine will generally create 360 one or more render objects associated with each canvas element in a render tree. The interpreter will evaluate the execution tree and apply commands to the render objects associated with the canvas element. In one embodiment, the context object does this by referring to a mapping between the procedural language and the underlying AGL. In one example, these steps take place in the Apple Mac OS X operating environment, the procedural commands are coded in JavaScript, and the underlying graphics library is the CoreGraphics library. The graphics commands may exploit the functionality offered by the AGL, such as stroking, filling, pathing, rendering curves, transparency, alpha channel management, and other functions described in Appendix B. In the case of a three-dimensional AGO, the commands may include commands to construct a cube or other 3D object, to control the lighting, camera position, and textures, and to cast movement shadows.

Once the render object associated with the AGO element has been created, the rendering engine iterates 370 over the tree and instructs each render object to draw itself. The rendering engine draws into an off-screen bitmap which represents the canvas elements. Depending on the nature of the AGO, the AG commands may be executed immediately during processing by the AG interpreter to an off-screen bitmap. In another embodiment, however, drawing operations are collected during processing and are not applied to the off-screen bitmap by the rendering engine until the end of the event loop 380. At the end of the event loop, the off-screen bitmap is composited 390 into the onscreen rendering surface to be displayed. Additional scripting or timing of event loops based on certain triggers or external events is also possible. The compositing step can be performed by pre-existing or emerging compositing technologies.

In a JavaScript implementation, another construct, a JavaScript image object, may exist that can communicate with both the corresponding AGO tree node and AGO render objects. In such an implementation, the step of using the canvas object to draw an image is accomplished by creating a JavaScript image object and then compositing that image into the canvas, thus leveraging the benefits of existing JavaScript technology. One of these benefits includes the capability to send an event when the image is ready to render, which overcomes problems associated with the asynchronous nature of loading image data over the Internet. However, in other implementations, such as in a Windows or Mozilla or Gecko environment, certain implementation details may be different.

Besides being implemented through the scripting commands in the markup language page, the AGO may be further refined using other markup language constructs such as CSS. Formatting and other commands such as setting a border, setting a border width, positioning, transparency relative to the objects, and establishing a margin may be expressed and implemented on top of the procedural description.

The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above teachings. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.

Claims

1. A computer-implemented method of drawing an arbitrary graphics object (AGO) into a drawing space in a web browser of a computing device using both markup language and coded procedural language received by the web browser, the method comprising:

receiving, by the web browser, markup language that, when executed, causes the drawing space to be rendered in a window of the web browser;
receiving, by the web browser, coded procedural language for the AGO that, when executed, causes the AGO to be drawn into the drawing space rendered in the web browser;
rendering, by the web browser, the drawing space by executing the received markup language and drawing the AGO into the drawing space by executing the received coded procedural language.

2. The method of claim 1, wherein the markup language and coded procedural language are received by the web browser in response to one of:

the web browser receiving one or more user inputs into an interface of the web browser to select a drawing space and an AGO for rendering in the web browser; or
the web browser reading a file containing the markup language and the coded procedural language.

3. The method of claim 2, wherein

receiving one or more inputs to select the AGO comprises receiving, by the web browser, user input to drag-and-drop a graphics image object into the drawing space; and
wherein the received coded procedural language to represent the graphics image object is received and executed in response to receiving the one or more inputs to select the AGO.

4. The method of claim 1, wherein the received coded procedural language includes vector graphics drawing commands.

5. The method of claim 1, further comprising:

translating the received coded procedural language for the AGO into graphics code that is executable by an arbitrary graphics library; and
executing the graphics code to render the AGO.

6. The method of claim 1, wherein the received markup language comprises a tag and one or more parameters that specify:

a shape of the drawing space; and
a width and height, or a size, of the drawing space shape.

7. The method of claim 6, wherein the drawing space comprises a resolution-dependent bit-map canvas.

8. The method of claim 1, further comprising:

receiving, by the web browser, user input into the web browser indicating that the AGO is to be animated; and
creating, by the web browser, coded procedural language for the AGO that, when executed, causes the AGO to be animated.

9. A non-transitory computer-readable storage medium having instructions stored therein, which when executed by a computer, cause the computer to perform operations for drawing an arbitrary graphics object (AGO) into a drawing space in a web browser of a computing device using both markup language and coded procedural language, the operations comprising:

receiving, by the web browser, markup language that, when executed, causes the drawing space to be rendered in a window of the web browser;
receiving, by the web browser, coded procedural language for the AGO that, when executed, causes the AGO to be drawn into the drawing space rendered in the window of the web browser; and
rendering, by the web browser, the drawing space by executing the received markup language and drawing the AGO into the drawing space by executing the received coded procedural language.

10. The medium of claim 9, wherein the markup language and coded procedural language are received by the web browser in response to one of:

the web browser receiving one or more user inputs into an interface of the web browser to select a drawing space and an AGO for rendering in the web browser; or
the web browser reading a file containing the markup language and the coded procedural language.

11. The medium of claim 10, wherein

receiving one or more inputs to select the AGO comprises receiving, by the web browser, user input to drag-and-drop a graphics image object into the drawing space; and
wherein the coded procedural language to represent the graphics image object is received and executed, in response to receiving the one or more inputs to select the AGO.

12. The medium of claim 9, wherein the received coded procedural language includes vector graphics drawing commands.

13. The medium of claim 9, the operations further comprising:

translating the received coded procedural language for the AGO into graphics code that is executable by an arbitrary graphics library; and
executing the graphics code to render the AGO.

14. The medium of claim 9, wherein the received markup language comprises a tag and one or more parameters that specify:

a shape of the drawing space; and
a width and height, or a size, of the drawing space shape.

15. The medium of claim 14, wherein the drawing space comprises a resolution-dependent bit-map canvas.

16. The medium of claim 9, the operations further comprising:

receiving, by the web browser, user input into the web browser indicating that the AGO is to be animated; and
creating, by the web browser, coded procedural language for the AGO that, when executed, causes the AGO to be animated.

17. A system, comprising:

a processing system;
a memory coupled to the processing system; a non-transitory computer readable medium having instructions which when executed by the processing system cause the processing system to perform operations for drawing an arbitrary graphics object (AGO) into a drawing space in a web browser of a computing device using both markup language and coded procedural language received by the web browser, the operations comprising:
receiving, by the web browser, markup language that, when executed, causes the drawing space to be rendered in a window of the web browser;
receiving, by the web browser, coded procedural language for the AGO that, when executed, causes the AGO to be drawn into the drawing space rendered in the window of the web browser; and
rendering, by the web browser, the drawing space by executing the received markup language and drawing the AGO into the drawing space by executing the received coded procedural language.

18. The system of claim 17, wherein the markup language and coded procedural language are received by the web browser in response to one of:

the web browser receiving one or more user inputs into an interface of the web browser to select a drawing space and an AGO for rendering in the web browser; or
the web browser reading a file containing the markup language and the coded procedural language.

19. The system of claim 18, wherein

receiving one or more inputs to select the AGO comprises receiving, by the web browser, user input to drag-and-drop a graphics image object into the drawing space; and
wherein the received coded procedural language to represent the graphics image object is received and executed in response to receiving the one or more inputs to select the AGO.

20. The system of claim 17, wherein the received coded procedural language includes vector graphics drawing commands.

21. The system of claim 17, the operations further comprising:

translating the coded procedural language for the AGO into graphics code that is executable by an arbitrary graphics library; and
executing the graphics code to render the AGO.

22. The system of claim 17, wherein the received markup language comprises a tag and one or more parameters that specify:

a shape of the drawing space; and
a width and height, or a size, of the drawing space shape.

23. The system of claim 22, wherein the drawing space comprises a resolution-dependent bit-map canvas.

24. The system of claim 17, the operations further comprising:

receiving, by the web browser, user input into the web browser indicating that the AGO is to be animated; and
creating, by the web browser, coded procedural language for the AGO that, when executed, causes the AGO to be animated.
References Cited
U.S. Patent Documents
4752893 June 21, 1988 Guttag et al.
5168441 December 1, 1992 Onarheim et al.
5289574 February 22, 1994 Sawyer
5297250 March 22, 1994 Leroy et al.
5357603 October 18, 1994 Parker
5388201 February 7, 1995 Hourvitz et al.
5481665 January 2, 1996 Okada et al.
5490246 February 6, 1996 Brotsky et al.
5522022 May 28, 1996 Rao et al.
5537630 July 16, 1996 Berry et al.
5602997 February 11, 1997 Carpenter et al.
5638501 June 10, 1997 Gough et al.
5651107 July 22, 1997 Frank et al.
5657049 August 12, 1997 Ludolph et al.
5671343 September 23, 1997 Kondo et al.
5692205 November 25, 1997 Berry et al.
5708764 January 13, 1998 Borrel et al.
5742285 April 21, 1998 Ueda
5754174 May 19, 1998 Carpenter et al.
5761673 June 2, 1998 Bookman et al.
5764229 June 9, 1998 Bennett
5764238 June 9, 1998 Lum et al.
5781189 July 14, 1998 Holleran et al.
5793376 August 11, 1998 Tanaka et al.
5796402 August 18, 1998 Ellison-Taylor
5801703 September 1, 1998 Bowden et al.
5838313 November 17, 1998 Hou et al.
5838316 November 17, 1998 Arruza
5877741 March 2, 1999 Chee et al.
5877762 March 2, 1999 Young et al.
5883839 March 16, 1999 Wallon et al.
5920659 July 6, 1999 Iverson et al.
5933148 August 3, 1999 Oka et al.
5949409 September 7, 1999 Tanaka et al.
5978579 November 2, 1999 Buxton et al.
5999948 December 7, 1999 Nelson et al.
6005568 December 21, 1999 Simonoff et al.
6006231 December 21, 1999 Popa
6011562 January 4, 2000 Gagne et al.
6016145 January 18, 2000 Horvitz et al.
6031937 February 29, 2000 Graffagnino
6075543 June 13, 2000 Akeley
6076166 June 13, 2000 Moshfeghi et al.
6166748 December 26, 2000 Van Hook et al.
6167533 December 26, 2000 Potterveld et al.
6182129 January 30, 2001 Rowe et al.
6191797 February 20, 2001 Politis
6195664 February 27, 2001 Tolfa
6211890 April 3, 2001 Ohba
6246418 June 12, 2001 Oka
6266053 July 24, 2001 French et al.
6266430 July 24, 2001 Rhoads
6272484 August 7, 2001 Martin et al.
6272558 August 7, 2001 Hui et al.
6278448 August 21, 2001 Brown et al.
6304684 October 16, 2001 Niczyporuk et al.
6307574 October 23, 2001 Ashe et al.
6321314 November 20, 2001 Van Dyke
6369823 April 9, 2002 Ohba
6369830 April 9, 2002 Brunner et al.
6396520 May 28, 2002 Ording
6411301 June 25, 2002 Parikh et al.
6412021 June 25, 2002 Nguyen
6417865 July 9, 2002 Bou
6418439 July 9, 2002 Papierniak et al.
6421058 July 16, 2002 Parikh et al.
6424348 July 23, 2002 Parikh
6452600 September 17, 2002 Parikh et al.
6456290 September 24, 2002 Parikh et al.
6457034 September 24, 2002 Morein
6466218 October 15, 2002 Parikh et al.
6469714 October 22, 2002 Buxton et al.
6483524 November 19, 2002 Petchenkine et al.
6489963 December 3, 2002 Parikh et al.
6512522 January 28, 2003 Miller et al.
6526174 February 25, 2003 Graffagnino
6542160 April 1, 2003 Abgrall
6544295 April 8, 2003 Bodnar
6552732 April 22, 2003 Davis et al.
6571328 May 27, 2003 Liao et al.
6577317 June 10, 2003 Duluk, Jr. et al.
6580430 June 17, 2003 Hollis et al.
6583892 June 24, 2003 Tanaka
6590592 July 8, 2003 Nason et al.
6609977 August 26, 2003 Shimizu et al.
6614444 September 2, 2003 Duluk, Jr. et al.
6618048 September 9, 2003 Leather
6636214 October 21, 2003 Leather et al.
6639595 October 28, 2003 Drebin et al.
6664958 December 16, 2003 Leather et al.
6664962 December 16, 2003 Komsthoeft et al.
6666353 December 23, 2003 Yurkovic
6674438 January 6, 2004 Yamamoto et al.
6684369 January 27, 2004 Bernardo et al.
6691176 February 10, 2004 Narin et al.
6697074 February 24, 2004 Parikh et al.
6707462 March 16, 2004 Peercy et al.
6714201 March 30, 2004 Grinstein et al.
6715053 March 30, 2004 Grigor
6717599 April 6, 2004 Olano
6724403 April 20, 2004 Santoro
6734864 May 11, 2004 Abgrall
6741242 May 25, 2004 Itoh
6765592 July 20, 2004 Pletcher et al.
6788318 September 7, 2004 Chen
6806892 October 19, 2004 Plow et al.
6906720 June 14, 2005 Emberling et al.
6910000 June 21, 2005 Yedidia et al.
6911984 June 28, 2005 Sabella et al.
6931633 August 16, 2005 Vazquez et al.
6944829 September 13, 2005 Dando
6978418 December 20, 2005 Bain et al.
6993721 January 31, 2006 Rosin et al.
7016011 March 21, 2006 de Haan
7020887 March 28, 2006 Mooney et al.
7024381 April 4, 2006 Hastings et al.
7027055 April 11, 2006 Anderson et al.
7028264 April 11, 2006 Santoro et al.
7050955 May 23, 2006 Carmel et al.
7085994 August 1, 2006 Gvily
7103833 September 5, 2006 Sano et al.
7127713 October 24, 2006 Davis et al.
7174512 February 6, 2007 Martin et al.
7240300 July 3, 2007 Jaeger
7281202 October 9, 2007 Croney et al.
7315848 January 1, 2008 Pearse et al.
7543223 June 2, 2009 Rodriquez
7559034 July 7, 2009 Paperny et al.
7667582 February 23, 2010 Waldorf
7707251 April 27, 2010 Birch
7810021 October 5, 2010 Paxson
7945853 May 17, 2011 Kothari
7975231 July 5, 2011 Hasuike et al.
7987420 July 26, 2011 Kloba
8381110 February 19, 2013 Barger
8401899 March 19, 2013 Kauchak
9053080 June 9, 2015 Look
20010030647 October 18, 2001 Sowizral et al.
20010035885 November 1, 2001 Iron et al.
20010044812 November 22, 2001 Byrd
20020008719 January 24, 2002 Miyawaki et al.
20020023110 February 21, 2002 Fortin
20020033837 March 21, 2002 Munro
20020065949 May 30, 2002 Heaton
20020067418 June 6, 2002 I
20020072956 June 13, 2002 Willems et al.
20020089526 July 11, 2002 Buxton et al.
20020093516 July 18, 2002 Brunner et al.
20020099678 July 25, 2002 Albright et al.
20020118217 August 29, 2002 Fujiki
20020120673 August 29, 2002 Tolson et al.
20020123739 September 5, 2002 Graffagnino
20020129092 September 12, 2002 Tolson et al.
20020171682 November 21, 2002 Frank et al.
20020174003 November 21, 2002 Redmann et al.
20020174181 November 21, 2002 Wei
20020182571 December 5, 2002 McCormick et al.
20020186257 December 12, 2002 Cadiz et al.
20030009267 January 9, 2003 Dunsky et al.
20030023770 January 30, 2003 Barmettler et al.
20030020671 January 30, 2003 Santoro et al.
20030080995 May 1, 2003 Tenenbaum et al.
20030101046 May 29, 2003 Krasnov
20030123739 July 3, 2003 Graffagnino
20030124502 July 3, 2003 Chou
20030046316 March 6, 2003 Gergic et al.
20030146934 August 7, 2003 Bailey et al.
20030154239 August 14, 2003 Davis et al.
20030158975 August 21, 2003 Frank et al.
20030164831 September 4, 2003 Walls et al.
20030164862 September 4, 2003 Cadiz et al.
20030174136 September 18, 2003 Emberling et al.
20030174154 September 18, 2003 Yukie et al.
20030184552 October 2, 2003 Chadha
20030189597 October 9, 2003 Anderson et al.
20030191799 October 9, 2003 Araujo et al.
20030225740 December 4, 2003 Sexton et al.
20040032409 February 19, 2004 Girard
20040036711 February 26, 2004 Anderson
20040039934 February 26, 2004 Land et al.
20040039987 February 26, 2004 Coppin et al.
20040143823 July 22, 2004 Wei
20040179019 September 16, 2004 Sabella et al.
20040194020 September 30, 2004 Beda et al.
20040012626 January 22, 2004 Brookins
20040205589 October 14, 2004 Yoshioka
20040210833 October 21, 2004 Lerner
20040212640 October 28, 2004 Mann et al.
20040223003 November 11, 2004 Heirich et al.
20040225968 November 11, 2004 Look
20040215740 October 28, 2004 Frank et al.
20040255253 December 16, 2004 Marcjan
20040261012 December 23, 2004 Balsiger
20040261037 December 23, 2004 Ording et al.
20040261038 December 23, 2004 Ording et al.
20050001854 January 6, 2005 Schuster et al.
20050021935 January 27, 2005 Schillings et al.
20050022139 January 27, 2005 Gellman et al.
20050039144 February 17, 2005 Wada et al.
20050057497 March 17, 2005 Kawahara
20050060655 March 17, 2005 Gray et al.
20050060661 March 17, 2005 Kawahara et al.
20050088447 April 28, 2005 Hanggie et al.
20050088452 April 28, 2005 Hanggie et al.
20050091571 April 28, 2005 Leichtling
20050091576 April 28, 2005 Relyea et al.
20050102273 May 12, 2005 Gupta
20050108364 May 19, 2005 Callaghan et al.
20050010634 January 13, 2005 Henderson et al.
20050144563 June 30, 2005 Hough et al.
20050168471 August 4, 2005 Paquette
20050168476 August 4, 2005 Levene et al.
20050181349 August 18, 2005 Bardige et al.
20050188007 August 25, 2005 Warner
20050193368 September 1, 2005 Becker et al.
20050240857 October 27, 2005 Benedict et al.
20050243373 November 3, 2005 Silverbrook et al.
20050256940 November 17, 2005 Henderson et al.
20050268215 December 1, 2005 Battagin et al.
20050278651 December 15, 2005 Coe et al.
20050283734 December 22, 2005 Santoro et al.
20060035710 February 16, 2006 Festejo et al.
20060059422 March 16, 2006 Wu et al.
20060059423 March 16, 2006 Lehmann et al.
20060066631 March 30, 2006 Schorr
20060075141 April 6, 2006 Boxenhorn
20060090137 April 27, 2006 Cheng et al.
20060095443 May 4, 2006 Kumar et al.
20060123356 June 8, 2006 Sobeski et al.
20060136843 June 22, 2006 Shafron
20060136868 June 22, 2006 Celi et al.
20060140357 June 29, 2006 Gomez
20060206835 September 14, 2006 Chaudhri et al.
20060274086 December 7, 2006 Forstall et al.
20060277469 December 7, 2006 Chaudhri et al.
20060277481 December 7, 2006 Forstall et al.
20070038934 February 15, 2007 Fellman
20070044039 February 22, 2007 Amadio et al.
20070047901 March 1, 2007 Ando et al.
20070061724 March 15, 2007 Slothouber et al.
20070073641 March 29, 2007 Perry et al.
20070094418 April 26, 2007 Reisman
20070101146 May 3, 2007 Louch et al.
20070101279 May 3, 2007 Chaudhri et al.
20070101288 May 3, 2007 Forstall et al.
20070101291 May 3, 2007 Forstall et al.
20070101297 May 3, 2007 Forstall et al.
20070101433 May 3, 2007 Louch et al.
20070118813 May 24, 2007 Forstall et al.
20070130541 June 7, 2007 Louch et al.
20070162850 July 12, 2007 Adler et al.
20070177466 August 2, 2007 Ando et al.
20070203984 August 30, 2007 AlHusseini et al.
20070209013 September 6, 2007 Ramsey et al.
20070233736 October 4, 2007 Xiong et al.
20070266093 November 15, 2007 Forstall et al.
20080072157 March 20, 2008 Pally
20080109275 May 8, 2008 Cho
20080136822 June 12, 2008 Schorr et al.
20080163076 July 3, 2008 Reponen et al.
20080163102 July 3, 2008 Horiuchi
20080209008 August 28, 2008 Kim et al.
20080306933 December 11, 2008 Valliani et al.
20090007160 January 1, 2009 Wei
20090077158 March 19, 2009 Riley et al.
20100146410 June 10, 2010 Kreiner
20100211861 August 19, 2010 Onda
20110265136 October 27, 2011 Liwerant
20120221417 August 30, 2012 Riley
20130159833 June 20, 2013 Look
20130204620 August 8, 2013 Cross, Jr.
20140250369 September 4, 2014 Mitnick et al.
20150347609 December 3, 2015 Brust
Foreign Patent Documents
548586 June 1993 EP
0694879 January 1996 EP
1383080 January 2004 EP
0972273 March 2004 EP
98/045815 October 1998 WO
02/09039 January 2002 WO
04/027707 April 2004 WO
Other references
  • Gong et al., Web Visualization of Distributed Network Measurement System Based on HTML5, IEEE 2012, pp. 519-523.
  • Saund et al., Perceptually-Supported Image Editing of Text and Graphics, ACM 2003, pp. 183-192.
  • Jung et al., Annotating and Sketching on 3D Web Models, ACM 2002, pp. 95-102.
  • Johnson et al., A Scalability Study of Web-Native Information Visualization, Graphics Interface 2008, pp. 163-168.
  • Lin et al., “DENIM: Finding a Tighter Fit between Tools and Practice for Web Site Design”, ACM 2000, pp. 510-517.
  • Jern, “3D Data Visualization on the Web”, IEEE 1998, pp. 90-99.
  • Communication of the European Patent Office dated Oct. 1, 2010, for European Patent Application No. EP05766079.7, 5 pages.
  • Yee, “Peephole Displays: Pen Interaction on Spatially Aware Handheld Computers”, ACM 2003, pp. 1-8.
  • Duce et al., “Web 2D Graphics File Formats”, Google 2002, pp. 43-65.
  • Examiner's First Report on Australian Patent Application No. 2005267626, dated Feb. 10, 2010, 3 pages.
  • Nigay et al., “A Design Space for Multimodal Systems: Concurrent Processing and Data Fusion”, ACM Apr. 1993, pp. 172-178.
  • Billinghurst et al., “3D Palette: A Virtual Reality Content Creation Tool”, ACM 1997, pp. 155-156.
  • Linton et al., “Composing User Interfaces with InterViews”, IEEE 1989, pp. 8-22.
  • Stardock et al., “DesktopX General Guide”, Aug. 2000, [online] [retrieved on Jan. 31, 2008], retrieved from the internet <URL: http://www.stardock.com/products/desktopx/docs/>, 28 pages.
  • Stardock et al., “Desktop WhitePaper”, Aug. 2000, [online] [retrieved on Jan. 31, 2008], retrieved from the internet <URL: http://www.stardock.net/media/whitepaper_desktopx.html>, 19 pages.
  • Stardock et al., “DesktopX Tutorial”, Aug. 2000, [online] [retrieved on Jan. 31, 2008], retrieved from the internet <URL: http://www.stardock.com/products/desktopx/tutorial.html>, 40 pages.
  • Stardock et al., “What Can It Do? Making Objects”, DesktopX Tutorial, 2001, [online] [retrieved on Apr. 11, 2008], retrieved from the internet <URL: http://web.archive.org/web/201101922825/http://www.stardock.com/products/desktopx/ . . . >, 6 pages.
  • Snippets Software et al., “Products: Overview”, Feb. 2002, [online] [retrieved on Feb. 5, 2008], retrieved from the internet <URL: http://web.archive.org/web/20020206061508/http://www.snippets.com/products/>, 2 pages.
  • “Convert just about Anything to Anything Else”, onlineconversion.com, Aug. 2000, [online] [retrieved on Jun. 22, 2008], retrieved from the internet <URL: http://web.archive.org/web/20000815055422/http://www.onlineconversion.com/>, 2 pages.
  • Rist et al., “Customizing Graphics for Tiny Displays of Mobile Devices”, Google 2002, pp. 260-268.
  • Microsoft Corporation, User's Guide Microsoft Windows and MS-Dos 6, 1993, pp. Cover-xvi and 112-121.
  • Microsoft Corporation, Microsoft Windows User's Guide for the Windows Graphical Environment, Version 3.0 for the MS-DOS or PC-DOS Operating System, 1990, Document No. SY06851-0290, pp. cover-vi, 15-76, and 315-354.
  • Conner, D., et al., “Three-Dimensional Widgets”, ACM, 1992, pp. 183-231.
  • “Stardock News: DesktopX User Manual On-line”, 1999, 2003, [online] [retrieved May 11, 2007], retrieved from the internet <URL: http://www.stardock.com/newsitem.asp?id=538>, 2 pages.
  • “Comparison of Widget Engines”, Wikipedia, 2007 [online] [retrieved on Sep. 26, 2007], retrieved from the internet <URL: http://en.wikipedia.org/wiki/Comparison_of_widget_engines>, 5 pages.
  • “Dashboard blog”, Dec. 2003, [online] [retrieved on May 11, 2007], retrieved from the internet <URL: http://www.nat.org/dashboard/blog.php3>, 31 pages.
  • “DesktopX Whitepaper and Users Guide”, stardock.com, 1999, [online] [retrieved on May 14, 2007] retrieved from the internet <URL: http://www.stardock.net/media/whitepaper_desktopx.html>, 15 pages.
  • “Windows Sidebar”, Wikipedia, 2007 [online] [retrieved on May 29, 2007] retrieved from the internet <URL: http://en.wikipedia.org/wiki/Windows_Sidebar>, 2 pages.
  • Gruber, J., et al., “Dashboard vs. Konfabulator”, Daring Fireball, Jun. 2004, [online] [retrieved on May 11, 2007], retrieved from the internet <URL: http://daringfireball.net/2004/06/dashboard_vs_konfabulator>, 9 pages.
  • “Snippets Software Platform”, Snippet Software Inc., Jun. 2001, [online] [retrieved on Jun. 11, 2001], retrieved from the internet <URL: http://www.snippets.com/products>.
  • “Product Spotlight: Non-browser based portal solution from Snippets Software, Inc.”, Corporate Portal Letter, Oct. 2000, vol. 1, No. 10, 3 pages.
  • Altman, R. B., “Visual QuickStart Guide PowerPoint 2000/98, Applying Preset Animations”, ProQuest Safari Books, Peachpit Press, May 7, 1999, [online], retrieved from the internet <URL: http://proquest.safaribooksonline.com/0201354411>, 7 pages.
  • International Search Report and Written Opinion, PCT/US2005/022152, dated Jul. 10, 2006, 6 pages.
  • Archive of “Objects, Images and Applets”, W3C Recommendation, Feb. 10, 2003, [online] [archived by http://archive.org; retrieved on Apr. 13, 2006], retrieved from the internet <URL: http://web.archive.org/web/20030210154019/http://www.w3.org/TR/REC-html140/struct/objects.html>, 21 pages.
  • Ullenboom, C., “Java is auch eine insel”, 2003, [online] [retrieved on Apr. 13, 2006], retrieved from the internet <URL: http://www.galileocomputing.de/openbook/javainsel2/java_140000.htm#Xxx998138>, 3 pages.
  • Fried, Ina, “Developer Calls Apple's Tiger a Copycat”, CNET News.com, Jun. 28, 2004, [online] [retrieved on Jul. 1, 2004] retrieved from the internet <URL: http://zdnet.com.com/2102-1104_2-250692.html?tag=printthis>, 2 pages.
  • Fried, Ina, “For Apple's Tiger, the Keyword is Search”, CNET News.com, Jun. 28, 2004 [online] [retrieved on Jul. 1, 2004], retrieved from the internet <URL: http://zdnet.com.com/2102-1103_2-250346.html?tag=printthis>, 2 pages.
  • Haeberli, P., et al., “The Accumulation Buffer: Hardware Support for High-Quality Rendering”, Computer Graphics, Aug. 1990, vol. 24, No. 4, pp. 309-318.
  • International Search Report, PCT/US2005/006804, dated Jul. 27, 2005, 3 pages.
  • International Search Report, PCT/US2005/008805, dated Aug. 8, 2005, 3 pages.
  • Konfabulator, “Cupertino, Start Your Photocopiers”, [online] [retrieved on Jul. 1, 2004], retrieved from the internet <URL: http://www.konfabulator.com>, 1 page.
  • Konfabulator, “Konfabulator & Widget Basics”, [online] [retrieved on Jul. 1, 2004], retrieved from the internet <URL: http://www.konfabulator.com/info/basics.html>, 3 pages.
  • Konfabulator, “Screenshots”, [online] [retrieved on Jul. 1, 2004], retrieved from the internet <URL: http://www.konfabulator.com/info/screenshots.html>, 2 pages.
  • Konfabulator, “What is Konfabulator?”, [online] [retrieved on Jul. 1, 2004] retrieved from the internet <URL: http://www.konfabulator.com/info/>, 3 pages.
  • Notification of Transmittal of the International Search Report and Written Opinion of the International Searching Authority, PCT/US2005/022579, dated Jan. 27, 2006, 14 pages.
  • Puder, A., “Extending Desktop Applications to the Web”, ACM International Conference Proceedings Series, Proceedings of the 2004 International Symposium on Information and Communication Technologies, 2004, vol. 90, 6 pages.
  • Rochkind, M., et al., “Common Elements in Today's Graphical User Interfaces: The Good, the Bad, and the Ugly”, INTERCHI '93, ACM, Apr. 24-29, 1993, pp. 470-473.
  • Shiozawa, H., et al., “Perspective Layered Visualization of Collaborative Workspaces”, Proceedings of the International ACM SIGGROUP conference on Supporting Group Work Publisher, Nov. 1999, pp. 71-75.
  • Staples, L., “Representation in Virtual Space: Visual Convention in the Graphical User Interface”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Apr. 1993, 6 pages.
  • Tang, J.C., et al., “ConNexus to Awarenex: extending Awareness to mobile Users”, SIGCHI '01, ACM, Mar. 31-Apr. 4, 2001, 8 pages.
  • Wardell, B., “Konfabulator for Windows”, Jan. 10, 2004, [online] [retrieved on Mar. 5, 2006], retrieved from the internet <URL: http://www.xpthemes.com/forums.asp?MID=19&CMID=19&AID=4472>, 6 pages.
  • “Writing a Desk Accessory”, Developer Connection, apple Computer, Inc., Jul. 3, 1996, [online] [retrieved on Jan. 3, 2006], retrieved from the internet <URL: http://developer.apple.com/documentation/mac/Devices/Devices-15>, 3 pages.
  • Akeley, K., et al., “Real-Time Graphics Architecture”, The OpenGL Graphics System, CS448 Lecture 15 [online], fall 2001, retrieved from the internet <URL: http://www.graphics.stanford.edu/cources/cs448a-01-fall>, 20 pages.
  • Cadiz, J.J. et al., “Slideshow: Providing Peripheral Awareness of Important Information”, Technical Report MSR-TR-2001-83, Sep. 14, 2001, Microsoft Corporation, Redmond, WA, 9 pages.
  • Elliott, C., “Programming Graphics Processors Functionally”, The Haskell Workshop, Sep. 22, 2004, 11 pages.
  • Nvidia, “Cg-Teaching Cg”, PowerPoint Presentation, Author and date unknown, 16 pages.
  • Segal, M., et al., “The OpenGL Graphics System: A Specification (Version 1.5)”, Silicon Graphics, Inc., Oct. 30, 2003, 334 pages.
  • Shantzis, M.A., “A Model for Efficient and Flexible Image Computing”, Computer Graphics Proceedings, Annual Conference Series, Jul. 24-29, 1994, Orlando, FL, pp. 147-154.
  • Van Gelder, A., et al., “Direct Volume Rendering with Shading Via Three-Dimensional Textures”, University of California, Santa Cruz, 1996, 9 pages.
  • Munzner, Exploring Large Graphs in 3D Hyperbolic Space, IEEE 1998, pp. 18-23.
  • Bederson et al., Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java, ACM 2000, pp. 171-180.
Patent History
Patent number: 10387549
Type: Grant
Filed: Sep 27, 2016
Date of Patent: Aug 20, 2019
Patent Publication Number: 20170116162
Assignee: Apple Inc. (Cupertino, CA)
Inventors: Richard Williamson (Los Gatos, CA), David Hyatt (Mountain View, CA), John Louch (San Luis Obispo, CA)
Primary Examiner: Cong-Lac Huynh
Application Number: 15/277,941
Classifications
Current U.S. Class: Add On Item (e.g., Software Developed, Customized) (715/826)
International Classification: G06F 17/00 (20190101); G06F 17/22 (20060101); G06F 17/21 (20060101); G06T 11/20 (20060101); G06F 16/958 (20190101);