Method to quantitatively analyze a model
A method of quantitatively analyzing results of a model. At least one image of a model is generated in a post-processor. A calibration mechanism of known dimensions is generated in the post-processor. The calibration mechanism is read into an analysis software. The image is read into the analysis software. The image from the post-processor is analyzed in the analysis software quantitatively using the calibration mechanism.
This application claims the benefit of U.S. Provisional Application No. 60/550,479, filed Mar. 5, 2004 and U.S. Provisional Application No. 60/550,490 filed Mar. 5, 2004.
FIELD OF THE INVENTION
The present invention relates generally to quantitative analysis of images from a post-processor, and more particularly to quantitative analysis of images from a post-processor using a calibration mechanism in the analysis software.
BACKGROUND OF THE INVENTION
Computer simulations of motion, e.g., using FEA, have long been used to model and predict the behavior of systems, particularly dynamic systems. Such systems utilize mathematical formulations to calculate structural volumes under various conditions based on fundamental physical properties. Various methods are known to convert a known physical object into a grid, or mesh, for performing finite element analysis, and various methods are known for calculating interfacial properties, such as stress and strain, at the intersection of two or more modeled physical objects.
Use of computer simulations such as computer aided modeling in the field of garment fit analysis is known. Typically, the modeling involves creating a three-dimensional (hereinafter “3D”) representation of the body, such as a woman, and a garment, such as a woman's dress, and virtually representing a state of the garment when the garment is actually put on the body. Such systems typically rely on geometry considerations, and do not take into account basic physical laws. One such system is shown in U.S. Pat. No. 6,310,627, issued to Sakaguchi on Oct. 30, 2001.
Another field in which 3D modeling of a human body is utilized is the field of medical device development. In such modeling systems, geometry generators and mesh generators can be used to form a virtual geometric model of an anatomical feature and a geometric model of a candidate medical device. Virtual manipulation of the modeled features can be output to stress/strain analyzers for evaluation. Such a system and method are disclosed in WO 02/29758, published Apr. 11, 2002 in the names of Whirley, et al.
Further, U.S. Pat. No. 6,810,300, issued to Woltman, et al. Oct. 26, 2004, discloses a method of designing a product for use on a body using a preferred product configuration.
While methods of designing products using computer simulations of motion are well known, the analysis available in these methods is typically bound by the limited capabilities intrinsic to a post-processor. For example, post-processors are unable to automatically measure areas in slice planes between two arbitrary surfaces. Furthermore, measurements of distances are typically reliant on distances between nodes in a simulation. Existing image analysis software has been developed with a wide range of capabilities deficient in post-processors. However, the ability to directly couple post-processing software with image analysis software is unknown. More specifically, there is a need to allow critical information to be passed from one software package to another.
SUMMARY OF THE INVENTION
A method to quantitatively analyze results of a model is disclosed. The method comprises the steps of:
- generating at least one image of said model in a post-processor;
- generating a calibration mechanism of known dimensions in said post-processor;
- reading said calibration mechanism into an analysis software;
- reading said image into said analysis software; and
- analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism.
A plurality of images may be generated in the post-processor. The model may be a product being worn on, in or adjacent to a body. The product may be an absorbent article. The absorbent article may be a sanitary napkin, pantiliner, incontinent pad, tampon, diaper, or breast pad. The calibration mechanism may be a calibration image, such as a box. The analysis software may be image analysis software.
Additionally, a method of analyzing physical test results in a virtual environment is disclosed. The method comprises the steps of:
- replicating at least one physical specimen in digital form to define a series of points;
- reading said points into a post-processor;
- generating at least one image from said series of points in said post-processor;
- generating a calibration mechanism of known dimensions in said post-processor;
- reading said calibration mechanism into an analysis software;
- reading said image into said analysis software; and
- analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism.
The step of converting said series of points into a format that can be read into a post-processor is carried out after the step of replicating at least one physical specimen in digital form to define a series of points. The physical specimen is a product capable of being worn on, in or adjacent to a body. The product may be an absorbent article, such as a sanitary napkin, pantiliner, incontinent pad, tampon, diaper, or breast pad.
The step of aligning said series of points with at least a second series of points can be carried out after the step of replicating at least one physical specimen in digital form to define a series of points.
Furthermore, a method for calculating a spatial relationship between at least two objects is disclosed. The method comprises the steps of:
- providing a model;
- generating results by running said model;
- reading said model results into a post-processor;
- generating at least one image of said model in a post-processor;
- generating a calibration mechanism of known dimensions in said post-processor;
- reading said calibration mechanism into an image analysis software;
- reading said image into said image analysis software;
- analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism; and
- calculating the spatial relationship between said at least two objects using the quantitative analysis of the step of analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism.
At least one of the objects can be a human body and at least one of said objects can be a product being worn on, in or adjacent to a body. The spatial relationship can be an area, a volume or a distance between the objects.
BRIEF DESCRIPTION OF THE DRAWINGS
Referring now to the drawings and in particular to
In step 24 a plurality of images are generated in the post-processor. There is no limit on the number of images that can be generated in the post-processor. The model may be a product, such as an absorbent article, worn on, in or adjacent to a human body. The absorbent articles may be sanitary napkins, pantiliners, incontinent pads, tampons, diapers, or breast pads.
In step 26 the calibration mechanism may be a calibration image, such as a box. Other calibration images include but are not limited to bars, lines, parallel lines, rectangles, circles, triangles, unique shapes created for specialized applications and other conventional shapes.
In step 28 the calibration mechanism is preferably read into image analysis software.
Examples of suitable post-processing software include but are not limited to ABAQUS CAE (ABAQUS Inc., Pawtucket, R.I.), LSPrePost (Livermore Software Technology Corporation, Livermore, Calif.), Hyperview (Altair Engineering, Troy, Mich.), Fieldview (Intelligent Light, Rutherford, N.J.), and EnSight (Computational Engineering International, Apex, N.C.).
When developing products such as sanitary napkins it is desirable to understand the fit of the product as it relates to the closeness of the product to the human body. One approach to understanding the fit of the product as it relates to the closeness of the product to the human body is to measure the gap between the product and the body at select locations.
In step 42, model/simulation results are generated. A process to arrive at the simulation results is described in U.S. Pat. No. 6,810,300 issued Oct. 26, 2004 to Woltman et al., or commonly-assigned co-pending application Ser. No. 60/550,479 filed Mar. 5, 2004 in the name of Anast et al.
The model results are loaded into post-processing software, step 43. Examples of suitable post-processing software include but are not limited to ABAQUS CAE (ABAQUS Inc., Pawtucket, R.I.), LSPrePost (Livermore Software Technology Corporation, Livermore, Calif.), and Hyperview (Altair Engineering, Troy, Mich.).
One known capability of post-processing software is the ability to use a repeated set of commands to drive a series of steps in the software, called scripting, with the repeated set of commands commonly called a script. In one such embodiment, an LSPrePost script can be used to visualize the simulation results of a product against a body at a series of different locations and angles in space, see
While the generation of images from a simulation is well known, the ability to couple such images into image analysis software was unknown. More specifically, there is a need to allow critical information to be passed from one software package to another. This is accomplished by the generation of a calibration mechanism, step 46. In one such embodiment, the calibration mechanism is created in LSPrePost, in which a cross-sectional two-dimensional image of a box of known dimensions is generated and saved to a file format, shown in
Additional images in the post-processor can be generated at the same level of magnification (zoom) and so share the common calibration factors and aspect ratio. It is possible to repeat this process such that different angles or different zooms are considered.
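The calibration computation described above can be sketched in a few lines. The patent's scripts are written in MatLab; the Python below is purely illustrative, and the image size, box placement, and box dimensions are hypothetical. The box of known physical size is located in the image and per-pixel scale factors in x and y are derived:

```python
import numpy as np

def calibration_factors(cal_img, box_width_mm, box_height_mm):
    """Derive mm-per-pixel scale factors from a calibration image.

    cal_img is a 2D array in which the calibration box pixels are
    non-zero; the box's known physical dimensions are given in mm.
    """
    ys, xs = np.nonzero(cal_img)
    width_px = xs.max() - xs.min() + 1    # box extent in pixels, x
    height_px = ys.max() - ys.min() + 1   # box extent in pixels, y
    return box_width_mm / width_px, box_height_mm / height_px

# Hypothetical 100x200 image containing a 20x40-pixel box that is
# known to be 5 mm tall and 10 mm wide in model coordinates.
img = np.zeros((100, 200))
img[30:50, 60:100] = 1
fx, fy = calibration_factors(img, box_width_mm=10.0, box_height_mm=5.0)
# fx and fy are each 0.25 mm per pixel here
```

Because subsequent images are generated at the same zoom, the same factors apply to every slice, which is what makes a single calibration image sufficient.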
Other forms of calibration might include producing other reference objects for example, a set of parallel lines or a bar, using a fixed predefined pixel size, saving the image data directly in real world coordinates, or passing the calibration parameters as an output text file. Alternatively, one can imagine using an inherent or artificial object within a given product performance model for such a purpose. Examples include a simulation being run with a square or cube embedded in the simulation as an artificial object, or using a fixed reference internal to a body such as femur width as an inherent object.
In addition, various predetermined colors (digital gray scales in this example, but any other colors could be used) are assigned to each element in the simulation; for example, the body skin is black (zero gray scale), the pad edge is a shade of gray (128 gray scale), etc. The differentiation of materials based on gray scale level provides a means to separate the components in the resulting 2D images during the image analysis steps described next. The output images are saved in a common graphics file format, in this example PNG. Other forms of data output could be used, such as TIF or JPEG image format files, a text file ASCII representation of the data, a binary format raw data file, or a 3D image format such as VRML or stereolithography.
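The gray-scale coding can be exploited directly during analysis: because each component is rendered at a known gray level, a simple equality test on the pixel values yields one mask per component. A minimal Python sketch (illustrative only; the body and pad gray levels match the example in the text, while the background value and the tiny image are assumptions):

```python
import numpy as np

# Gray levels as in the example: body skin = 0, pad edge = 128;
# 255 is assumed here for the background.
BODY, PAD, BACKGROUND = 0, 128, 255

img = np.full((4, 6), BACKGROUND, dtype=np.uint8)
img[3, :] = BODY       # body surface along the bottom row
img[0, 1:5] = PAD      # pad edge along part of the top row

body_mask = img == BODY   # each component separated by its gray level
pad_mask = img == PAD

# The pad's x extent limits the width of the analysis region.
pad_cols = np.flatnonzero(pad_mask.any(axis=0))
x_min, x_max = pad_cols[0], pad_cols[-1]
```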
Once the series of image data files and the calibration mechanism are generated, they are imported into image analysis software, steps 48 and 50. Examples of suitable image analysis software include but are not limited to MatLab (The Mathworks, Natick, Mass.), Optimas (Media Cybernetics, Silver Spring, Md.) and ImagePro (Media Cybernetics, Silver Spring, Md.). Other data analysis software packages would also be suitable to import and process this type of data, provided the data files are compatible with the software through either a built-in reader or a custom programmed reader, typical of what can be done within MatLab for uncommon data files. MatLab contains a PNG reader and is used to import the LSPrePost images directly into MatLab's workspace. Once the images are read into the software they are analyzed, step 52. A quantitative data report is provided from the analysis, step 54. The data is interpreted and correlated with consumers, step 56.
In one example, a custom image processing script was written in MatLab to: 1) read all the images from LSPrePost, 2) calibrate the images, 3) measure the pad-body gap area, and 4) save the results to a graphical and a text file representation for review.
1) Calibration. The calibration image,
2) For each calibrated image, the extent of the pad is identified for limiting the width of the analysis region. This can be accomplished in a number of ways. One procedure defines the minimum and maximum x coordinates of the particular gray scale associated with the pad, as described previously.
3) For each image, a vertical line scanning algorithm is used to find the lower body surface and the upper product surface. In sub-step 3-1), image pixels between these vertical points and within the defined analysis region, step 2), are accepted for inclusion in the gap area. Two additional sub-steps are used to fill in any gap areas missed when scanning vertically. Sub-step 3-2) scans horizontally, finding pixels between the gap boundary and any pixels identified above. Sub-step 3-3) scans vertically again in a manner similar to 3-2) to more finely identify remaining gap pixels. Alternative methods, generally known as seed fill methods, could equally well be used. Products that have holes or slits in their upper surfaces cause some trouble in this analysis, causing the pixel selection to leak into the interior of the product (not desirable), and seed fill routines will fail badly in such cases. To overcome this problem, one may limit the leakage by allowing the operator to remove sub-steps 3-2) and 3-3) from the analysis and note this correction in the results file. A more suitable approach would be to close holes in the pad surface within the image using appropriate image processing techniques. The calibrated pixel areas for the above steps are summed to determine the gap area.
4) A report text file is generated that records the product name, software version, computer platform, time and date of analysis, location of the source images, calibration factors, image names, gap areas in mm², and the selection steps used for each image. In addition, a copy of each input PNG image is generated with the gap area colored or shaded. This provides a visual record of the gap definition as shown in
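Sub-step 3-1) of the script above, the vertical scan that accepts pixels lying between the upper product surface and the lower body surface, can be sketched as follows. This is an illustrative Python rendering (the actual script was written in MatLab), with hypothetical masks and calibration factors:

```python
import numpy as np

def gap_area_mm2(body_mask, pad_mask, fx, fy):
    """Sum the calibrated area of pixels between the product surface and
    the body surface, column by column within the pad's x extent."""
    area_px = 0
    pad_cols = np.flatnonzero(pad_mask.any(axis=0))
    for x in range(pad_cols[0], pad_cols[-1] + 1):
        body_rows = np.flatnonzero(body_mask[:, x])
        pad_rows = np.flatnonzero(pad_mask[:, x])
        if body_rows.size and pad_rows.size:
            top = pad_rows.max()       # lowest pad pixel (y grows downward)
            bottom = body_rows.min()   # highest body pixel
            if bottom > top + 1:
                area_px += bottom - top - 1   # pixels strictly between
    return area_px * fx * fy   # calibrated area in mm^2

# Hypothetical 6x5 slice: pad along row 0, body along row 4,
# leaving a 3-pixel-deep gap in every one of the 5 columns.
body = np.zeros((6, 5), dtype=bool); body[4, :] = True
pad = np.zeros((6, 5), dtype=bool); pad[0, :] = True
area = gap_area_mm2(body, pad, fx=0.5, fy=0.5)   # 15 px -> 3.75 mm^2
```

The horizontal and second vertical scans of sub-steps 3-2) and 3-3), or a seed fill, would then pick up gap pixels this single pass misses.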
A method 70 for analyzing physical test results in a virtual environment is shown in
A sanitary napkin or feminine pad is placed on a human female body and a first cast of the sanitary napkin on the human female is made, step 72. The first cast is removed and a second cast of the human female body is made, step 74. In a preferred execution the second cast is in close proximity to, and most preferably in direct contact with, the body. The casts form the physical specimens.
The first and second casts are replicated in digital form to define a series of points, steps 76 and 78. The series of points can be connected to form a series of lines or a surface. In one example, a virtual surface of a product or panty is created using a 3D digitizing arm (MicroScribe/Immersion Corporation, San Jose, Calif., USA). The digitizing arm is connected to a computer equipped with a software program that supports modeling via reverse engineering such as Rhinoceros (Robert McNeel & Associates, Seattle, Wash.). A calibration process orients the digitizing arm in the real world with the coordinate system in the modeling software and is described with the equipment operating instructions.
The digitizer uses XYZ coordinates from a stylus on the digitizer arm to create a 3D wire frame model of the surface. This is accomplished by moving the stylus across the surface and capturing points. In one such embodiment, points can be captured along an axis of the surface, the points being taken along a series of sequential lines that are spaced across the surface; the number of lines and points within each line is determined by the level of detail to be captured. These sequential lines can be lofted within the software to generate the 3D surface. The Rhinoceros software offers a variety of file formats in which to save this 3D surface; for example, the file can be saved as a stereolithography file in ASCII format.
When two or more series of points are replicated in digital form it is often desirable that they be aligned with respect to one another, step 80. This alignment process can be done manually, based on the digital surface profile, or with additional physical data.
In one embodiment, reference markers are placed on the body and transferred to the cast during the casting process. Any number of reference markers may be used and the markers may be positioned as desired. The reference markers are separately digitized as a series of marker points and saved as a text file. The reference markers associated with each series of digitized points can be used to align the two series of points with respect to each other using techniques such as least squares methods, residual minimization and the like. The two series of aligned points are saved together in a file in the aligned position.
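The marker-based alignment can be implemented with standard least-squares rigid registration (the Kabsch method), one instance of the least squares techniques mentioned above. A Python sketch with hypothetical marker coordinates:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping
    marker points src onto dst; both are (N, 3) arrays (Kabsch method)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - src_mean).T @ (dst - dst_mean))
    d = np.sign(np.linalg.det(u @ vt))       # guard against reflections
    rot = (u @ np.diag([1.0, 1.0, d]) @ vt).T
    return rot, dst_mean - rot @ src_mean

# Hypothetical markers: dst is src rotated 90 degrees about z, then shifted.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
true_rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
dst = src @ true_rot.T + np.array([2.0, 0.5, -1.0])

rot, trans = rigid_align(src, dst)
aligned = src @ rot.T + trans    # recovers dst to numerical precision
```

With noisy real markers the recovered transform minimizes the residual sum of squares rather than matching exactly, which is the behavior the alignment step relies on.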
While the above process is performed for the surface of a product and for the surface of a body, it is understood that the two surfaces could correspond to any variety of surfaces including pad to pad, body to body, pad to undergarment, undergarment to body, and the like. It is also understood that any two or more surfaces could be considered in such an analysis. In addition, a mannequin could be used to interact with the product, with the digital representation of the mannequin used versus the digitized product; the mannequin data may result from digitizing the mannequin as described above for a real body, or directly from the digital data from real humans used to manufacture the mannequin.
Digitized models of the body and product can be obtained by digitizing the casts as above, or could alternatively be captured directly from the body or stabilized product using any number of digitizing instruments, for example an Inspeck (Montreal, Canada) Capturer optical non-contact digitizer. One typical common output of these instruments is a stereolithography file as previously described. The body or product could also derive in part from a surface used in a simulation as previously discussed, in mesh form or in another file format, or from any combination of a real digitized surface and a virtually created or analyzed surface.
The series of points are read directly into a post-processor, step 82. When there are two or more aligned series of points, they can also be read directly into a post-processor. In some cases it is necessary to convert the series of points into a format that can be read into a post-processor. In one such case a stereolithography file of the aligned surfaces is read directly into LSPrePost, shown in
A known capability of post-processing software is the ability to create a series of images and save the images to a file format such as a JPEG file, step 84. Another well known feature is the ability to generate a cross-sectional two-dimensional image, called a slice, through any combination of objects.
A calibration mechanism is generated in step 86. In one such embodiment, the calibration mechanism is created in LSPrePost, in which a cross-sectional two-dimensional image of a box of known dimensions is generated and saved to a file format, shown in
In addition, various predetermined colors (digital gray scales in this example, but any other colors could be used) are assigned to each series of points; for example, the body is black (zero gray scale), the pad is a shade of gray (128 gray scale), etc. The differentiation of materials based on gray scale level provides a means to separate the components in the resulting 2D images during the image analysis steps described next. The output images are saved in a common graphics file format, in this example PNG.
Once the series of image data files and the calibration mechanism are generated, they are imported into image analysis software, steps 88 and 90. Once the images are read into the software they are analyzed, step 92. A quantitative data report is provided from the analysis, step 94. The data is interpreted and correlated with consumers, step 96.
In one example, a custom image processing script was written in MatLab to: 1) read all the images from LSPrePost, 2) calibrate the images, 3) measure the pad-body gap area, and 4) save the results to a graphical and a text file representation for review.
1. Calibration. The calibration image,
2. For each calibrated image, e.g.,
3. For each image, a vertical line scanning algorithm is used to find the lower body surface and the upper product surface. In sub-step 3-1), image pixels between these vertical points and within the defined analysis region, step 2), are accepted for inclusion in the gap area. Two additional sub-steps are used to fill in any gap areas missed when scanning vertically. Sub-step 3-2) scans horizontally, finding pixels between the gap boundary and any pixels identified above. Sub-step 3-3) scans vertically again in a manner similar to 3-2) to more finely identify remaining gap pixels. Alternative methods, generally known as seed fill methods, could equally well be used. Products that have holes or slits in their upper surfaces cause some trouble in this analysis, causing the pixel selection to leak into the interior of the product (not desirable), and seed fill routines will fail badly in such cases. To overcome this problem, one may limit the leakage by allowing the operator to remove sub-steps 3-2) and 3-3) from the analysis and note this correction in the results file. A more suitable approach would be to close holes in the pad surface within the image using appropriate image processing techniques. The calibrated pixel areas for the above steps are summed to determine the gap area.
4. A report text file is generated that records the product name, software version, computer platform, time and date of analysis, location of the source images, calibration factors, image names, gap areas in mm², and the selection steps used for each image. In addition, a copy of each input PNG image is generated with the gap area colored or shaded. This provides a visual record of the gap definition as shown in
In a separate embodiment, instead of considering the product fit against the body, one can measure a series of characteristics of a pad fit against an undergarment. This is of particular utility when considering the performance of sanitary napkins or feminine care products having wings. The method 120 of calculating a spatial relationship between at least two objects is shown in
In step 122, model/simulation results are generated. A process to arrive at the simulation results is described in U.S. Pat. No. 6,810,300 issued Oct. 26, 2004 to Woltman et al., or commonly-assigned co-pending application Ser. No. 60/550,479 filed Mar. 5, 2004 in the name of Anast et al.
In one example, results from a model of a feminine protection pad with wings applied to a panty using a virtual hand are generated,
The model results are loaded into post-processing software, step 123. Examples of suitable post-processing software include but are not limited to ABAQUS CAE (ABAQUS Inc., Pawtucket, R.I.), LSPrePost (Livermore Software Technology Corporation, Livermore, Calif.), and Hyperview (Altair Engineering, Troy, Mich.).
One known capability of post-processing software is the ability to use a repeated set of commands to drive a series of steps in the software, called scripting, with the repeated set of commands commonly called a script. In one such embodiment, an LSPrePost script can be used to visualize the simulation results of a product against a panty at a series of different locations and angles in space, see
A calibration mechanism is generated in step 126. In one such embodiment, the calibration mechanism is created in LSPrePost, in which a cross-sectional two-dimensional image of a box of known dimensions is generated and saved to a file format, shown in
In addition, various predetermined colors (digital gray scales in this example, but any other colors could be used) are assigned to each element in the simulation; for example, the outline of the undergarment is black (zero gray scale), the pad is a shade of gray (128 gray scale), etc. The differentiation of materials based on gray scale level provides a means to separate the components in the resulting 2D images during the image analysis steps described next. The output images are saved in a common graphics file format, in this example PNG.
Once the series of image data files and the calibration mechanism are generated, they are imported into image analysis software, steps 128 and 130. Once the images are read into the software they are analyzed, step 132. A quantitative data report is provided from the analysis, step 134. The data is interpreted and correlated with consumers, step 136.
In one such example, a custom image processing script was written in MatLab to 1) read all the images including the calibration mechanism from LSPrePost, 2) calibrate the images using the calibration mechanism, 3) composite the needed images as required by each measurement in a list of measurements, 4) calculate each measurement, and 5) save the results to a graphical and a text file representation for review. Each measurement is described in the steps below.
1) A text file is created with a header describing the version of the analysis code used, the date and time of analysis, the computer platform used to run the analysis, and the location of the source images.
2) Calibration. The calibration image,
3) Adhesive Patch Analysis. This requires just a single image of the adhesive patches, see
4) Adhesive Patch Gap Analysis. The image of
5) Wing Area and Gap Analysis. A single image of the wings only is used as shown in
6) Panty Elastic Wing Gap. A composite image of the panty elastic edge and wing is used here, see
7) Panty Elastic Length. Two images are needed: a) panty elastic edge only, see
8) The adhesive patch gaps, wing gap, and panty elastic gaps are tabulated vs. MD position and appended to the text report.
9) The calibration factors (X and Y directions) for the conversion of pixels into distances and areas in mm or mm² are appended to the text report for documentation and the text file is closed.
10) All the graphical plots are programmatically placed in one figure and the figure is written to storage as a JPEG file. An example results file is provided in
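The area measurements in steps 3) and 5) reduce, once a component has been isolated by its gray level, to counting mask pixels and applying the calibration factors. A hypothetical Python sketch (the actual script was written in MatLab; the mask dimensions are invented for illustration):

```python
import numpy as np

def region_area_mm2(mask, fx, fy):
    """Calibrated area of an isolated component: pixel count times the
    per-pixel x and y scale factors from the calibration image."""
    return int(mask.sum()) * fx * fy

# Hypothetical 10x10 slice with a 4x6-pixel adhesive patch.
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:9] = True
area = region_area_mm2(mask, fx=0.5, fy=0.5)   # 24 px -> 6.0 mm^2
```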
The above method is not limited to virtual model images, but could equally well be used with images generated from digitized models of real physical prototypes, applied on real garments by real humans, additionally but not necessarily involving real bodies. Digitized models could include 3D geometry as described in method 70 or by taking digital pictures of the product in place. Calibration of digital pictures would entail adding a known length object in the image field of view, such as a precision rule as routinely practiced in the art, and using manual or image analysis techniques to locate and calibrate the image based on the known calibration marks of the object. Equally any object of known dimension could be used as a calibration source, including the products or garments themselves provided known features are not distorted in the image. Identification of features, using image analysis techniques, within the image could be accomplished in a number of ways, the goal of which is to provide sufficient contrast to isolate features of interest. These include but are not limited to: 1) manually pre-indicating the features using a highlighting method such as colored ink markers or paint, 2) using colored raw materials and/or colored garments to provide sufficient contrast, 3) using fluorescent dyes or native material fluorescence (for example adhesives naturally fluoresce) and UV illumination, 4) edge extraction, edge and/or pattern correlation or similar image analysis techniques, or 5) any combination of the above techniques. This feature identification can then be coupled with the procedures outlined in the previous example above to provide quantified output, in a manner and for use as discussed in the example.
The above described processes can be performed over a range of products, bodies, garments, usage conditions, quantifying the results for each case. In doing so, a population of statistical measurements will be created which can be analyzed by known statistical techniques such as design of experiments, linear regression, significant difference and optimization to name a few.
The above described processes can be used to improve the performance of existing products, to design new products, to evaluate new concepts, and to optimize designs. Furthermore, an initial design can be analyzed using one of the methods above, the analysis results can be used to iterate the design, and the process subsequently repeated. This enables a rapid development cycle for product design.
All documents cited in the Detailed Description of the Invention are, in relevant part, incorporated herein by reference; the citation of any document is not to be construed as an admission that it is prior art with respect to the present invention.
While particular embodiments of the present invention have been illustrated and described, it would be obvious to those skilled in the art that various other changes and modifications can be made without departing from the spirit and scope of the invention. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this invention.
Claims
1. A method of quantitatively analyzing results of a model comprising the steps of:
- a) generating at least one image of said model in a post-processor;
- b) generating a calibration mechanism of known dimensions in said post-processor;
- c) reading said calibration mechanism into an analysis software;
- d) reading said image into said analysis software; and
- e) analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism.
2. The method of claim 1, wherein a plurality of images are generated in said post-processor.
3. The method of claim 1, wherein said model is a product being worn on, in or adjacent to a body.
4. The method of claim 3, wherein said product is an absorbent article.
5. The method of claim 4, wherein said absorbent article is selected from the group consisting of sanitary napkins, pantiliners, incontinent pads, tampons, diapers, and breast pads.
6. The method of claim 1, wherein said calibration mechanism is a calibration image.
7. The method of claim 6, wherein said calibration image is a box.
8. The method of claim 1, wherein said analysis software is image analysis software.
9. A method of analyzing physical test results in a virtual environment comprising the steps of:
- a) replicating at least one physical specimen in digital form to define a series of points;
- b) reading said points into a post-processor;
- c) generating at least one image from said series of points in said post-processor;
- d) generating a calibration mechanism of known dimensions in said post-processor;
- e) reading said calibration mechanism into an analysis software;
- f) reading said image into said analysis software; and
- g) analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism.
10. The method of claim 9, wherein the step of converting said series of points into a format that can be read into a post-processor is carried out after step a).
11. The method of claim 9, wherein the physical specimen is of a product capable of being worn on, in or adjacent to a body.
12. The method of claim 11, wherein said product is an absorbent article.
13. The method of claim 12, wherein said absorbent article is selected from the group consisting of sanitary napkins, pantiliners, incontinent pads, tampons, diapers, and breast pads.
14. A method of analyzing physical test results in a virtual environment comprising the steps of:
- a) replicating at least one physical specimen in digital form to define a series of points;
- b) aligning said series of points with at least a second series of points;
- c) reading said aligned points into a post-processor;
- d) generating at least one image from said aligned points in said post-processor;
- e) generating a calibration mechanism of known dimensions in said post-processor;
- f) reading said calibration mechanism into an analysis software;
- g) reading said image into said analysis software; and
- h) analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism.
15. The method of claim 14, wherein the step of converting said series of points into a format that can be read into a post-processor is carried out after step a).
16. The method of claim 14, wherein at least one of said series of points represents a product being worn on a body.
17. The method of claim 16, wherein said product is an absorbent article.
18. The method of claim 17, wherein said absorbent article is selected from the group consisting of sanitary napkins, pantiliners, incontinent pads, tampons, diapers, and breast pads.
19. A method for calculating a spatial relationship between at least two objects, said method comprising the steps of:
- a) providing a model,
- b) generating model results by running said model,
- c) reading said model results into a post-processor,
- d) generating at least one image of said model in the post-processor,
- e) generating a calibration mechanism of known dimensions in said post-processor,
- f) reading said calibration mechanism into an image analysis software,
- g) reading said image into said image analysis software,
- h) analyzing said image from said post-processor in said analysis software quantitatively using said calibration mechanism, and
- i) calculating the spatial relationship between said at least two objects using the quantitative analysis of step h).
20. The method of claim 19, wherein at least one of said objects is a human body and at least one of said objects is a product being worn on, in or adjacent to a body.
21. The method of claim 19, wherein said spatial relationship is an area in at least one of said images between at least two of said objects.
22. The method of claim 19, wherein said spatial relationship is a volume between at least two of said objects.
23. The method of claim 19, wherein said spatial relationship is a distance between at least two points on said objects.
24. The method of claim 23, wherein at least two distances are calculated and capable of being plotted in a graph.
25. The method of claim 24, wherein said graph is of a profile through time, space, distance, or location.
26. The method of claim 19, wherein at least one of said objects is a sanitary napkin and at least one of said objects is a woman's undergarment.
Type: Application
Filed: Mar 4, 2005
Publication Date: Dec 1, 2005
Inventors: John Anast (Fairfield, OH), Matthew Macura (Mariemont, OH), Bruce Lavash (West Chester, OH), Marianne Brunner (Cincinnati, OH), Tana Kirkbride (Cincinnati, OH)
Application Number: 11/071,917