UNIVERSAL COLLABORATIVE PSEUDO-REALISTIC VIEWER

- RDV SYSTEMS LTD.

A computer-readable medium containing instructions for controlling an electronic device to perform a method of visualization, the method constituted of: loading a 3 dimensional (3D) scene comprising visual model data; rendering a first pseudo-realistic image of the loaded 3D scene responsive to a first view positional indicator; transmitting the rendered first pseudo-realistic image to at least two remote computing platforms; receiving from any of the at least two remote computing platforms a scene control command; rendering a second pseudo-realistic image of the loaded 3D scene responsive to the received scene control command; and transmitting the rendered second pseudo-realistic image to the at least two remote computing platforms.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority from U.S. Provisional Patent Application 61/100,734 filed Sep. 28, 2008, entitled “Pseudo-Realistic Rendering of BIM Data Responsive to Positional Indicator”, the entire contents of which is incorporated herein by reference.

BACKGROUND

The invention relates generally to the field of visual modeling, and in particular to a method and apparatus providing collaborative pseudo-realistic rendering of a visual model responsive to user inputs.

Building information modeling is the process of generating and managing building data during the building's life cycle. Typically it uses three-dimensional, dynamic building modeling software to increase productivity in building design and construction. The term building design and construction is not limited to physical dwellings and/or offices, but is meant to additionally include any construction project including, without limitation, road and infrastructure projects. The process produces a Building Information Model (BIM), which as used herein comprises building geometry, spatial relationships, geographic information, and quantities and properties of building components, irrespective of whether we are dealing with a physical building or a general construction project including land development and infrastructure.

The use of interactive and dynamic 3D computer graphics is becoming prevalent in the computing world. Typically, 3D visualization applications provide photo-realistic results using techniques such as ray tracing, radiosity, global illumination and other shading, shadowing and light reflection techniques. Such 3D visualization applications provide a 3D generated model, without relationship to the existing environment.

U.S. patent application Ser. No. 11/538,103 to Elsberg et al, entitled “Method and Apparatus for Virtual Reality Presentation of Civil Engineering, Land Planning and Infrastructure”, published as US 2007/0078636 A1, the entire contents of which is incorporated herein by reference, is addressed to a computer implemented method of visualizing an infrastructure, in which the rendering is accomplished in cooperation with a material definition. Such a method allows for evaluating large scale designs in a virtual reality environment, in which the virtual reality rendering exhibits a pseudo-realistic image, defined herein as an image which comprises at least one of shading, texturing, illumination and shadowing based on real world parameters.

Rapid Design Visualization is a software application available from RDV Systems, Ltd. of Lod, Israel, which enables any Civil 3D user to create a fully interactive visualization environment directly from their Civil 3D project. Civil 3D is a software BIM solution for the field of civil engineering available from Autodesk, Inc. of San Rafael, Calif. The Rapid Design Visualization software enables a Civil 3D designer to easily create drive-through simulations, flyovers and interactive simulations for proposed roads, subdivisions, underground infrastructure, interchanges and many other complex land development projects. Such an interactive simulation enables a potential user, developer, or investor, to visualize a Civil 3D project in an office environment.

The above discussion has been primarily focused on BIM applications; however, this is not meant to be limiting in any way, and the discussion is equally applicable to any visual model data.

Freewheel software available from Autodesk, Inc. of San Rafael, Calif., provides the ability to upload design data to a remote server. Any user allowed access can then utilize a variety of navigation tools to review the design. Advantageously, no software download is required, since the requirement to download and install software is an often unbridgeable barrier in today's IT environment. Disadvantageously, collaborative tools are not provided.

What is desired, and not provided by the prior art is a method and apparatus providing full active collaboration for any visual model, preferably without requiring a software download and installation.

SUMMARY

Accordingly, it is a principal object of the present invention to overcome at least some of the disadvantages of prior art collaborative visualization techniques. This is accomplished in certain embodiments by providing a server comprising a computer-readable medium containing instructions for controlling an electronic device to perform a method of visualization, the method comprising: loading a 3 dimensional (3D) scene comprising visual model data; rendering a first pseudo-realistic image of the loaded 3D scene responsive to a first view positional indicator; transmitting the rendered first pseudo-realistic image to at least two remote computing platforms; receiving from any of the at least two remote computing platforms a scene control command; rendering a second pseudo-realistic image of the loaded 3D scene responsive to the received scene control command; and transmitting the rendered second pseudo-realistic image to the at least two remote computing platforms.

Thus, the server is arranged to provide a full active collaborative interactive viewing experience between disparate devices. The term active collaboration is meant to include arrangements wherein various users may simultaneously interact with the 3D model, with each interaction displayed to all of the various users. In an exemplary embodiment, one of the computing platforms is a cellular telephone, and another of the computing platforms is a portable computer. Either of the devices can provide input to the shared collaborative viewing experience. In one particular embodiment, a third computing platform is provided, the third computing platform arranged to generate the 3D scene internally responsive to positional indicator information.

In certain embodiments a computer-readable medium containing instructions for controlling an electronic device to perform a method of visualization is provided, the method comprising: loading a 3 dimensional (3D) scene comprising visual model data; rendering a first pseudo-realistic image of the loaded 3D scene responsive to a first view positional indicator; transmitting the rendered first pseudo-realistic image to at least two remote computing platforms; receiving from any of the at least two remote computing platforms a scene control command; rendering a second pseudo-realistic image of the loaded 3D scene responsive to the received scene control command; and transmitting the rendered second pseudo-realistic image to the at least two remote computing platforms.

In certain further embodiments the scene control command comprises a second view positional indicator different from the first view positional indicator. In certain yet further embodiments, the method further comprises: performing an analysis of at least one criterion of the visual model data responsive to each of the first and second positional indicators; and transmitting at least one result of the performed analysis in concert with the respective transmitted rendered first and second pseudo-realistic images.

In certain embodiments the scene control command comprises one of turning off at least one object of the 3D scene, changing illumination of at least a portion of the 3D scene and changing a material type for at least one object of the 3D scene. In yet other certain embodiments the scene control command comprises a highlight indicator.

In certain embodiments the first pseudo-realistic image exhibits an adjustable field of view, and wherein the rendered first pseudo-realistic image presents a view frustum responsive to the first view positional indicator and the adjustable field of view. In other certain embodiments the rendering of the first and second pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation.

In certain embodiments at least one of the transmitted rendered first pseudo-realistic image and the transmitted rendered second pseudo-realistic image comprises an omni-directional view. In other certain embodiments the visual model data is provided in a selectable one of a plurality of formats. In yet other certain embodiments the computer readable medium is embeddable into a web site.

Independently a server comprising a computing device and a communication module is enabled, the computing device arranged to: load a 3 dimensional (3D) scene comprising visual model data; render a first pseudo-realistic image of the loaded 3D scene responsive to a first view positional indicator; transmit the rendered first pseudo-realistic image via the communication module to at least two remote computing platforms; receive, via the communication module, from any of the at least two remote computing platforms a scene control command; render a second pseudo-realistic image of the loaded 3D scene responsive to the received scene control command; and transmit the rendered second pseudo-realistic image to the at least two remote computing platforms.

In certain embodiments the scene control command comprises a second view positional indicator different from the first view positional indicator. In certain further embodiments the computing device is further arranged to: perform an analysis of at least one criteria of the visual model data responsive to each of the first and second positional indicators; and transmit at least one result of the performed analysis to the at least two remote computing platforms in concert with the respective transmitted rendered first and second pseudo-realistic images.

In certain embodiments the scene control command comprises one of: turning off at least one object of the 3D scene, changing illumination of at least a portion of the 3D scene and changing a material type for at least one object of the 3D scene. In other certain embodiments the scene control command comprises a highlight indicator.

In certain embodiments the first pseudo-realistic image exhibits an adjustable field of view, and wherein the rendered first pseudo-realistic image presents a view frustum responsive to the first view positional indicator and the adjustable field of view. In other certain embodiments the rendering of the first and second pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation.

In certain embodiments at least one of the transmitted rendered first pseudo-realistic image and the transmitted rendered second pseudo-realistic image comprises an omni-directional view. In other certain embodiments the visual model data is provided in a selectable one of a plurality of formats.

Independently a computer-readable medium containing instructions for controlling an electronic device to perform a method of visualization, is enabled, the method comprising: loading a 3 dimensional (3D) scene comprising visual model data; rendering a pseudo-realistic image of the loaded 3D scene responsive to a first view positional indicator, the pseudo-realistic image comprising an omni-directional view; and transmitting the rendered pseudo-realistic image to at least two remote computing platforms.

Additional features and advantages of the invention will become apparent from the following drawings and description.

BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the invention and to show how the same may be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings in which like numerals designate corresponding elements or sections throughout.

With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice. In the accompanying drawings:

FIG. 1 illustrates a high level block diagram of a system providing universal collaborative visualization in accordance with an exemplary embodiment;

FIG. 2 illustrates a rendered pseudo-realistic image in which a plurality of highlights, or indicators, each associated with a particular device of FIG. 1 is depicted;

FIG. 3 illustrates a high level flow chart of a method of operation of the server of FIG. 1 to perform a method of visualization; and

FIG. 4 illustrates a high level flow chart of a method of operation of the server of FIG. 1 to perform a method of visualization comprising an omni-directional view on demand.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.

FIG. 1 illustrates a high level block diagram of a system 10 providing universal collaborative visualization in accordance with an exemplary embodiment, system 10 comprising: a server 20 comprising a processor 30, a memory 40, a communication module 50 and a 3D scene storage 60; a network 70; a mobile computing platform 80 comprising a processor 30, a communication module 50, a display 90 and a user input device 100; a real time position determining device 110; a computer 120 comprising a processor 30, a communication module 50, a display 90 and a user input device 100; and a mobile station 150 comprising a processor 30, a communication module 50, a display 90 and a user input device 100.

Processor 30 of server 20 is in communication with memory 40, 3D scene storage 60 and communication module 50 of server 20. Real time position determining device 110 is in communication with mobile computing platform 80, and in particular with processor 30 thereof. Processor 30 of mobile computing platform 80 is in communication with each of communication module 50, display 90 and user input device 100 thereof. Processor 30 of computer 120 is in communication with each of communication module 50, display 90 and user input device 100 thereof. Processor 30 of mobile station 150 is in communication with each of communication module 50, display 90 and user input device 100 thereof. In one embodiment, network 70 is a combination of a General Packet Radio Service (GPRS) network and the Internet; however, this is not meant to be limiting in any way. Network 70 generally comprises a communication network arranged to enable electronic communication between server 20 and each of mobile computing platform 80, computer 120 and mobile station 150, via the respective communication module 50. The action of each user input device 100 is preferably communicated to server 20, and particularly to processor 30 of server 20, via the respective communication module 50. In one non-limiting example, mobile station 150 comprises a cellular telephone, a personal digital assistant, or a hand held computer, each of which meets the definition of a computing device. Server 20 is illustrated as a stand-alone server; however, this is not meant to be limiting in any way. In an exemplary embodiment server 20 is implemented on a cloud computing platform, in which a dynamically scalable and often virtualized resource is provided as a service over the Internet. Preferably, operation of the method of server 20 is embeddable in a web site. Mobile station 150 is an example of a limited viewing device, since mobile station 150 is incapable of rendering a pseudo-realistic view of a complex 3D scene.

In operation, responsive to a command received from a first one of mobile computing platform 80, computer 120 and mobile station 150, processor 30 of server 20 is operative responsive to computer readable instructions stored on memory 40 to load a 3D scene comprising virtual model data from 3D scene storage 60. The virtual model data is preferably supported in a plurality of formats. In one non-limiting embodiment, the 3D scene comprises Building Information Model (BIM) data. Processor 30 of server 20 is further operative to render a pseudo-realistic image of the loaded 3D scene, responsive to a first view positional indicator. In one particular embodiment, the first view positional indicator is a default positional indicator. Preferably, the rendered pseudo-realistic image exhibits an adjustable field of view, and the rendered pseudo-realistic image presents a view frustum responsive to the first view positional indicator. Preferably, the rendering of the pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation. The rendered pseudo-realistic image is preferably stored in memory 40, is preferably further associated with a session ID, and is transmitted to the source of the received command via the respective communication modules 50 and network 70. For ease of understanding, we will hereinafter term the first one of mobile computing platform 80, computer 120 and mobile station 150 which transmitted the command the initiating client.
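
Purely by way of illustration, the session-creation flow just described may be sketched as follows; the class and helper names (VisualizationServer, load, default_view, render_pseudo_realistic) are editorial assumptions and not part of the disclosure:

```python
import uuid

class VisualizationServer:
    """Sketch of server 20's session flow, under assumed scene-storage and
    rendering helpers."""

    def __init__(self, scene_storage):
        self.scene_storage = scene_storage   # stands in for 3D scene storage 60
        self.sessions = {}                   # stands in for memory 40: session ID -> state

    def start_session(self, scene_name, view_indicator=None):
        """Load a 3D scene, render the first pseudo-realistic image and
        associate it with a freshly generated session ID."""
        scene = self.scene_storage.load(scene_name)       # load the visual model data
        view = view_indicator or scene.default_view()     # default positional indicator
        image = scene.render_pseudo_realistic(view)       # shading, texturing, illumination, shadowing
        session_id = uuid.uuid4().hex
        self.sessions[session_id] = {"scene": scene, "view": view, "image": image}
        return session_id, image             # both are returned to the initiating client
```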

In one embodiment the 3D scene is rendered in accordance with the teachings of U.S. patent application Ser. No. 11/538,103 to Elsberg et al, entitled “Method and Apparatus for Virtual Reality Presentation of Civil Engineering, Land Planning and Infrastructure”, published as US 2007/0078636 A1, incorporated above by reference. In another embodiment the 3D scene is developed via photogrammetry, from existing architectural plans and land survey information, via light detecting and ranging (LIDAR) and/or from existing or developed geographic information system (GIS) data.

The initiating client is further provided with the session ID. In one embodiment, the initiating client shares the session ID with another one or more of mobile computing platform 80, computer 120 and mobile station 150, such as by e-mail, SMS or any other form of communication.

Any of mobile computing platform 80, computer 120 and mobile station 150, in addition to the initiating client, responsive to the received session ID, may link to server 20. In an exemplary embodiment, each of mobile computing platform 80, computer 120 and mobile station 150 is provided with the opportunity to download and install software which will enable on-board generation of the pseudo-realistic images, or alternatively to avoid installation of software. It is known to those skilled in the art that installation of software often requires privileges which may not be easily obtained by the average corporate user, and thus the ability to avoid installation of software while maintaining a collaborative viewing experience is particularly advantageous.

Server 20, responsive to the provided session ID received from any of mobile computing platform 80, computer 120 and mobile station 150, is operative to transmit the rendered pseudo-realistic image, as indicated above preferably stored in memory 40, associated with the provided session ID, to the source of the provided session ID. Thus, both the initiating client, and one or more additional clients are provided with the same rendered pseudo-realistic image.

A user of any of mobile computing platform 80, computer 120 and mobile station 150, connected via the provided session ID, now interacts via the respective user input device 100, thus requesting a second view positional indicator, different from the first view positional indicator. Alternatively, or additionally, a user of any of mobile computing platform 80, computer 120 and mobile station 150, connected via the provided session ID, now interacts via the respective user input device 100 to request a scene control, such as one or more of turning off at least one object of said 3D scene, changing the transparency of at least one object of said 3D scene, changing illumination of at least a portion of said 3D scene and changing a material type for at least one object of said 3D scene. In an exemplary embodiment, turning off at least one element comprises adjusting the transparency of the at least one element to 100%. Adjusting the transparency, or turning off of at least one element, provides the user with extraordinary visual perception. Alternatively, sight conditions of the displayed 3D scene may be adjusted so as to provide a simulation of reduced visibility conditions such as fog.
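
A hypothetical wire format for such a scene control command is sketched below; every field name is an assumption chosen for illustration, since the disclosure does not specify a message structure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneControlCommand:
    """Hypothetical scene control command; field names are assumptions."""
    session_id: str
    view_indicator: Optional[tuple] = None            # second view positional indicator
    transparency: dict = field(default_factory=dict)  # object ID -> 0.0..1.0; 1.0 turns the object off
    illumination: Optional[str] = None                # change illumination of part of the scene
    materials: dict = field(default_factory=dict)     # object ID -> replacement material type
    visibility: Optional[str] = None                  # e.g. "fog" for reduced-visibility simulation
```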

Responsive to the received request for a second view positional indicator or other scene control, processor 30 of server 20 is operative to render a second pseudo-realistic image of the loaded 3D scene. The second rendered pseudo-realistic image is preferably stored in a cache portion of memory 40, preferably further associated with the session ID, and is transmitted via the respective communication modules 50 and network 70 to each of mobile computing platform 80, computer 120 and mobile station 150, connected via the provided session ID, for display on the respective display 90. In one embodiment, the second rendered pseudo-realistic image is immediately transmitted only to the initiating client, and one or more additional clients are maintained in a loop querying at each iteration whether there is an updated image in memory 40 associated with the present session ID. In the event that there is an updated image in memory 40 associated with the present session ID, processor 30 of server 20 is arranged to transmit the updated image via the respective communication module 50 and network 70.
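
The client-side loop described above may be sketched, again purely as an illustration, as follows; server.get_image() is an assumed accessor, not a disclosed interface:

```python
import time

def poll_for_updates(server, session_id, last_image_id, interval_s=0.5):
    """At each iteration, ask whether an updated image is associated with
    the present session ID, and fetch it when one appears."""
    while True:
        image = server.get_image(session_id)          # latest image in memory 40, if any
        if image is not None and image.id != last_image_id:
            return image                              # updated image; caller displays it
        time.sleep(interval_s)                        # wait before the next iteration
```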

In the event that any of mobile computing platform 80, computer 120 and mobile station 150 has downloaded and installed software enabling on-board generation of the pseudo-realistic images, as described above, such software being available from, inter alia, RDV Systems of Lod, Israel, the 3D scene loaded by processor 30 of server 20 is further provided to the one of mobile computing platform 80, computer 120 and mobile station 150 that downloaded and installed the software; thus the pseudo-realistic view is locally generated by the local processor 30 for display on the respective display 90. In one embodiment the above mentioned received request for a second view positional indicator or other scene control is transmitted by processor 30 of server 20 to the one or more of mobile computing platform 80, computer 120 and mobile station 150 which has downloaded and installed the image generation software. Thus, receipt of the actual rendered pseudo-realistic image is not required by the one or more of mobile computing platform 80, computer 120 and mobile station 150 which has downloaded and installed the image generation software, since the pseudo-realistic image is generated locally by the respective processor 30 for display on the respective display 90.

In an embodiment wherein real time position determining device 110 is supplied, changes in the physical position of real time position determining device 110 are transmitted to server 20 via the respective communication module 50 and network 70 as a scene control command. Alternatively, only the actual position of real time position determining device 110 is highlighted, or otherwise indicated, on the rendered pseudo-realistic image transmitted by server 20 and received by any of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID.

In one embodiment, the pseudo-realistic image is rendered further responsive to chronographic information associated with real time position determining device 110. Thus, the rendered pseudo-realistic image exhibits shadowing responsive to a calculated position of the sun, correct for the latitude, longitude, elevation and local time received from real time position determining device 110.
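
A textbook approximation of such a sun-position calculation is sketched below for illustration only; it uses simplified declination and hour-angle formulas and omits the longitude-dependent conversion of clock time to local solar time, the equation of time and atmospheric refraction, none of which the disclosure specifies:

```python
import math

def sun_position(latitude_deg, day_of_year, local_solar_hour):
    """Approximate solar altitude and azimuth for orienting scene shadows."""
    decl = math.radians(-23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10))))
    hour_angle = math.radians(15.0 * (local_solar_hour - 12.0))  # 15 degrees per hour from solar noon
    lat = math.radians(latitude_deg)
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    altitude = math.asin(sin_alt)
    cos_az = ((math.sin(decl) - sin_alt * math.sin(lat))
              / (math.cos(altitude) * math.cos(lat)))
    azimuth = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if local_solar_hour > 12.0:        # acos() is ambiguous; mirror for afternoon hours
        azimuth = 360.0 - azimuth
    return math.degrees(altitude), azimuth  # degrees above horizon, degrees from north
```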

In one embodiment the 3D scene comprises at least one dynamic object, whose motion may optionally be set to be fixed. Thus, in a non-limiting example, a vehicle having a predetermined speed of travel may be displayed in the pseudo-realistic scene. In the event that the user's actual travel, in the real world, as indicated by real time position determining device 110, matches the predetermined speed, the user will be seen to be maintaining pace in relation to the dynamic object vehicle.

Preferably, any one or more of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID may indicate a point of interest via the respective user input device 100. The coordinates of the indicated point of interest are transmitted to server 20 via the respective communication module 50 and network 70, and the rendered pseudo-realistic image is updated with a highlight, or other indicator. Preferably the highlight or other indicator is unique to the session ID/particular one of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID, such as by providing a particular color for each of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID.

Referring to FIG. 2, the above is further illustrated: a plurality of highlights, or indicators, 200, each associated with a particular one of mobile computing platform 80, computer 120 and mobile station 150, is depicted. Each of the indicators is different, with a first one of indicators 200 being marked by a vertical/horizontal cross hatch, a second one of indicators 200 being marked by a diagonal cross hatch and a third one of indicators 200 being marked by a diagonal pattern.

In one non-limiting embodiment, the coordinates of the indicated point of interest are transmitted to server 20 along with information regarding the width and height of the respective display 90 of the one of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID indicating the point of interest via the respective user input device 100. The coordinates are associated with the highlight, or other indicator, and transmitted to each of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID, which are operative to interpolate the position of the highlight, or other indicator, based on the relative dimensions of the display 90 associated with the one of mobile computing platform 80, computer 120 and mobile station 150 indicating the point of interest and the dimensions of the display 90 of the receiving one of mobile computing platform 80, computer 120 and mobile station 150. Thus, the highlight or other indicator is displayed at the appropriate location of the rendered pseudo-realistic image at each receiving one of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID.
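
The interpolation described above reduces to normalizing the point coordinates by the sender's display dimensions and rescaling by the receiver's; a minimal sketch, with argument shapes ((x, y) and (width, height) tuples) assumed for illustration:

```python
def map_highlight(point_px, sender_display_px, receiver_display_px):
    """Rescale a point-of-interest highlight between displays of
    different dimensions."""
    x, y = point_px
    sender_w, sender_h = sender_display_px        # transmitted along with the coordinates
    receiver_w, receiver_h = receiver_display_px  # the receiving platform's own display 90
    return (x * receiver_w / sender_w, y * receiver_h / sender_h)
```

For example, a highlight at (320, 240) on a 640 by 480 display 90 lands at (640.0, 400.0) on a 1280 by 800 display 90.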

In one non-limiting embodiment, the transmitted rendered pseudo-realistic image is transmitted with an omni-directional view. In particular, server 20 generates an image that can be applied to a spherical mapping, which represents the view of the rendered pseudo-realistic image in all directions as seen from the current view position indicator. The transmitted images are mapped by a respective processor 30 of any of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID on a sphere surrounding the current view position. Thus, any of mobile computing platform 80, computer 120 and mobile station 150 connected by the provided session ID are able to rotate the direction of the view and look at any portion without requiring further downloaded information.
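
One common way to realize such a spherical mapping, assumed here for illustration since the disclosure does not specify a projection, is an equirectangular image indexed by view direction:

```python
def equirect_pixel(yaw_deg, pitch_deg, img_width, img_height):
    """Map a view direction to a pixel in an equirectangular (spherical)
    image: yaw wraps around the sphere's equator, pitch runs pole to pole."""
    u = (yaw_deg % 360.0) / 360.0          # horizontal fraction around the sphere
    v = (90.0 - pitch_deg) / 180.0         # vertical fraction, +90 (up) at the top row
    return int(u * (img_width - 1)), int(v * (img_height - 1))
```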

In one non-limiting embodiment, processor 30 of server 20 is further operative to perform an analysis of at least one criterion of the visual model data loaded from 3D scene storage 60, responsive to each view positional indicator. Processor 30 of server 20 is further operative to transmit at least one result of the performed analysis in concert with the transmitted rendered pseudo-realistic images. In a non-limiting embodiment, the criterion is sight distance, as described in published U.S. Patent Application S/N US 2008/0021680 published Jan. 24, 2008 to Elsberg et al., entitled “Method and Apparatus for Evaluating Sight Distance”, the entire contents of which is incorporated herein by reference.

The above system 10 thus enables active collaboration in real time between disparate users, which may be geographically widely separated. In one embodiment, voice communication, or chat communication, is further enabled between the actively collaborating users, to enable real time communication and collaboration. Preferably, the one of mobile computing platform 80, computer 120 and mobile station 150 initiating the scene command is further provided with an indication of the success of transmission to all other users.

FIG. 3 illustrates a high level flow chart of the method of operation of server 20 of FIG. 1 to perform a method of visualization. In stage 1000 a 3D scene comprising virtual model data is loaded. Preferably, the 3D scene is stored in one of a large plurality of formats. In one non-limiting embodiment, the 3D scene comprises Building Information Model (BIM) data.

In stage 1010 a first pseudo-realistic image of the loaded 3D scene is rendered, responsive to a first view positional indicator. In one particular embodiment, the first view positional indicator is a default positional indicator. Preferably, the rendered pseudo-realistic image exhibits an adjustable field of view, and the rendered pseudo-realistic image presents a view frustum responsive to the first view positional indicator. Preferably, the rendering of the pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation, as described further below in relation to stage 1070.

In one embodiment the 3D scene is rendered in accordance with the teachings of U.S. patent application Ser. No. 11/538,103 to Elsberg et al, entitled “Method and Apparatus for Virtual Reality Presentation of Civil Engineering, Land Planning and Infrastructure”, published as US 2007/0078636 A1, incorporated above by reference. In another embodiment the 3D scene is developed via photogrammetry, from existing architectural plans and land survey information, via light detecting and ranging (LIDAR) and/or from existing or developed geographic information system (GIS) data.

In stage 1020, the rendered first pseudo-realistic image of stage 1010 is transmitted to at least two remote computing platforms. In one non-limiting embodiment, as described above, coordination between the various remote computing platforms is accomplished responsive to a session ID. Optionally, in the event that one or more of the computing platforms has downloaded image rendering software, the loaded 3D scene of stage 1000 is further transmitted.

In stage 1030, a scene control command, generated at any of the at least two remote computing platforms, is received. The scene control command is in one embodiment a second view positional indicator, different from the first view positional indicator of stage 1010. In another embodiment, the scene control command comprises one or more of turning off at least one object of said 3D scene, changing the transparency of at least one object of said 3D scene, changing illumination of at least a portion of said 3D scene and changing a material type for at least one object of said 3D scene. In an exemplary embodiment, turning off at least one element comprises adjusting the transparency of the at least one element to 100%. Adjusting the transparency, or turning off of at least one element, provides the user with extraordinary visual perception. Alternatively, sight conditions of the displayed 3D scene may be adjusted so as to provide a simulation of reduced visibility conditions such as fog. In yet another embodiment, the scene control command comprises a highlight indicator.

In stage 1040, responsive to the received scene control command of stage 1030, a second pseudo-realistic image is rendered, in a manner in all respects similar to that described above in relation to stage 1010. In stage 1050, the rendered second pseudo-realistic image of stage 1040 is transmitted to the at least two remote computing platforms. In one non-limiting embodiment, as described above, coordination between the various remote computing platforms is accomplished responsive to a session ID. Optionally, in the event that one or more of the computing platforms has downloaded image rendering software, the received scene control command is further transmitted, so that computing platforms which have downloaded the image rendering software can locally render the second pseudo-realistic image.

In optional stage 1060, an analysis of at least one criterion of the visual model data loaded comprising the 3D scene of stage 1000 is performed, responsive to each view positional indicator, or scene control command. Preferably, at least one result of the performed analysis is transmitted in concert with the transmitted rendered pseudo-realistic images.

In optional stage 1070 the rendering of the pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation. In one embodiment, at least one of the remote computing platforms is associated with a real time positioning device and/or chronographic information, and the rendering is responsive to information from at least one of the real time positioning device and the chronographic information. Alternatively, the chronographic information is developed in a separate device from the computing platform associated with the real time positioning device. The rendered pseudo-realistic image exhibits shadowing responsive to a calculated position of the sun, correct for the latitude, longitude, elevation and local time received from the real time positioning device.

In optional stage 1080, at least one of the transmitted rendered images of stages 1020 and 1050 is transmitted with an omni-directional view. In particular, an image is generated that can be applied to a spherical mapping, which represents the view of the rendered pseudo-realistic image in all directions as seen from the current view position indicator. Thus, any of the at least two remote computing platforms are able to rotate the direction of the view and look at any portion without requiring further downloaded information.

In optional stage 1090, the above method is preferably further embeddable in a web site.

FIG. 4 illustrates a high level flow chart of a method of operation of server 20 of FIG. 1 to perform a method of visualization comprising an omni-directional view on demand.

In stage 2000 a 3D scene comprising virtual model data is loaded. Preferably, the 3D scene is stored in one of a large plurality of formats. In one non-limiting embodiment, the 3D scene comprises Building Information Model (BIM) data.

In stage 2010 a first pseudo-realistic image of the loaded 3D scene is rendered, responsive to a first view positional indicator. In one particular embodiment, the first view positional indicator is a default positional indicator. Preferably, the rendered pseudo-realistic image exhibits an adjustable field of view, and the rendered pseudo-realistic image presents a view frustum responsive to the first view positional indicator. Preferably, the rendering of the pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation, as described further above in relation to stage 1070 of FIG. 3.

In one embodiment the 3D scene is rendered in accordance with the teachings of U.S. patent application Ser. No. 11/538,103 to Elsberg et al, entitled “Method and Apparatus for Virtual Reality Presentation of Civil Engineering, Land Planning and Infrastructure”, published as US 2007/0078636 A1, incorporated above by reference. In another embodiment the 3D scene is developed via photogrammetry, from existing architectural plans and land survey information, via light detecting and ranging (LIDAR) and/or from existing or developed geographic information system (GIS) data.

In stage 2020, the rendered first pseudo-realistic image of stage 2010 is transmitted to a remote computing platform. In one non-limiting embodiment, the remote computing platform comprises one of a mobile computer, a fixed workstation, a cellular telephone, a personal digital assistant and a hand held computer. In stage 2030, a scene control command generated by the remote computing platform is received. The scene control command is in one embodiment a second view positional indicator, different from the first view positional indicator of stage 2010. In another embodiment, the scene control command comprises one or more of turning off at least one object of said 3D scene, changing the transparency of at least one object of said 3D scene, changing illumination of at least a portion of said 3D scene and changing a material type for at least one object of said 3D scene. In an exemplary embodiment, turning off at least one element comprises adjusting the transparency of the at least one element to 100%. Adjusting the transparency, or turning off of at least one element, provides the user with extraordinary visual perception. Alternatively, sight conditions of the displayed 3D scene may be adjusted so as to provide a simulation of reduced visibility conditions such as fog. In yet another embodiment, the scene control command comprises a highlight indicator.

In stage 2040, responsive to the received scene control command of stage 2030, a second pseudo-realistic image is rendered, in a manner in all respects similar to that described above in relation to stage 2010. In stage 2050, the rendered second pseudo-realistic image of stage 2040 is transmitted to the remote computing platform.

In optional stage 2060, an animation request is received from the remote computing platform. In one non-limiting embodiment, the animation request is associated with a predefined animated camera path associated with the loaded 3D scene of stage 2000. The animation is generated and streamed to the remote computing platform. Preferably the animation is pre-generated and indexed with positional information in relation to timestamps in the animation.
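
A minimal sketch of such a timestamp index follows; the data structure is an illustrative assumption, since the disclosure does not specify how the index is stored:

```python
import bisect

class AnimationIndex:
    """Map timestamps in a pre-generated animation back to view
    positional indicators along the camera path."""

    def __init__(self, entries):
        # entries: list of (timestamp_seconds, view_positional_indicator), sorted by time
        self.times = [t for t, _ in entries]
        self.views = [v for _, v in entries]

    def view_at(self, timestamp_s):
        """Return the view positional indicator for a stop/pause timestamp."""
        i = max(0, bisect.bisect_right(self.times, timestamp_s) - 1)
        return self.views[i]
```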

In stage 2070, a request for an omni-directional view is received from the remote computing platform associated with a particular view positional indicator. In a non-limiting embodiment, in which optional stage 2060 is implemented, the view positional indicator is determined responsive to a stop, or pause, request received at a position in the streamed animation of stage 2060.

In stage 2080, an omni-directional view is generated at the particular view positional indicator of stage 2070, by rendering a plurality of views representing the 3D scene in all directions at the particular view positional indicator of stage 2070. In stage 2090, the rendered plurality of views representing the generated omni-directional view at the particular view positional indicator of stage 2070 is transmitted to the remote computing platform.
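
One way to render such a plurality of views, assumed here for illustration since the disclosure does not mandate a particular decomposition, is six 90-degree cube-map faces about the view position; render_view() is a hypothetical renderer taking (position, forward, up, fov_degrees):

```python
# Six axis-aligned (forward, up) pairs whose 90-degree frusta together
# cover the full sphere of directions around the view position.
CUBE_FACES = {
    "+x": ((1, 0, 0), (0, 1, 0)),  "-x": ((-1, 0, 0), (0, 1, 0)),
    "+y": ((0, 1, 0), (0, 0, -1)), "-y": ((0, -1, 0), (0, 0, 1)),
    "+z": ((0, 0, 1), (0, 1, 0)),  "-z": ((0, 0, -1), (0, 1, 0)),
}

def render_omni_directional(render_view, position):
    """Render the plurality of views representing the scene in all
    directions at the given view positional indicator."""
    return {face: render_view(position, forward, up, 90.0)
            for face, (forward, up) in CUBE_FACES.items()}
```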

Thus, certain of the present embodiments enable an electronic device to perform a method of visualization, the method comprising: loading a 3 dimensional (3D) scene comprising visual model data; rendering a first pseudo-realistic image of the loaded 3D scene responsive to a first view positional indicator; transmitting the rendered first pseudo-realistic image to at least two remote computing platforms; receiving from any of the at least two remote computing platforms a scene control command; rendering a second pseudo-realistic image of the loaded 3D scene responsive to the received scene control command; and transmitting the rendered second pseudo-realistic image to the at least two remote computing platforms.

The server is thereby arranged to provide a collaborative interactive viewing experience between disparate devices. In an exemplary embodiment, one of the computing platforms is a cellular telephone, and another of the computing platforms is a portable computer. Either of the devices can provide input to the shared collaborative viewing experience. In one particular embodiment, a third computing platform is provided, the third computing platform arranged to generate the 3D scene internally responsive to positional indicator information.

It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as are commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods similar or equivalent to those described herein can be used in the practice or testing of the present invention, suitable methods are described herein.

All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety. In case of conflict, the patent specification, including definitions, will prevail. In addition, the materials, methods, and examples are illustrative only and not intended to be limiting.

It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather the scope of the present invention is defined by the appended claims and includes both combinations and sub-combinations of the various features described hereinabove as well as variations and modifications thereof, which would occur to persons skilled in the art upon reading the foregoing description.

Claims

1. A computer-readable medium containing instructions for controlling an electronic device to perform a method of visualization, the method comprising:

loading a 3 dimensional (3D) scene comprising visual model data;
rendering a first pseudo-realistic image of said loaded 3D scene responsive to a first view positional indicator;
transmitting said rendered first pseudo-realistic image to at least two remote computing platforms;
receiving from any of said at least two remote computing platforms a scene control command;
rendering a second pseudo-realistic image of said loaded 3D scene responsive to said received scene control command; and
transmitting said rendered second pseudo-realistic image to said at least two remote computing platforms.

2. The computer-readable medium of claim 1, wherein said scene control command comprises a second view positional indicator different from said first view positional indicator.

3. The computer-readable medium of claim 2, wherein said method further comprises:

performing an analysis of at least one criterion of said visual model data responsive to each of said first and second positional indicators; and
transmitting at least one result of said performed analysis in concert with said respective transmitted rendered first and second pseudo-realistic images.

4. The computer-readable medium of claim 1, wherein said scene control command comprises one of turning off at least one object of said 3D scene, changing illumination of at least a portion of said 3D scene and changing a material type for at least one object of said 3D scene.

5. The computer-readable medium of claim 1, wherein said scene control command comprises a highlight indicator.

6. The computer-readable medium of claim 1, wherein said first pseudo-realistic image exhibits an adjustable field of view, and wherein said rendered first pseudo-realistic image presents a view frustum responsive to said first view positional indicator and said adjustable field of view.

7. The computer-readable medium of claim 1, wherein said rendering of said first and second pseudo-realistic image comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation.

8. The computer-readable medium of claim 1, wherein at least one of said transmitted rendered first pseudo-realistic image and said transmitted rendered second pseudo-realistic image comprises an omni-directional view.

9. The computer-readable medium of claim 1, wherein said visual model data is provided in a selectable one of a plurality of formats.

10. The computer-readable medium of claim 1, wherein the computer readable medium is embeddable into a web site.

11. A server comprising a computing device and a communication module, said computing device arranged to:

load a 3 dimensional (3D) scene comprising visual model data;
render a first pseudo-realistic image of said loaded 3D scene responsive to a first view positional indicator;
transmit said rendered first pseudo-realistic image via said communication module to at least two remote computing platforms;
receive, via said communication module, from any of said at least two remote computing platforms a scene control command;
render a second pseudo-realistic image of said loaded 3D scene responsive to said received scene control command; and
transmit said rendered second pseudo-realistic image to said at least two remote computing platforms.

12. The server of claim 11, wherein said scene control command comprises a second view positional indicator different from said first view positional indicator.

13. The server of claim 12, wherein said computing device is further arranged to:

perform an analysis of at least one criterion of said visual model data responsive to each of said first and second positional indicators; and
transmit at least one result of said performed analysis to said at least two remote computing platforms in concert with said respective transmitted rendered first and second pseudo-realistic images.

14. The server of claim 11, wherein said scene control command comprises one of turning off at least one object of said 3D scene, changing illumination of at least a portion of said 3D scene and changing a material type for at least one object of said 3D scene.

15. The server of claim 11, wherein said scene control command comprises a highlight indicator.

16. The server of claim 11, wherein said first pseudo-realistic image exhibits an adjustable field of view, and wherein said rendered first pseudo-realistic image presents a view frustum responsive to said first view positional indicator and said adjustable field of view.

17. The server of claim 11, wherein said rendering of said first and second pseudo-realistic images comprises at least two of shading, texturing, illumination and shadowing responsive to real time orientation information in respect to latitude, longitude and elevation.

18. The server of claim 11, wherein at least one of said transmitted rendered first pseudo-realistic image and said transmitted rendered second pseudo-realistic image comprises an omni-directional view.

19. The server of claim 11, wherein said visual model data is provided in a selectable one of a plurality of formats.

20. A computer-readable medium containing instructions for controlling an electronic device to perform a method of visualization, the method comprising:

loading a 3 dimensional (3D) scene comprising visual model data;
rendering a pseudo-realistic image of said loaded 3D scene responsive to a first view positional indicator, said pseudo-realistic image comprising an omni-directional view; and
transmitting said rendered pseudo-realistic image to at least two remote computing platforms.
Patent History
Publication number: 20110169826
Type: Application
Filed: Sep 29, 2009
Publication Date: Jul 14, 2011
Applicant: RDV SYSTEMS LTD. (Lod)
Inventors: Nathan Elsberg (Modiin), Alex Hazanov (Katzir)
Application Number: 13/120,721
Classifications
Current U.S. Class: Three-dimension (345/419)
International Classification: G06T 15/00 (20110101);