When you write a C++ program, you write it in some editor (MSVC++, XCode, emacs, Notepad, TextEdit, etc.). Then you compile it. If your tools are really disjoint, you use gcc to compile directly on the command line. If you have an Integrated Development Environment (XCode, MSVC++), you hit a key to explicitly compile. Now, in MSVC++, you can hit F7 to compile and then hit F5 to run. Or you can hit F5 to run, in which case MSVC++ will compile first and then, if the compile is successful, run your program. All of this is apparent to the programmer because the compilation takes a lot of time.
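For example, with a bare command-line toolchain, the compile step is a command you run by hand before the program can run at all (g++ shown here; the file and program names are just made up for the illustration):

    // hello.cpp -- a trivial program, used only to show the explicit build step.
    #include <iostream>

    int main() {
        std::cout << "Hello from a separately compiled program\n";
        return 0;
    }

    // Typical command-line workflow with the GNU toolchain:
    //   g++ hello.cpp -o hello     (the explicit compile step you wait for)
    //   ./hello                    (only now does the program actually run)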
There's a bit of hand-waving in the following, but I've tried to be accurate... The compilation process can be broken down as follows (there's a small concrete example after the list):
Parsing (analyzing the text of each .cpp file to create a tree of commands from the flat string)
Compiling (translating each parse tree into assembly instructions and saving them as a .o file)
Linking (taking the assembly instructions from several individual .o files and combining them into a single .exe file, with jump instructions patched with addresses for the various subroutine calls)
Optimizing (looking over the entire .exe file and removing parts that were duplicated among the various .o files, among many, many, many more optimizations)
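To make that concrete, here's what those stages look like with two source files and the GNU toolchain (the file names are invented for the example):

    // math_utils.cpp -- one translation unit; parsed and compiled on its own.
    int add(int a, int b) { return a + b; }

    // main.cpp -- a second translation unit. It only declares add() and trusts
    // the linker to patch the call with the real address later.
    #include <iostream>

    int add(int a, int b);   // declaration only; the definition lives in the other .o

    int main() {
        std::cout << add(2, 3) << "\n";
        return 0;
    }

    // Each file is parsed and compiled to its own object file:
    //   g++ -c math_utils.cpp      -> math_utils.o
    //   g++ -c main.cpp            -> main.o
    // The linker then combines the .o files and patches the call to add():
    //   g++ math_utils.o main.o -o app
    // Optimization across the .o files (link-time optimization) is opt-in:
    //   g++ -O2 -flto -c math_utils.cpp
    //   g++ -O2 -flto -c main.cpp
    //   g++ -O2 -flto math_utils.o main.o -o app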
LabVIEW is a compiled language, but our programmers never sit and wait 30 minutes between fixing their last wire and seeing their code run. Why do you not see this time sink in LabVIEW?
Parsing time = 0. LabVIEW has no text to parse. The tree of graphics is our initial command tree. We keep this tree up to date whenever you modify any aspect of the block diagram. We have to... otherwise you wouldn't have an error list window that is continuously updated... like C++, you'd only get error feedback when you actually tried to run. C# and MSVC# do much the same "always parsed" work that LV does. But they still pay a big parse penalty at load time.
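To make the "always parsed" idea concrete, here's a purely hypothetical sketch in C++ (invented names, not LabVIEW's actual internals): the editor revalidates the command tree on every edit, so the error list is always current and there's nothing left to parse when you hit run.

    // Hypothetical sketch only -- invented names, not LabVIEW's real code.
    #include <string>
    #include <vector>

    struct Node {                            // one node in the "command tree"
        std::string op;
        std::vector<Node*> inputs;           // nullptr = unwired input
    };

    struct Diagram {
        std::vector<Node> nodes;
        std::vector<std::string> errorList;  // the continuously updated error window

        // Called on every edit: the tree is revalidated immediately,
        // so there is no separate parse step at run time.
        void onEdit() {
            errorList.clear();
            for (const Node& n : nodes)
                for (const Node* in : n.inputs)
                    if (in == nullptr)
                        errorList.push_back("Unwired input on " + n.op);
        }

        bool runnable() const { return errorList.empty(); }
    };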
Compile time = same as C++, but this is a really fast step in any language, believe it or not. LabVIEW translates the initial command tree into a more optimized tree, iteratively applying different transforms, until we arrive at assembly instructions.
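Again purely as a hypothetical sketch (invented names, not the actual LabVIEW compiler), "iteratively applying different transforms" just means running a pipeline of tree-to-tree passes, with the last pass lowering the tree to instructions:

    // Hypothetical sketch of an iterative transform pipeline -- invented names.
    #include <functional>
    #include <vector>

    struct Tree { /* the command tree from the parsing step */ };

    using Pass = std::function<void(Tree&)>;

    void compileTree(Tree& tree, const std::vector<Pass>& passes) {
        // Each pass rewrites the tree into a slightly better form; the final
        // pass would lower the tree to assembly instructions.
        for (const Pass& pass : passes)
            pass(tree);
    }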
Linking time = not sure how ours compares to C++.
Optimizing time = 0 in the development environment. We compile each VI to stand on its own, to be called by any caller VI. We don't optimize across the entire VI Hierarchy in the dev environment. Big optimizations are only done when you build an EXE, because that's when we know the finite set of conditions under which your VIs will be called.
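A rough C++ analogy for why the dev environment can skip this (an analogy only, not LabVIEW internals): a routine compiled to stand on its own has to be built in its most general form, because it can't know who will call it; only a whole-program build can specialize it for its actual callers.

    // scale.cpp -- compiled on its own, like a VI in the dev environment.
    // It must handle any factor any caller might pass.
    double scale(double x, double factor) {
        return x * factor;
    }

    // When the whole program is visible at build time (like building an EXE)
    // and every call turns out to be scale(x, 2.0), the optimizer is free to
    // inline the call and fold it down to x * 2.0 at each call site.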