
Is LabVIEW a programming environment, vs Doom


I've seen traces of very old discussions about how to classify LabVIEW, so I assume the subject is well known and opinions are strong.

Though I didn't really find any comprehensive discussion, which is a bit surprising. The discussions seem to always lean towards whether there is really a compiler in LabVIEW - and yes, there is, though it prepares only small chunks of code linked together by the LVRT.


Today I looked at the trailer of Doom Eternal, and it made me notice an interesting thing - if LabVIEW is a programming environment, maybe Doom should be classified as one too?

Graphics cards are the most powerful processors in today's PCs. They can do a lot of multi-threaded computation, very fast and with a large emphasis on concurrency. To do that, small programs are prepared, e.g. in a C-like shader language if the graphics API is OpenGL (we call them shaders because originally they were used for shading and simple effects; but now they're full-fledged programs which handle geometry, collisions and other aspects of the game). Then a user-mode library, commonly known as the graphics driver, compiles that code into ISA assembly for the specific card model and sends it to the Execution Units of the graphics card. Some shaders are static, others are dynamic - generated during gameplay and modified on the go.
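The runtime-compilation step described above can be sketched by analogy; here Python's `compile`/`exec` stand in for the graphics driver, and the function name and weights are invented purely for illustration:

```python
# Analogy only: a host program hands source text to a "driver", which
# compiles it at run time into executable form - the way a game hands
# shader source to the graphics driver, which compiles it to the card's ISA.
shader_source = """
def brightness(r, g, b):
    # per-"pixel" computation, akin to a tiny fragment shader
    # (weights chosen so they sum to 1.0 exactly in binary)
    return 0.25 * r + 0.5 * g + 0.25 * b
"""

namespace = {}
code = compile(shader_source, "<shader>", "exec")  # the "driver" compile step
exec(code, namespace)                              # "upload" the program

print(namespace["brightness"](1.0, 1.0, 1.0))      # → 1.0
```

The point of the analogy is only that compilation happens at run time, driven by the host, with the compiled result then executing natively.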

So, in Doom, like in LabVIEW:

- You influence the code by interacting with a graphical environment using mouse and keyboard

- There is a compiler which prepares machine code under the hood, and it's based on LLVM (at least one of the major GFX card manufacturers uses LLVM in their drivers)

- There is a huge OS-dependent shared library which does the processing of the code (the LVRT or the 3D driver)

- The code gets compiled in real time as you go

- There is a large emphasis on concurrent programming; the code is compiled into small chunks which run in separate threads


You could argue that the user actions in Doom might not allow you to produce all the elements of a real programming language - but we really don't know. Maybe they do. Maybe you can, e.g., force a loop to be added to the code by a specific movement at a specific place. I often read that many arguments against LabVIEW come from people not really understanding the G language, having little experience with programming in it. Maybe it's the same with Doom - if you master it properly, you can generate any code clause you want. Like LabVIEW, Doom is closed-source software with no documented formats.



To make comparisons with games, you might be better off looking at the Blueprint editor which is part of Unreal Engine. It feels a lot like LabVIEW.

It's not controlled from within the game, but it's a lot closer to the kind of thing you're trying to infer here.

12 hours ago, Mefistotelis said:

though it prepares only small chunks of code linked together by the LVRT.

You mean like how a Java compiler prepares only small chunks of code linked together by the JVM? ;)

17 minutes ago, JKSH said:

You mean like how a Java compiler prepares only small chunks of code linked together by the JVM? ;)

There are similarities between LabVIEW and Java, but there are also considerable differences:

- LabVIEW compiles to native machine code, while Java compiles to universal, platform-independent Java bytecode - or in other words, LabVIEW is not virtualized

- In Java, program flow is completely within the bytecode, while in LabVIEW the LVRT does most of the work, only calling small subroutines from user data


I guess the threads being created and data being transferred by the LVRT instead of user code can be considered another level of virtualization? On some level, what Java does is similar - it translates chunks of bytecode to something which can be executed natively, and the JRE merges such "chunks".

Maybe the right way to phrase it is: LabVIEW has virtualized program flow but native user code execution, while Java just provides a Virtual Machine and gives complete control to the user code inside.


21 hours ago, Mefistotelis said:

There are similarities between Labview and Java, but there are also considerable differences:

I feel like you're stretching here to make similarities seem to exist when they do not.

Your argumentation for the LVRT is obviously flawed. If that were the case, then every program which accesses Win32 functions would be virtualised? No. It isn't. It's linking between actual compiled machine code. It is not platform-independent; it is compiled for the current machine architecture.

So is VisualC++ virtualised because there are VC++ runtimes installed on Windows?

To paraphrase "The Princess Bride": you keep using that word; I do not think it means what you think it means.



You seem to have quoted the wrong line - that one does not relate to virtualization, nor to any of what you wrote.

No, VC++ programs are not virtualized - neither in the CPU architecture area, nor in the program flow area. So you are right with your VC++ characterization, but how exactly is that an argument here?

If you're going after testing my definition:

On 3/12/2020 at 2:42 PM, Mefistotelis said:

LabVIEW has virtualized program flow but native user code execution

Then in VC++, you write the main() function yourself. You call sub-functions yourself. These calls are compiled into native code and executed directly by the assembly "call" instruction (or equivalent, depending on architecture). Your code, compiled to native assembly, controls the execution.

In LabVIEW, you still run it on a real CPU which has "call" instructions - but there are no "call" lines in the compiled part. These are simple blocks which process input into output, and the program flow is simply not there. It is controlled by the LVRT, and LabVIEW "pretends" for you that the CPU works differently - it creates threads and calls your small chunks based on some conditions, like the existence of input data. It creates an environment where the architecture seems to mimic what we currently have in graphics cards - hence the initial post (though I know, it was originally mimicking PLL logic; complex GPUs came later). This is not how CPUs normally work. In other words, it VIRTUALIZES the program flow, detaching it from the native architecture.
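The contrast can be sketched roughly like this (a hedged illustration, not actual LVRT internals - all names here are invented):

```python
# VC++-style: the user code itself contains the calls, so the compiled
# code hard-wires the program flow.
def add_one(x):
    return x + 1

def square(x):
    return x * x

def main_direct(x):
    return square(add_one(x))          # flow encoded as direct "call"s

# LabVIEW-style (illustrative): chunks never call each other; a separate
# runtime decides what runs next, here a trivial stand-in for the LVRT.
def run_chunks(x):
    chunks = [add_one, square]         # independent input->output blocks
    value = x
    for chunk in chunks:               # the "runtime" drives execution
        value = chunk(value)
    return value

print(main_direct(3), run_chunks(3))   # → 16 16
```

Both compute the same result; the difference is only in *who* owns the flow - the compiled user code, or the runtime dispatching chunks.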


6 hours ago, Mefistotelis said:

If you're going after testing my definition:

You haven't offered a definition. You've made some statements using already-used words with a meaning which is (I think) different from how you are trying to apply them. Hence my meme.

The LVRT does NOT organise program flow. That's done directly by the compiled code of the VI, which implements the program flow. I do not know where you got your idea from. Again, I feel you have misunderstood what the LV Run-Time does. It is really comparable to the C++ Run-Time installations which exist on practically every Windows system. So for me, if you agree that VC++ is not virtualised then, by extension, you also agree that LabVIEW is not virtualised.



I got the idea from reverse engineering LV. I can look at it while it works. I can look at the assembly chunks extracted from VI files. Can you share the source of your idea?

As I see it, the LVRT vs. MS STDC++ comparison is definitely not apt - there is a long list of differences. Both are shared libraries, both provide some API which implements some better or worse defined standard, both were compiled in VC++ - and that's all they have in common.

As for the meme, it is botched. The original was less direct, which was part of the comedy.

It is possible that we have different ideas of what virtualization means, though.


Great explanation.

15 hours ago, Rolf Kalbermatter said:

It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine but that does not usually mean that those conclusions are correct.

Yeah, what you wrote matches my findings and clears the wrong conclusions.

So some actions which are performed by the linker in other languages are performed by the loader in LabVIEW, and LabVIEW has its own loader built into the LVRT, skipping the OS loader (the OS loader is only used to load the LVRT itself).

After the initial linking done by the loader, execution is fully native.

This also explains why I didn't run across any list of relocations.
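That load-time linking step might be sketched like this (pure illustration under stated assumptions; `symbol_table`, `load` and `resolve` are invented names, not LabVIEW's actual structures):

```python
# Sketch: a custom loader records where each compiled chunk lives, and
# references between chunks are resolved once, at load time - so after
# loading, execution is fully native with no further indirection needed.
symbol_table = {}

def load(name, func):
    """The 'loader' registers a chunk's address under its name."""
    symbol_table[name] = func

def resolve(name):
    """Done once at load time, like patching an address - not per call."""
    return symbol_table[name]

load("double", lambda x: x * 2)   # loader places the chunk
doubled = resolve("double")       # link step: grab the concrete address
print(doubled(21))                # → 42  (subsequent calls are direct)
```

This also hints at why no relocation list is needed: the resolution happens through the loader's own table rather than through patched absolute addresses.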



A programming language exists in any Turing complete environment. 

Magic: The Gathering has now published enough cards to become Turing complete.

You can watch such a computer executing a well-formed program.

People might not like programming in any given language. That's fine -- every language has its tradeoffs, and the ones we've chosen for G might not be a given person's cup of tea. But to claim G isn't a language is factually false. G has the facility to express all known models of computation. QED.

On 3/16/2020 at 5:28 PM, Rolf Kalbermatter said:

For the rest, the LabVIEW execution of code is much more like a C-compiled executable than any Virtual Machine language like Java or .NET's IL bytecode, as the compiled code in the VIs is fully native machine code. Also, bytecode is by nature defined to be address-independent, while machine code, although it can use location-independent addresses, usually has some absolute addresses in there.

This is not really true. I mean, it's kind of true, insofar as LV executes assembly-level instructions, not bytecode. But it is also misleading.

LabVIEW doesn't ever get to a deep call stack. Suppose you have one program where Alpha VI calls Beta VI, which calls Gamma VI, which calls Delta VI, and a second program which is just Omega VI. Now run both and record the deepest call stack that any thread other than the UI thread ever reaches. What you'll find is that both programs have the same maximum stack depth. That's because all VIs are compiled into separate "chunks" of code. When a VI starts running, the address of any chunk that doesn't need upstream inputs is put into the execution queue. Then the execution threads start dequeuing and running the chunks. When a thread finishes a chunk, part of that execution will decrement the "fire count" of downstream chunks. When one of those downstream chunks' fire count hits zero, it gets enqueued. The call stack is never deeper than is needed to do "dequeue, call the dequeued address"... about depth 10 (there are some start-up functions at the entry point of every exec thread).
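The fire-count scheduling described above can be sketched like so (a rough model with invented names, not NI's actual implementation):

```python
from collections import deque

class Chunk:
    """A compiled 'chunk': runs when all its inputs are satisfied."""
    def __init__(self, name, fire_count, downstream=None):
        self.name = name
        self.fire_count = fire_count      # upstream inputs still outstanding
        self.downstream = downstream or []

def run(chunks):
    # Enqueue every chunk that needs no upstream inputs.
    queue = deque(c for c in chunks if c.fire_count == 0)
    order = []
    while queue:                          # the "exec thread" loop:
        chunk = queue.popleft()           # dequeue, "call" the chunk...
        order.append(chunk.name)          # (stack depth never grows here)
        for d in chunk.downstream:        # ...then decrement downstream
            d.fire_count -= 1             # fire counts; enqueue any that
            if d.fire_count == 0:         # just became ready
                queue.append(d)
    return order

# Alpha -> Beta -> Gamma -> Delta, modelled as a chunk chain.
delta = Chunk("Delta", 1)
gamma = Chunk("Gamma", 1, [delta])
beta  = Chunk("Beta", 1, [gamma])
alpha = Chunk("Alpha", 0, [beta])

order = run([alpha, beta, gamma, delta])
print(order)  # → ['Alpha', 'Beta', 'Gamma', 'Delta']
```

Note how the "call depth" inside `run` is constant regardless of how long the chain is - which is the point being made about Alpha-through-Delta versus Omega.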


