Mefistotelis Posted March 12, 2020

I've seen traces of very old discussions about how to classify LabVIEW, so I assume the subject is well known and opinions are strong. Still, I didn't really find any comprehensive discussion, which is a bit surprising. The discussion always seems to lean towards whether there is really a compiler in LabVIEW - and yes, there is, though it prepares only small chunks of code linked together by the LVRT.

Today I watched the trailer for Doom Eternal and it made me notice something interesting - if LabVIEW is a programming environment, maybe Doom should be classified as one too?

Graphics cards are the most powerful processors in today's PCs. They can do a lot of multi-threaded computation, very fast and with a large emphasis on concurrency. To do that, games prepare small programs, e.g. in a C-like shader language if the graphics API is OpenGL (we call them shaders because originally they were used for shading and simple effects; by now they're full-fledged programs which handle geometry, collisions and other aspects of the game). Then a user-mode library, commonly known as the graphics driver, compiles that code into ISA assembly for the specific card model and sends it to the execution units of the graphics card. Some shaders are static, others are dynamic - generated during gameplay and modified on the go.

So, in Doom, like in LabVIEW:
- You influence the code by interacting with a graphics environment using mouse and keyboard
- There is a compiler which prepares machine code under the hood, and it's based on LLVM (at least one of the major GFX card manufacturers uses LLVM in their drivers)
- There is a huge OS-dependent shared library which does the processing of the code (LVRT or the 3D driver)
- The code gets compiled in real time as you go
- There is a large emphasis on concurrent programming; the code is compiled into small chunks which work in separate threads

You could argue that the user actions available in Doom might not allow you to prepare all the elements of a real programming language - but we really don't know. Maybe they do. Maybe you can, e.g., force a loop to be added to the code by a specific movement at a specific place. I often read that many arguments against LabVIEW come from people not really understanding the G language, having little experience with programming in it. Maybe it's the same with Doom - if you master it in the proper way, you can generate any code clause you want. Like LabVIEW, Doom is closed-source software with no documented formats.
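For the record, this is roughly what run-time shader compilation looks like from the application side - a minimal sketch, assuming a valid OpenGL context already exists and a loader such as GLAD provides the GL entry points; the shader source is just a placeholder, not anything from an actual game:

```cpp
// Sketch: handing GLSL source to the driver, which compiles it to the card's
// native ISA at run time. Assumes an OpenGL context and loader are set up.
#include <cstdio>
#include <glad/glad.h>   // or any other GL loader providing the entry points

static const char* fragSrc =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }\n";

GLuint compileFragmentShader()
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragSrc, nullptr); // source text goes to the driver
    glCompileShader(shader);                      // the driver's compiler runs here

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}
```

The point is only that the compilation to machine code happens inside the driver, on the user's machine, while the program is running - which is the parallel to LabVIEW being drawn above.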
shoneill Posted March 12, 2020

To make comparisons with games, you might be better off looking at the Blueprint editor which is part of Unreal Engine. It feels a lot like LabVIEW. It's not controlled from within the game, but it's a lot closer to the kind of thing you're trying to suggest here.
JKSH Posted March 12, 2020

12 hours ago, Mefistotelis said:

though it prepares only small chunks of code linked together by the LVRT.

You mean like how a Java compiler prepares only small chunks of code linked together by the JVM?
Mefistotelis Posted March 12, 2020

17 minutes ago, JKSH said:

You mean like how a Java compiler prepares only small chunks of code linked together by the JVM?

There are similarities between LabVIEW and Java, but there are also considerable differences:
- LabVIEW compiles to native machine code, Java compiles to universal, platform-independent Java bytecode - or in other words, LabVIEW is not virtualized
- In Java, program flow is completely within the bytecode, while in LabVIEW the LVRT does most of the work, only calling small sub-routines from user data

I guess the threads being created and the data being transferred by the LVRT instead of by user code can be considered another level of virtualization? On some level what Java does is similar - it translates chunks of bytecode to something which can be executed natively, and the JRE merges such "chunks". Maybe the right way to phrase it is: LabVIEW has virtualized program flow but native user code execution, while Java just provides a virtual machine and gives complete control to the user code inside.
shoneill Posted March 13, 2020

21 hours ago, Mefistotelis said:

There are similarities between LabVIEW and Java, but there are also considerable differences:

I feel like you're stretching here to make similarities seem to exist when they do not. Your argument regarding the LVRT is obviously flawed. If that were the case, wouldn't every program which accesses Win32 functions be virtualised? No, it isn't. It's linking between actual compiled machine code. It is not platform-independent; it is compiled for the current machine architecture. So is Visual C++ virtualised because there are VC++ runtimes installed on Windows? To paraphrase "The Princess Bride".
Mefistotelis Posted March 13, 2020

You seem to have quoted the wrong line - the one you quoted does not relate to virtualization, nor to anything else you wrote.

No, VC++ programs are not virtualized - neither in the CPU architecture sense, nor in the program flow sense. So you are right in your characterization of VC++, but how exactly is that an argument here? If you're testing my definition:

On 3/12/2020 at 2:42 PM, Mefistotelis said:

LabVIEW has virtualized program flow but native user code execution

Then: in VC++, you write the main() function yourself. You call sub-functions yourself. These calls are compiled into native code and executed directly by the assembly "call" instruction (or its equivalent, depending on the architecture). Your code, compiled to native assembly, controls the execution.

In LabVIEW, you still run on a real CPU which has "call" instructions - but there are no calls in the compiled part. The compiled pieces are simple blocks which process input into output, and the program flow is simply not there. It is controlled by the LVRT, and LabVIEW "pretends" for you that the CPU works differently - it creates threads and calls your small chunks based on conditions such as the existence of input data. It creates an environment whose architecture seems to mimic what we currently have in graphics cards - hence the initial post (though I know it was originally mimicking PLL logic; complex GPUs came later). This is not how CPUs normally work. In other words, it VIRTUALIZES the program flow, detaching it from the native architecture.
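To make the distinction concrete, here is a toy contrast of the two flow models - entirely an illustration, not LabVIEW or VC++ internals; the structures and names are invented:

```cpp
// Toy contrast of "who owns the program flow"; not actual LabVIEW internals.
#include <functional>
#include <vector>

// (a) Conventional compiled program: the user's code drives the flow itself,
//     and each step becomes a literal `call` instruction in the binary.
int square(int x) { return x * x; }
int direct_flow(int x) { return square(x) + 1; }

// (b) Runtime-driven flow: each "block" is a self-contained transformation
//     and never calls the next one; a separate scheduler decides what runs.
struct Block {
    std::function<int(int)> body;   // pure input -> output
};

int runtime_flow(const std::vector<Block>& blocks, int start)
{
    int value = start;
    for (const Block& b : blocks)   // the "runtime" owns the sequencing
        value = b.body(value);
    return value;
}
```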
shoneill Posted March 13, 2020

6 hours ago, Mefistotelis said:

If you're testing my definition:

You haven't offered a definition. You've made some statements using already-established words with a meaning which is (I think) different from the one they normally carry. Hence my meme.

The LVRT does NOT organise program flow. That's done directly by the compiled code of the VI, which implements the program flow itself. I do not know where you get your idea from. Again, I feel you have misunderstood what the LV Run-Time does. It is really comparable to the C++ run-time installations which exist on practically every Windows system. So for me, if you agree that VC++ is not virtualised, then by extension you also agree that LabVIEW is not virtualised.
Mefistotelis Posted March 14, 2020

I got the idea from reverse engineering LV. I can look at it while it works. I can look at the assembly chunks extracted from VI files. Can you share the source of your idea?

As I see it, the LVRT and the MS C++ runtime are definitely not comparable - the list of differences is long. Both are shared libraries, both provide an API which implements some better or worse defined standard, both were compiled in VC++ - and that's all they have in common.

As for the meme, it is botched. The original was less direct, which was part of the comedy. It is possible that we simply have different ideas of what virtualization means, though.
Rolf Kalbermatter Posted March 16, 2020 (edited)

On 3/14/2020 at 1:24 AM, Mefistotelis said:

I got the idea from reverse engineering LV. I can look at it while it works. I can look at the assembly chunks extracted from VI files. Can you share the source of your idea? As I see it, the LVRT and the MS C++ runtime are definitely not comparable - the list of differences is long. Both are shared libraries, both provide an API which implements some better or worse defined standard, both were compiled in VC++ - and that's all they have in common. As for the meme, it is botched. The original was less direct, which was part of the comedy. It is possible that we simply have different ideas of what virtualization means, though.

The main difference between LabVIEW and a C compiled file is that the compiled code of each VI is contained in that VI, and the LabVIEW Runtime then links these code chunks together when it loads the VIs. In C the code chunks are per C source file, put into object files, and all those object files are then linked together when building the final LIB, DLL or EXE. Such an executable image still has relocation tables that the loader has to adjust when the code is loaded at a different memory address than the preferred memory address defined at link time, but that is a pretty simple step. The LabVIEW runtime linker has to do a bit more work - work that the linker part of the C compiler has mostly already done.

For the rest, LabVIEW's execution of code is much more like a C compiled executable than any virtual machine language like Java or .NET's IL bytecode, as the compiled code in the VIs is fully native machine code. Also, bytecode is by nature address-independent, while machine code - although it can use position-independent addressing - usually has some absolute addresses in it.

It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine, but that does not usually mean those conclusions are correct. In this case the code chunks in each VI are real compiled machine code, targeted directly at the CPU. In the past this was done through a proprietary compiler engine that created the final machine code in several stages. It already included the separation where the diagram was first translated into a directed graph, which was then optimized in several steps, and the final result was put through a target-specific compiler stage that created the actual machine code. This was however done in such a way that it wasn't too easy to switch the target-specific compiler stage on the fly, so cross-compiling wasn't easy to add when they developed the Real-Time addition to LabVIEW. They eventually improved that with a unified API to the compiler stages so that they could be switched on the fly to allow cross-compilation for the real-time targets, which eventually appeared in LabVIEW 7. LabVIEW 2009 finally introduced the DFIR (Dataflow Intermediate Representation) by formalizing the directed graph representation further, so that more optimizations could be performed on it, and in LabVIEW 2010 it could be used as an input to the LLVM (Low-Level Virtual Machine) compiler infrastructure. While this would theoretically allow leaving the code in an intermediate-language form that is only translated on the actual target at runtime, that is not what NI chose to do in LabVIEW, for several reasons.
LLVM creates fully compiled machine code for the target, which is then stored in the VI (for a built executable, or if code separation is not enabled), otherwise in the compile cache. When you load a VI hierarchy into memory, all the code chunks for each VI are loaded into memory, and based on linker information created at compile time and also stored in the VI, the linker in the LabVIEW runtime makes several modifications to each code chunk to make it executable at the location where it is loaded and to have it call into the correct other code chunks that each VI consists of. This is indeed a bit more than what the PE loader in Windows needs to do when loading an EXE or DLL, but it isn't really very different. The only real difference is that the linking of the COFF object modules into one bigger image has already been done by the C compiler when compiling the executable image, and that LabVIEW doesn't really use COFF or OMF to store its executables, as it does all the loading and linking of the compiled code itself and doesn't need to rely on an OS-specific binary image loader.
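To make the "loader patches the chunk" idea concrete, here is a deliberately simplified sketch of a relocation pass - the data structures are invented for illustration and are not LabVIEW's actual linker format:

```cpp
// Toy relocation pass: patch placeholder addresses in a freshly loaded code
// chunk so it runs at the address it actually landed on. Illustration only.
#include <cstdint>
#include <cstring>
#include <vector>

struct Relocation {
    std::size_t offset;       // byte offset in the chunk where an address lives
    std::size_t symbolIndex;  // which other chunk/symbol it should point at
};

void applyRelocations(std::vector<std::uint8_t>& chunk,
                      const std::vector<Relocation>& relocs,
                      const std::vector<std::uintptr_t>& symbolAddresses)
{
    for (const Relocation& r : relocs) {
        std::uintptr_t target = symbolAddresses[r.symbolIndex];
        // overwrite the placeholder bytes with the resolved absolute address
        std::memcpy(chunk.data() + r.offset, &target, sizeof(target));
    }
}
```

An OS loader does essentially this with a PE/ELF relocation table; the description above says the LabVIEW runtime does its own equivalent using linker information stored in each VI.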
Mefistotelis Posted March 17, 2020

Great explanation.

15 hours ago, Rolf Kalbermatter said:

It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine, but that does not usually mean those conclusions are correct.

Yeah, what you wrote matches my findings and clears up the wrong conclusions. So some actions which are performed by the linker in other languages are performed by the loader in LabVIEW, and LabVIEW has its own loader built into the LVRT, skipping the OS loader (the OS loader is only used to load the LVRT itself). After the initial linking done by the loader, execution is fully native. This also explains why I didn't run across any list of relocations.
Aristos Queue Posted March 27, 2020

A programming language exists in any Turing-complete environment. Magic: The Gathering has now published enough cards to become Turing complete. You can watch such a computer be executed by a well-formed program. People might not like programming in any given language. That's fine -- every language has its tradeoffs, and the ones we've chosen for G might not be a given person's cup of tea. But to claim G isn't a language is factually false. G has the facility to express all known models of computation. QED.
Aristos Queue Posted March 27, 2020

On 3/16/2020 at 5:28 PM, Rolf Kalbermatter said:

For the rest, LabVIEW's execution of code is much more like a C compiled executable than any virtual machine language like Java or .NET's IL bytecode, as the compiled code in the VIs is fully native machine code. Also, bytecode is by nature address-independent, while machine code - although it can use position-independent addressing - usually has some absolute addresses in it.

This is not really true. I mean, it's kind of true, insofar as LV executes assembly-level instructions, not bytecode. But it is also misleading. LabVIEW never builds up a deep call stack. Suppose you have one program where Alpha VI calls Beta VI, which calls Gamma VI, which calls Delta VI, and a second program which is just Omega VI. Now you run both and record the deepest call stack that any thread other than the UI thread ever reaches. What you'll find is that both programs have the same maximum stack depth.

That's because all VIs are compiled into separate "chunks" of code. When a VI starts running, the address of any chunk that doesn't need upstream inputs is put into the execution queue. Then the execution threads start dequeuing and running each chunk. When a thread finishes a chunk, part of that execution decrements the "fire count" of the downstream chunks. When one of those downstream chunks' fire count hits zero, it gets enqueued. The call stack is never deeper than is needed to do "dequeue, call the dequeued address"... about depth 10 (there are some start-up functions at the entry point of every exec thread).
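A toy, single-threaded sketch of that fire-count scheduling idea - the names and data structures here are invented for illustration, not LabVIEW's - shows why the stack never grows with the VI hierarchy: chunks are dequeued and called from one flat loop.

```cpp
// Toy single-threaded version of fire-count scheduling; illustration only.
#include <functional>
#include <queue>
#include <vector>

struct Chunk {
    std::function<void()> run;     // compiled body of one piece of the diagram
    std::vector<int> downstream;   // chunks that consume this chunk's outputs
    int fireCount = 0;             // number of inputs still missing
};

void execute(std::vector<Chunk>& chunks)
{
    std::queue<int> ready;
    for (int i = 0; i < static_cast<int>(chunks.size()); ++i)
        if (chunks[i].fireCount == 0)          // needs no upstream inputs
            ready.push(i);

    // "dequeue, call the dequeued address" -- the stack stays flat no matter
    // how deep the VI hierarchy is, because chunks never call each other.
    while (!ready.empty()) {
        int i = ready.front(); ready.pop();
        chunks[i].run();
        for (int d : chunks[i].downstream)
            if (--chunks[d].fireCount == 0)    // last missing input arrived
                ready.push(d);
    }
}
```

In the description above there are multiple execution threads pulling from the queue rather than one loop, but the shallow-stack property comes out the same.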