Mefistotelis


LabVIEW Information

  • Version
    LabVIEW 7.0
  • Since
    1986


  1. I am testing my VI parser by extracting and re-creating all VI files from LV2014. One file is a bit different: vi.lib/addons/_office/_wordsub.llb/Word_Add_Document.vi This one has its Block Diagram and Front Panel stored in the "c" version of the heap format, which is already unusual for LV14 files. But there's another strange thing: it doesn't seem to have Type Mapping data, yet it still has a Default Fill of Data Space (which normally can only be parsed knowing the Type Mapping). It looks like there's another VI file stored inside instead, and the Type Mapping comes from that internal one. The internal VI file seems to be referred to as a "conglomerate resource" - its block ID is CGRS. While it doesn't matter for what I'm doing, I'm wondering what that is - there's only one file in LV14 which looks like this. What is the purpose of storing that VI within another VI, while all other VIs are linked in a way which keeps them as separate files?
  2. To answer myself here:
     - VCTP (VI Consolidated Types) ends with an array of "top level" types - the ones which are used in other sections.
     - The top-level type ID of the type used for the salt is stored in CPC2.
     - The TypeID values used in diagrams are mapped through another level - TM80 contains a list of top-level types for that.
     Btw, I now have the Default Fill of Data Space figured out. Here's LabVIEW on the right, and the extracted VI file with the values on the left.
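     The double indirection above can be sketched as follows; the data shapes (plain Python lists) and the function name are my assumptions for illustration, not pylabview's actual API:

```python
def resolve_top_level_type(vctp_types, top_level_ids, top_index):
    """Resolve a 'top level' type index (as referenced by CPC2 or an
    entry in TM80) to the actual type record stored in VCTP.

    vctp_types    -- all type records from the VCTP section (assumed list)
    top_level_ids -- the trailing array of top-level type indexes in VCTP
    top_index     -- index as stored in CPC2 / a TM80 entry
    """
    return vctp_types[top_level_ids[top_index]]

# Toy data: three type records, two of which are marked "top level".
types = ["I32 record", "cluster record", "string record"]
top_level = [2, 0]  # VCTP's trailing array of top-level type IDs
```

So a salt TypeID of 0 in CPC2 would land on the string record here, after going through the top-level array.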
  3. What could there be to discuss in integer formats? There's the 32-bit signed integer; many of today's applications use only this one and all is fine. But LV is a programming environment, so it should have more types - 8-bit, 16-bit and 32-bit, in signed and unsigned variations. And today it would be hard to skip 64-bit, also signed and unsigned. So we have 8 types, and that's it, right?
     Those are the types allowed for user code, and they are also used by the binary formats of LV - but that's definitely not all. Every engineer knows the binary-coded-decimal format - and of course it is used as well. Version numbers are always stored with it, i.e. 0x1400 means version 14, not 20.
     What else? Well, imagine you have software from the 16-bit era and want to make it compatible with modern standards. You previously used the minimal number of bits to store various counts. But today, 8-bit and 16-bit values may not be enough to store huge multidimensional arrays, counts and sizes. What do you do? You may just replace the values with longer ones, breaking compatibility. But you may also invent a new, variable-size integer format. Even many of them. And here they come:
     - a 16-bit value; if the highest bit is set, then another 16-bit value should be read and the highest bit toggled off. Nice solution, but we do not get the full 32-bit range.
     - a 16-bit signed value; if it is -0x8000, then the real value is stored in a following 32-bit area. So it may take 6 bytes, but at least it allows the full 32-bit range.
     - an 8-bit value; if it is 255, then the real value is stored in a following 16-bit area, and if it is 254, the real value follows in a 32-bit area. So it takes either 1, 3 or 5 bytes, and allows the full 32-bit range.
     For some reason, 64-bit values are not supported (at least not in LV14 - I didn't look at newer versions). There are also signed variations. In total, we now have 13 different ways of storing an integer.
     Now, most of the values are stored in big endian format, but single ones here and there are left in little endian. So we do need to include endianness as well. Though I haven't found any of the variable-size values in little endian. Yet.
     In case you're interested in how exactly the variable-size values are stored, here is my implementation of reading and creating these: https://github.com/mefistotelis/pylabview/blob/master/LVmisc.py
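     The three variable-size formats can be sketched as readers in Python; the field layout follows my reading of the description above (big endian throughout, high bit of the combined value cleared in variant 1), not pylabview's exact code:

```python
import struct

def read_vint_16_16(data, pos):
    # Variant 1: a 16-bit word; if its high bit is set, a second 16-bit
    # word follows and the high bit is cleared (so only a 31-bit range).
    first = struct.unpack_from(">H", data, pos)[0]
    if first & 0x8000:
        second = struct.unpack_from(">H", data, pos + 2)[0]
        return ((first & 0x7FFF) << 16) | second, pos + 4
    return first, pos + 2

def read_vint_s16_32(data, pos):
    # Variant 2: a signed 16-bit word; the sentinel -0x8000 means the
    # real value follows as a full 32-bit word (2 or 6 bytes total).
    first = struct.unpack_from(">h", data, pos)[0]
    if first == -0x8000:
        return struct.unpack_from(">i", data, pos + 2)[0], pos + 6
    return first, pos + 2

def read_vint_8_16_32(data, pos):
    # Variant 3: an 8-bit value; 255 means a 16-bit value follows,
    # 254 means a 32-bit value follows (1, 3 or 5 bytes total).
    first = data[pos]
    if first == 255:
        return struct.unpack_from(">H", data, pos + 1)[0], pos + 3
    if first == 254:
        return struct.unpack_from(">I", data, pos + 1)[0], pos + 5
    return first, pos + 1
```

Each reader returns a (value, new_position) pair so a caller can keep walking the buffer.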
  4. Great explanation. Yeah, what you wrote matches my findings and clears up my wrong conclusions. So some actions which are performed by the linker in other languages are performed by the loader in LabVIEW, and LabVIEW has its own loader built into LVRT, skipping the OS loader (the OS loader is only used to load LVRT itself). After the initial linking done by the loader, execution is fully native. This also explains why I didn't run across any list of relocations.
  5. I got the idea from reverse engineering LV. I can look at it while it works. I can look at the assembly chunks extracted from VI files. Can you share the source of your idea? As I see it, LVRT vs. MS STDC++ is definitely not comparable - there's a long list of differences. Both are shared libraries, both provide an API which implements some better or worse defined standard, both were compiled in VC++ - and that's all they have in common. As for the meme, it is botched. The original was less direct, which was part of the comedy. It is possible that we just have different ideas of what virtualization means, though.
  6. You seem to have quoted the wrong line - this one does not relate to virtualization nor to any of what you wrote. No, VC++ programs are not virtualized - neither in the CPU architecture area, nor in the program flow area. So you are right with your VC++ characterization, but how exactly is that an argument here? If you're going after testing my definition: in VC++, you write the main() function yourself. You call sub-functions yourself. These calls are compiled into native code and executed directly by the assembly "call" instruction (or an equivalent, depending on architecture). Your code, compiled to native assembly, controls the execution. In LabVIEW, you still run on a real CPU which has "call" instructions - but there are no "call" lines in the compiled part. There are simple blocks which process input into output, and the program flow is simply not there. It is controlled by LVRT, and LabVIEW "pretends" for you that the CPU works differently - it creates threads, and calls your small chunks based on conditions like the existence of input data. It creates an environment where the architecture seems to mimic what we currently have in graphics cards - hence the initial post (though I know it was originally mimicking PLL logic; complex GPUs came later). This is not how CPUs normally work. In other words, it VIRTUALIZES the program flow, detaching it from the native architecture.
  7. There are similarities between LabVIEW and Java, but there are also considerable differences:
     - LabVIEW compiles to native machine code, while Java compiles to universal, platform-independent Java bytecode - or in other words, LabVIEW is not virtualized.
     - In Java, program flow is completely within the bytecode, while in LabVIEW the LVRT does most of the work, only calling small sub-routines from user data.
     I guess the threads being created and data being transferred by LVRT instead of user code can be considered another level of virtualization? On some level what Java does is similar - it translates chunks of bytecode to something which can be executed natively, and the JRE merges such "chunks". Maybe the right way to phrase it is: LabVIEW has virtualized program flow but native user code execution, while Java just provides a Virtual Machine and gives complete control to the user code inside.
  8. I've seen traces of very old discussions about how to classify LabVIEW, so I assume the subject is well known and opinions are strong. Though I didn't really find any comprehensive discussion, which is a bit surprising. The discussion always seems to lean towards whether there is really a compiler in LabVIEW - and yes, there is, though it prepares only small chunks of code linked together by the LVRT. Today I watched the trailer for Doom Eternal, and that made me notice an interesting thing - if LabVIEW is a programming environment, maybe Doom should be classified as one too? Graphics cards are the most powerful processors in today's PCs. They can do a lot of multi-threaded computations, very fast and with a large emphasis on concurrency. To do that, they prepare small programs, e.g. in a C-like shader language if the graphics API is OpenGL (we call them shaders as originally they were used for shading and simple effects; but now they're full-fledged programs which handle geometry, collisions and other aspects of the game). Then a user-mode library, commonly known as the graphics driver, compiles that code into ISA assembly for the specific card model and sends it to the Execution Units of the graphics card. Some shaders are static, others are dynamic - generated during gameplay and modified on the go.
     So, in Doom, like in LabVIEW:
     - You influence the code by interacting with a graphics environment using mouse and keyboard.
     - There is a compiler which prepares machine code under the hood, and it's based on LLVM (at least one of the major GFX card manufacturers uses LLVM in their drivers).
     - There is a huge OS-dependent shared library which does the processing of the code (LVRT or the 3D driver).
     - The code gets compiled in real time as you go.
     - There is a large emphasis on concurrent programming; the code is compiled into small chunks which work in separate threads.
     You could argue that the user actions in Doom might not allow preparing all elements of a real programming language - but we really don't know. Maybe they do. Maybe you can, e.g., force a loop to be added to the code by a specific movement at a specific place. I often read that many arguments against LabVIEW are caused by people not really understanding the G language, having little experience with programming in it. Maybe it's the same with Doom - if you master it in a proper way, you can generate any code clause you want. Like LabVIEW, Doom is closed-source software with no documented formats.
  9. pylabview could do it, but it looks like there are differences in "Virtual Instrument Tag Strings" section, and the parser can't read that section ATM: So - only NI can help you, unless you are willing to help in development of pylabview.
  10. Front Panel is now proper XML (though I only support "FPHb" for now; older LabVIEW has FPHP instead, and the latest versions use FPHc - those are not parsed, as I don't really need them for my use). Block Diagram is stored in exactly the same way, so I got "BDHb" support for free. I used the same general format LabVIEW's NED uses for XML panels. I can now either work on reading the "DFDS" section as well - it's quite complex, as it isn't a stand-alone section, meaning it needs data from other sections to parse - or I can ignore default data and start working on Front Panel re-creation without it.
  11. Working on Front Panel now. This is what pylabview generates:

     <?xml version='1.0' encoding='utf-8'?>
     <SL__rootObject ScopeInfo="0" class="oHExt" uid="1">
       <root ScopeInfo="0" class="supC" uid="10">
         <objFlags ScopeInfo="1">010000</objFlags>
         <bounds ScopeInfo="1">0000000000000000</bounds>
         <MouseWheelSupport ScopeInfo="1">00</MouseWheelSupport>
         <ddoList ScopeInfo="0" elements="61">
           <SL__arrayElement ScopeInfo="1" uid="64" />
           <SL__arrayElement ScopeInfo="1" uid="96" />

     And this is the same part generated by NED within LabVIEW:

     <SL__rootObject class="oHExt" uid="1">
       <root class="supC" uid="10">
         <objFlags>65536</objFlags>
         <bounds>(0, 0, 0, 0)</bounds>
         <MouseWheelSupport>0</MouseWheelSupport>
         <ddoList elements="61">
           <SL__arrayElement uid="64"/>
           <SL__arrayElement uid="96"/>
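     The two dumps appear to encode the same values in different forms: pylabview keeps raw big-endian hex strings while NED decodes them (0x010000 is indeed 65536). A small sketch of the conversion; the helper names are mine, and the field order I assume for bounds is not verified:

```python
import struct

def objflags_to_decimal(hex_str):
    # "010000" (raw hex from pylabview) -> 65536 (NED's decimal form)
    return int(hex_str, 16)

def bounds_to_tuple(hex_str):
    # Eight bytes taken as four big-endian signed 16-bit fields;
    # the (top, left, bottom, right) order is an assumption.
    return struct.unpack(">4h", bytes.fromhex(hex_str))
```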
  12. While I'm still working on parsers for all types of VI connectors, I also noticed the VICD block. Some old articles on its content:
     https://web.archive.org/web/20110101230244/http://zone.ni.com/devzone/cda/tut/p/id/5315
     https://web.archive.org/web/20120115152126/http://www.ni.com/devzone/lvzone/dr_vi_archived6.htm
     Not sure if I'll be decoding that section further, though - I believe the connectors block and default data block will be the only ones required to re-create the front panel. The binary format of the Front Panel should be easy to figure out by comparing binary data to the XML representation which can be generated in newer LV versions; has anybody tried that?
  13. I need some example files to test my tool. I am running it on all the LV14 standard VIs, but it looks like these do not use all possible features of the VI format. In particular, I am not seeing the use of some Refnum connectors, for example the type which 'Vi Explorer' calls "queue". Does anyone know what to click in LabVIEW to get such a connector within the VI file? EDIT: Forced myself to read the LabVIEW help; now I know how the Queue works. I still haven't found some refnums in the documentation, e.g. the Device Refnum.
  14. On Linux, you may just use ldd:
     $ ldd my_binary
     Or, if the loader refuses to run it on the current platform but you still need to know:
     $ readelf -d my_binary
     For Windows, there are tons of "PE Viewers" and "PE Explorers" which let you look at the import table.
  15. I don't think I can be completely converted to your point of view: I agree refactoring LV doesn't make sense now, but I think something should have been done years ago to allow the support. Even if no conversion was done at the time, as soon as multi-lingual versions of Windows and Mac OS started popping up, it was obvious the conversion would become an issue. I'm not saying there should have been a conversion right away, just that even then, storing the information of which code page is in use would have been a prudent choice.
     Now for really implementing the conversion: the OS wouldn't need to support anything - `libiconv` can be compiled even in Watcom C (I'm not stating libiconv should have been used, only that doing code page conversion even in DOS was not an issue). Around 1994, I wrote a simple code page conversion routine myself, in Turbo Pascal. Not for Unicode - it converted directly between a few code pages, with a few simple translation tables. It also had a function to convert to pure ASCII - replacing every national character with the closest English symbol (or group of symbols). That would be good enough support for pre-Unicode OSes - it wasn't really necessary to support all Unicode characters, only to allow portability between the platforms which LabVIEW supported. Finally, I don't think LabVIEW uses native controls (buttons etc.) from the OS - it treats the native window as a canvas and draws its own controls. So support of multi-lingual text in controls is not bound to the OS in any way.
     For implementation details within LabVIEW: that would be more tricky, and I understand the possible issues with it. LabVIEW operates on binary data from various sources, and if the VI tells it to print a string, it doesn't keep track of whether that string came from the VI and has a known code page, or came from a serial port with a device talking in a different encoding. There are still ways to solve such issues, just not completely transparent for the user. Plus, most strings used in the user interface do not really change at runtime.
     I didn't actually know that this LabVIEW is considered the "classic" version and is being replaced by NXG. That is a strong argument against any refactoring of the old code. The conversion I introduced to my extractor works well, so this shouldn't be much of an issue for me.
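     For illustration, the kind of conversion discussed above is only a few lines in modern Python; the helper names are mine, and the ASCII fallback mirrors the "closest English symbol" trick from the old Turbo Pascal routine:

```python
import unicodedata

def convert_codepage(data: bytes, src_cp: str, dst_cp: str) -> bytes:
    """Convert raw bytes between two code pages via Unicode,
    substituting characters the target code page cannot express."""
    return data.decode(src_cp).encode(dst_cp, errors="replace")

def to_plain_ascii(text: str) -> str:
    """Replace national characters with their closest ASCII base letter
    by stripping combining marks after NFKD decomposition."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```

Direct translation tables (as in the 1994 routine) would do the same job without Unicode as an intermediate step, which is what made it feasible even under DOS.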