Everything posted by Rolf Kalbermatter

  1. Trying to get at the data pointer of control objects, while maybe possible, wouldn't be very helpful, since the actual layout, very much like that of the VI dataspace pointer, has changed and will keep changing frequently between LabVIEW versions. Nothing in LabVIEW is supposed to interface to these data spaces directly other than the actual LabVIEW runtime, and therefore there never has been, nor is there nowadays, any attempt to keep those data structures consistent across versions. If it suits the actual implementation, the structures can simply be reordered, and all the code that interfaces to external entities includi
  2. Well, they do have (or at least had) an Evaluation version, but that is a specially compiled version with a watermark and/or limited functionality. The license manager included in the executable is only a small part of the work. The Windows version uses the FlexLM license manager, but the important thing is the binding to their license server(s). Just hacking a small license manager into the executable that does some verification is not that big a part. Tying it into the existing license server infrastructure, however, is a major effort. And setting up a different license serv
  3. LabVIEW on non-Windows platforms has no license manager built in. This means that if you could download the full installer just like that, there would be no way for NI to enforce that anyone using it has a valid license. So only the patches are downloadable without a valid SSP subscription, since they are only incremental installers that add to an existing full install, usually replacing some files. That's supposedly also the main reason holding back a release of the Community editions on non-Windows platforms. I made LabVIEW run on Wine way back with LabVIEW 5.0 or so, also prov
  4. Generally speaking this is fine for configuration or even data files that the installer puts there for reading at runtime. However, you should not do that for configuration or data files that your application intends to write to at runtime. If you install your application in the default location (<Program Files>\<your application directory>), you do not have write access to this folder by default, since Windows considers it a very bad thing for anyone but installers to write in that location. When an application tries to write there, Windows will redirect it to a user specific shadow c
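
A minimal C sketch of the idea just described: resolve a per-user writable location for runtime configuration instead of writing next to the executable under <Program Files>. The "MyCompany\MyApp" folder and file names are placeholders, not from the original post.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* per-user, always writable on modern Windows */
    const char *base = getenv("LOCALAPPDATA");
    if (base == NULL)
        base = ".";                       /* crude fallback for illustration */

    char cfg_path[1024];
    snprintf(cfg_path, sizeof cfg_path,
             "%s\\MyCompany\\MyApp\\settings.ini", base);

    printf("Writable config file: %s\n", cfg_path);
    return 0;
}
```
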
  5. Actually, I'm using the System Configuration API instead. Aside from nisysconfig.lvlib:Initialize Session.vi, nisysconfig.lvlib:Create Filter.vi and nisysconfig.lvlib:Find Hardware.vi, everything is accessed directly through property nodes from the SysConfig API shared library driver, and there is very little on the LabVIEW level that can be linked wrongly.
  6. I haven't benchmarked the FIFO transfer with respect to element size, but I know for a fact that the current FIFO DMA implementation from NI-RIO does pack data to 64-bit boundaries. This made me change the previous implementation in my project from transferring 12:12 bit FXP signed integer data to 16-bit signed integers, since 4 12-bit samples are internally transferred as 64 bits over DMA anyhow, just as 4 16-bit samples are. (In fact, I'm currently always packing two 16-bit integers into a 32-bit unsigned integer for the purpose of the FIFO transfer, not because of performance but because of t
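
A rough host-side illustration of the packing mentioned above: two signed 16-bit samples combined into one unsigned 32-bit FIFO element and unpacked again. It only shows the bit layout; the real FPGA/host code obviously looks different.

```c
#include <stdint.h>
#include <stdio.h>

/* low 16 bits hold sample a, high 16 bits hold sample b */
static uint32_t pack_samples(int16_t a, int16_t b)
{
    return (uint32_t)(uint16_t)a | ((uint32_t)(uint16_t)b << 16);
}

static void unpack_samples(uint32_t word, int16_t *a, int16_t *b)
{
    *a = (int16_t)(word & 0xFFFF);
    *b = (int16_t)(word >> 16);
}

int main(void)
{
    int16_t a, b;
    uint32_t w = pack_samples(-1234, 5678);
    unpack_samples(w, &a, &b);
    printf("0x%08X -> %d, %d\n", w, a, b);
    return 0;
}
```
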
  7. Ahhh, I see: blocking when you request more data than is currently available. Well, I would in fact not expect the Acquire Read Region to perform much differently in that respect. I solved this in my last project a little differently though. Rather than calling FIFO Read with 0 samples to read, I used the <remaining samples> from the previous loop iteration to calculate an estimate of the amount of samples to read, similar to this formula: <previous remaining samples> + (<current sample rate> * <measured loop interval>), to determine the number of samples to reque
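
A small sketch of that estimate in C; the variable names and example values are mine, not from the actual project.

```c
#include <stdio.h>

int main(void)
{
    double sample_rate   = 50000.0;  /* S/s, example value            */
    double loop_interval = 0.010;    /* measured loop time in seconds */
    long   remaining     = 120;      /* <remaining samples> from the previous read */

    /* <previous remaining samples> + <current sample rate> * <measured loop interval> */
    long to_request = remaining + (long)(sample_rate * loop_interval);

    printf("Request %ld samples this iteration\n", to_request);
    return 0;
}
```
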
  8. Actually, there is another option to avoid the Typecast copy. Typecast is in fact not just a type reinterpretation like in C but also does byte swapping on all Little Endian platforms, which are currently all but the old VxWorks based cRIO platforms, since those use a PowerPC CPU that by default operates in Big Endian mode (this CPU can support both Endian modes but is typically always used in Big Endian mode). If all you need is a byte array, then use the String to Byte Array node instead. This is more like a C typecast, as the data type in at least Classic LabVIEW doesn't change at all (s
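
A C analogy of the difference described above: a plain reinterpretation keeps the native byte order (what String to Byte Array effectively gives you), while Typecast-style behavior delivers the bytes in big-endian order on little-endian machines. Illustration only, not LabVIEW internals.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x11223344;
    uint8_t  raw[4];

    /* plain reinterpretation: native (little-endian on x86) byte order */
    memcpy(raw, &value, sizeof raw);
    printf("native  : %02X %02X %02X %02X\n", raw[0], raw[1], raw[2], raw[3]);

    /* Typecast-like result: bytes in big-endian order */
    uint8_t be[4] = {
        (uint8_t)(value >> 24), (uint8_t)(value >> 16),
        (uint8_t)(value >> 8),  (uint8_t)value
    };
    printf("swapped : %02X %02X %02X %02X\n", be[0], be[1], be[2], be[3]);
    return 0;
}
```
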
  9. The application builder internally uses basically the same method to retrieve the icons as an image data cluster and then save them to the exe file resources. So whatever corruption happens in the built executable is likely the root cause for both the yellow graph image and the yellow icon. And it's likely dependent on something like the graphics card or its driver too, or something similar, as otherwise it would have been found and fixed long ago (if it happened on all machines).
  10. You can sign up and do it too 😀
  11. I remember some similar issues in the past on plain Windows. Not exactly the same, but it seemed to have to do with the fact that LabVIEW somehow didn't get the Window_Enter and Window_Leave events from Windows anymore, or at least not properly. And it wasn't just LabVIEW itself; some other applications started to behave strangely too. Restarting Windows usually fixed it. And at some point it went away just as it came, most likely with a Windows update or something. So I think it is probably something about your VM environment's cursor handling that gets Windows to behave in a way that doesn't s
  12. Because your logic is not robust. First, you seem to use a termination character, but your binary data can contain termination character bytes too, and then your loop aborts prematurely. Your loop currently aborts when error == TRUE OR errorCode != <bytes received is bytes requested>. Most likely your VISA timeout is a bit critical too? Also note that crossrulz reads 2 bytes for the <CR><LF> termination character while your calculation only accounts for one. So if what you write is true, there would still be a <LF> character in the input queue that will show up at t
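
A generic C sketch of the underlying principle: for binary payloads, read an exact byte count in a loop instead of relying on a termination character that can also occur inside the data. Plain POSIX read() here, not VISA.

```c
#include <stddef.h>
#include <unistd.h>

/* Returns 0 on success, -1 on error or premature end of stream. */
int read_exact(int fd, unsigned char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            return -1;          /* error or connection closed early */
        got += (size_t)n;       /* keep reading until 'len' bytes arrived */
    }
    return 0;
}
```
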
  13. I don't think that would work well for Github and similar applications. FINALE seems to work (I haven't tried it yet) by running a LabVIEW VI (in LabVIEW, obviously) to generate HTML documentation from a VI or VI hierarchy. This is then massaged into a format that their special WebApp can handle to view the different elements. Github certainly isn't going to install LabVIEW on their servers in order to run this tool. They might (very unlikely) use the Python code created by Mefistoles and published here to try to scan project repositories to see if they contain LabVIEW files. But why would they
  14. It used to be present in the first 50 on the TIOBE index, but as far as I remember the highest position was somewhere in the mid-30s. The post you quoted states that it was at 37 in 2016. Of course, reading back my comment, I can see why you might have been confused. There was an "n" missing. 😀 Github LabVIEW projects are few and far between. Also I'm not sure if Github actively monitors for LabVIEW projects and based on what criteria (file endings, mention of LabVIEW in the description, something else?).
  15. Interesting theory. Except that I don't think LabVIEW ever got below 35 in that list! 😀
  16. I wasn't aware that LabVIEW still uses SmartHeap. I knew it did from LabVIEW 2.5 up to around 6.0 or so, but thought you guys had dropped that when the memory managers of the underlying OS got smart enough themselves not to make a complete mess of things (that specifically applied to the Win3.1 memory management and to a lesser extent to the MacOS Classic one).
  17. I simply use syslog in my applications and then a standard syslog server to do whatever is needed with the messages. Usually the messages are viewed in real time during debugging and sometimes simply dumped to disk afterwards from within the syslog server, but there is seldom much use for it once the system is up and running. If any form of traceability is required, we usually store all relevant messages in a database; quite often that is simply a SQL Server Express database.
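
For reference, a minimal sketch of what "just use syslog" boils down to on the wire: a UDP datagram to port 514 starting with a <PRI> field (RFC 3164 style framing). POSIX sockets; the server address and message text are placeholders, not from the original setup.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return 1;

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(514);                        /* standard syslog port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);  /* placeholder server   */

    /* PRI 14 = facility user(1) * 8 + severity informational(6) */
    const char *msg = "<14>MyApp: measurement cycle started";
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof dst);

    close(s);
    return 0;
}
```
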
  18. I've seen that with clients I have been working for on LabVIEW related projects. A new software development manager came in with a clear dislike for LabVIEW, considering it only a toy. The project was canceled and completely rebuilt based on a "real" PLC instead of an NI realtime controller. The result was a system with far fewer capabilities and a rather big delay in shipping the product. Obviously I didn't work on that "new" project and don't quite know if it was ever installed at the customer site. That said, we as a company are currently reevaluating our use of LabVIEW. We have no pl
  19. That's the standard procedure for path storage in LabVIEW. If the path root is different, LabVIEW will normally store the absolute path; otherwise a relative path, relative to the entity that contains the path. Using relative paths is preferable for most things, but can cause issues when you try to move files around outside of LabVIEW (or after you compiled everything into an exe and/or ppl). But absolute paths are not a good solution for this either. LabVIEW also knows so-called symbolic path roots. Those are paths starting with a special keyword such as <vilib>, <instrlib>
  20. Sure, it was a Windows extender (Win386), but basically built fully on the DOS/4GW extender they licensed from Rational Systems. It was all based on the DPMI specification, as basically all DOS extenders were. Windows 3.x was after all nothing more than a graphical shell on top of DOS. You still needed a valid DOS license to install Windows on top of it.
  21. One correction: the i386 is really always a 32-bit code resource. LabVIEW for Windows 3.1, which was a 16-bit operating system, was itself fully 32-bit, using the Watcom 32-bit DOS extender. LabVIEW was compiled using the Watcom C compiler, which was the only compiler that could create 32-bit object code to run under 16-bit Windows, by using the DOS extender. Every operating system call was translated from the LabVIEW 32-bit execution environment into the required 16-bit environment and back after the call, through the use of the DOS extender. But the LabVIEW generated code and the CINs were f
  22. I just made them up! I believe NI used the 'i386' as a FourCC identifier for the Win32 CINs. From my notes:
      i386 - Windows 32-bit
      POWR - PowerPC Mac
      PWNT - PowerPC on Windows
      POWU - PowerPC on Unix
      M68K - Motorola 680xx Mac
      sprc - Sparc on Solaris
      PA<s><s> - PA Risc on HP Unix
      ux86 - Intel x86 on Unix (Linux and Solaris)
      axwn - Alpha on Windows NT
      axln - Alpha on Linux
      As should be obvious some of these platforms never saw the light of an official release and all the 64-bit vari
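
Just to illustrate what a FourCC is: four ASCII characters packed into one 32-bit value. Whether NI stored these big- or little-endian in the resource files is not something this sketch claims to know.

```c
#include <stdint.h>
#include <stdio.h>

/* pack four characters into a 32-bit value, first character in the high byte */
static uint32_t fourcc(const char c[4])
{
    return ((uint32_t)(uint8_t)c[0] << 24) | ((uint32_t)(uint8_t)c[1] << 16) |
           ((uint32_t)(uint8_t)c[2] << 8)  |  (uint32_t)(uint8_t)c[3];
}

int main(void)
{
    printf("'i386' = 0x%08X\n", fourcc("i386"));
    printf("'M68K' = 0x%08X\n", fourcc("M68K"));
    return 0;
}
```
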
  23. Well, whether you are correct is of course not verifiable at this time, but there is definitely a chance of it. Don't disregard the VIs interfacing to the shared library though. A wrongly set up Call Library Node is at least as likely, if not more so. Obviously there seems to be something off where some code tries to access a value it believes to be a pointer but which instead is simply an (integer) number (0x04). Try to locate the crash more closely, first by adding more logging around the function calls to the library to see where the offending call may be, and if that doesn't help, by disabling code
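
A hypothetical C example of the kind of mismatch meant above (the function name is made up). If a DLL exports this function, the Call Library Node must pass 'status' as "Pointer to Value"; if it is configured as plain "Value" instead, the function receives the number itself (e.g. 4) as an address and faults when dereferencing something near 0x00000004.

```c
#include <stdint.h>
#include <stdio.h>

int32_t get_device_status(int32_t *status)   /* expects a valid pointer */
{
    if (status == NULL)
        return -1;
    *status = 42;                             /* writes through the pointer */
    return 0;
}

int main(void)
{
    int32_t status = 0;
    int32_t err = get_device_status(&status); /* correct: pass an address */
    printf("err=%d status=%d\n", err, status);
    return 0;
}
```
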
  24. CINs have nothing to do with LabWindows/CVI, aside from the fact that there was a possibility to create them in LabWindows/CVI. They were the way of embedding external code in a LabVIEW VI before LabVIEW got the Call Library Node. They were based on the idea of code resources that every program on a Macintosh consisted of before Apple moved to Mac OS X. Basically, any file on a Mac consisted of a data fork that contained whatever the developer decided to be the data, and a resource fork, which is what the LabVIEW resource format was modelled after. For the most part the LabVIEW re
  25. I first thought it might have to do with the legacy CINs, but it doesn't look like that, although it may be a similar idea. On second thought, it actually looks like it could be the actual patch interface for the compiled object code of the VIs. It seems to be the actual code dispatch table that is generated during compilation of a VI. As such, I doubt it is very helpful for anything unless you want to modify the generated code somehow after the fact.