Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Well, they do have (or at least had) an Evaluation version, but that is a specially compiled version with a watermark and/or limited functionality. The license manager included in the executable is only a small part of the work. The Windows version uses the FlexLM license manager, but the important thing is the binding to their license server(s). Just hacking a small license manager into the executable that does some verification is not that big a part; tying it into the existing license server infrastructure is a major effort, and setting up a different license server infrastructure is most likely even more work. That is where the main effort is located. I have a license manager of my own that I have included in a compiled library (the shared library part, not the LabVIEW interface itself), and while it was some work to develop and make it work on all LabVIEW platforms, that pales in comparison to what would be needed to build an online license server, and adding a real e-commerce interface to that would be even more work.
  2. LabVIEW on non-Windows platforms has no license manager built in. This means that if you could download the full installer just like that, there would be no way for NI to enforce a valid license when using it. So only the patches are downloadable without a valid SSP subscription, since they are only incremental installers that add to an existing full install, usually replacing some files. That's supposedly also the main reason holding back a release of the Community Edition on non-Windows platforms. I made LabVIEW run on Wine way back with LabVIEW 5.0 or so, also providing some patches to the Wine project along the way. It was a rough ride and far from perfect even with the Wine patches applied, but it sort of worked. Current Wine is a lot better, but so are the requirements of current LabVIEW in terms of the Win32 API that it exercises. That NI Package Manager won't work is not surprising; it most likely relies HEAVILY on .Net functionality and was definitely not developed towards the .Net Core specification but rather the full .Net release. I doubt you can get it to work with .Net versions prior to at least 4.6.2.
  3. Generally speaking this is fine for configuration or even data files that the installer puts there for reading at runtime. However, you should not do that for configuration or data files that your application intends to write to at runtime. If you install your application in the default location (<Program Files>\<your application directory>), you do not have write access to this folder by default, since Windows considers it a very bad thing for anyone but installers to write to that location. When an application tries to write there, Windows will redirect the write to a user specific shadow copy, so when you then go and check the folder in Explorer you may wonder why you only see the old data from before the write. This is because, when reading, File Explorer shows a view of the folder with the original files where they exist and shows the shadow copy versions only for those files that didn't exist to begin with. Also, the shadow copy is stored in the user specific profile, so if you log in as a different user your application will suddenly see the old settings. Your writable files are supposed to be either in a subdirectory of the current user's or the common Documents folder (if a user is supposed to access those files in other ways, such as data files generated by your application), or in a subdirectory inside the current user or common <AppSettings> directory (for configuration files that you would rather not have your user tamper with by accident). They are still accessible, but kind of invisible in the by default hidden <AppSettings> directory. The difference between the current user and the common location needs to be taken into account depending on whether the data written to the files is meant to be accessible only to the current user or to any user on that computer. A small sketch of resolving such a writable location follows below.
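     As a minimal sketch (my own illustration, not code from the post; the application folder name "MyApp" is made up), this is roughly how a C program would resolve a per-user writable configuration directory on Windows instead of writing next to the executable under <Program Files>:

        /* link with shell32.lib, ole32.lib and uuid.lib */
        #include <windows.h>
        #include <shlobj.h>     /* SHGetKnownFolderPath */
        #include <stdio.h>

        int main(void)
        {
            PWSTR appData = NULL;
            /* FOLDERID_RoamingAppData -> C:\Users\<user>\AppData\Roaming
               use FOLDERID_ProgramData instead for settings shared by all users */
            if (SUCCEEDED(SHGetKnownFolderPath(&FOLDERID_RoamingAppData, 0, NULL, &appData)))
            {
                wchar_t cfgDir[MAX_PATH];
                swprintf(cfgDir, MAX_PATH, L"%ls\\MyApp", appData);
                CreateDirectoryW(cfgDir, NULL);   /* no harm if it already exists */
                wprintf(L"writable config dir: %ls\n", cfgDir);
                CoTaskMemFree(appData);
            }
            return 0;
        }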
  4. Actually, I'm using the System Configuration API instead. Aside from nisysconfig.lvlib:Initialize Session.vi, nisysconfig.lvlib:Create Filter.vi and nisysconfig.lvlib:Find Hardware.vi, everything is accessed directly through property nodes of the SysConfig API shared library driver, so there is very little on the LabVIEW level that could get wrongly linked.
  5. I haven't benchmarked the FIFO transfer with respect to element size, but I know for a fact that the current FIFO DMA implementation in NI-RIO does pack data to 64-bit boundaries. This made me change the previous implementation in my project from transferring 12:12-bit FXP signed integer data to 16-bit signed integers, since 4 12-bit samples are internally transferred as 64 bits over DMA anyhow, just as 4 16-bit samples are. (In fact I'm currently always packing two 16-bit integers into a 32-bit unsigned integer for the purpose of the FIFO transfer; not because of performance, but because of the implementation in the FPGA, which makes it faster to always grab two 16-bit memory locations at once and push them into the FIFO. Otherwise the memory read loop would take twice as much time, or require a higher loop speed, to be able to keep up with the data acquisition; a rough sketch of this packing follows below.) 64 12-bit ADC samples at 75 kHz add up to quite some data that needs to be pushed into the FIFO. I might consider pushing this up to 64-bit FIFO elements just to see if it makes a performance difference, but the main problem I have is not the FIFO but rather getting the data pushed onto the TCP/IP network in the RT application. Calling libc send() directly to push the data into the network socket stack, rather than going through TCP Write, seems to have more effect.
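     A minimal sketch (my own illustration; the names are made up) of packing two signed 16-bit samples into one unsigned 32-bit FIFO element and unpacking them again on the host side:

        #include <stdint.h>
        #include <stdio.h>

        /* low half carries the first sample, high half the second */
        static uint32_t pack2(int16_t a, int16_t b)
        {
            return (uint32_t)(uint16_t)a | ((uint32_t)(uint16_t)b << 16);
        }

        static void unpack2(uint32_t w, int16_t *a, int16_t *b)
        {
            *a = (int16_t)(w & 0xFFFF);
            *b = (int16_t)(w >> 16);
        }

        int main(void)
        {
            int16_t x, y;
            uint32_t w = pack2(-1234, 4321);
            unpack2(w, &x, &y);
            printf("%d %d\n", x, y);   /* prints: -1234 4321 */
            return 0;
        }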
  6. Ahhh, I see: blocking when you request more data than is currently available. Well, I would in fact not expect the Acquire Read Region to perform much differently in that respect. I solved this in my last project a little differently though. Rather than calling FIFO Read with 0 samples to read, I used the <remaining samples> output from the previous loop iteration to estimate the number of samples to request, roughly <previous remaining samples> + (<current sample rate> * <measured loop interval>); a small sketch follows below. Works flawlessly and saves a call to Read FIFO with 0 samples to read (which I do not expect to take any measurable execution time, but still). I need to do this since the sampling rate is in fact externally determined through a quadrature encoder and so can change dynamically over a pretty large range. But unless you can do all the data intensive work inside the IPE, as in the example you show, the Acquire FIFO Read Region offers no advantage in terms of execution speed over a normal FIFO Read.
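     A minimal sketch of that estimation (my own C rendering; the names are made up), requesting roughly the number of samples expected to be available instead of first polling the FIFO with a 0-sample read:

        #include <stdint.h>

        typedef struct {
            uint32_t remaining;    /* <remaining samples> from the previous FIFO Read */
            double   sampleRate;   /* current, externally driven sample rate in S/s */
        } FifoState;

        static uint32_t SamplesToRequest(const FifoState *s, double loopIntervalSec)
        {
            /* <previous remaining samples> + <current sample rate> * <measured loop interval> */
            return s->remaining + (uint32_t)(s->sampleRate * loopIntervalSec);
        }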
  7. Actually there is another option to avoid the Typecast copy. Typecast is in fact not just a type reinterpretation like in C; it also does byte swapping on all Little Endian platforms, which are currently all but the old VxWorks based cRIO platforms, since those use a PowerPC CPU that by default operates in Big Endian mode (this CPU can support both Endian modes but is typically always used in Big Endian mode). If all you need is a byte array, then use the String to Byte Array node instead. This is more like a C typecast, as the data type in at least Classic LabVIEW doesn't change at all (somewhat sloppily stated: a string is simply a byte array with a different wire color 😀). If you need a typecast sort of thing because your numeric array is something other than a byte array, but you don't want endianizing, then with a bit of low level byte shuffling (preferably in C, but with enough persistence it could even be done on the LabVIEW diagram, although not 100% safely) you could write a small function that swaps out two handles, with an additional correction of the numElm value in the array, and do this as a virtually zero cost operation.
     I'm not sure the Acquire Write Region would save you as much as you hope for here. The DVR returned still needs to copy your LabVIEW data array into the DMA buffer, and there is also some overhead from protecting the DVR access against the DMA routine, which will attempt to read the data. Getting rid of the inherent copy in the Typecast function is probably more performant.
     Why would the Read FIFO method block with high CPU usage? I'm not sure what you refer to here. Sure, it needs to allocate an array of the requested size and then copy the data from the DMA buffer into this array, and that of course takes CPU, but as long as you don't request more data than there currently is in the DMA buffer it does not "block", it simply has some considerable work to do. Depending on what you then do with the data, you do not save anything by using the Acquire Region variant. This variant is only useful if you can do all of the operations on the data inside the IPE in which you access the actual data. If you only use the IPE to read the data and then pass it outside of the IPE as a normal LabVIEW array, there is absolutely nothing to be gained by using the Acquire Read Region variant. In the case of Read FIFO, the array is generated (and copied into) in the Read FIFO node; in the Acquire Read Region version it is generated (and copied into) as soon as the wire crosses the IPE border. It's pretty much the same effort, and there is really nothing LabVIEW could do to avoid that. The DVR data is only accessible without creating a full data copy inside the IPE.
     I recently did a project where I used an Acquire Read Region but found that it had no real advantage over the normal FIFO Read, since all I did with the data was in fact pass it on to a TCP Write. As soon as the data needs to be sent to TCP Write, the data buffer has to be allocated as a real LabVIEW handle anyhow, and then it doesn't really matter whether that happens inside the FIFO Read or inside the IPE accessing the DVR from the FIFO Region. My loop timing was heavily dominated by the TCP Write anyhow. As long as I only read the data from the FIFO, my loop could run consistently at 10.7 MB/s with a steady 50 ms interval and very little jitter. As soon as I added the TCP Write, the loop timing jumped to 150 ms and steadily increased until the FIFO was overflowing.
     My tests showed that I could go up to 8 MB/s with a loop interval of around 150 ms ±50 ms of jitter without the loop starting to run off. This was also caused by the fact that the Ethernet port was really only operating at 100 Mb/s, because the switch I was connected to did not support 1 Gb/s. The maximum theoretical throughput at 100 Mb/s is only 12.5 MB/s, and the realistic throughput is usually around 60% of that. But even with a 1 Gb/s switch the overhead of TCP Write dominated the loop by far, making other differences, including the use of an optimized Typecast without any Endian normalization compared to the normal LabVIEW Typecast which did Endian normalization, fall into unmeasurable noise. And it's nitpicking really and likely only costs a few ns of extra execution time, but the calculation of the number of scans inside the loop, used to resize the array to a number of scans and number of channels, should all be done in integer space anyhow, using Quotient & Remainder (see the sketch below). There is not much use in Double Precision values for something that inherently should be integer numbers. There is even the potential for a wrong number of scans in the 2D array, since the To I32 conversion does standard rounding and so could end up one higher than the number of full scans in the read data.
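     A minimal sketch (my own, not code from the post) of those two points: a plain C cast is a pure reinterpretation without endian swapping, and the scan count should be computed with integer quotient and remainder so rounding can never report more scans than were actually read:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* fake FIFO data: 5 little-endian 16-bit samples, 2 channels */
            uint8_t raw[10] = {1,0, 2,0, 3,0, 4,0, 5,0};
            size_t  numBytes    = sizeof raw;
            size_t  numChannels = 2;

            /* reinterpretation only, no byte swapping (like String to Byte Array) */
            const int16_t *samples    = (const int16_t *)raw;
            size_t         numSamples = numBytes / sizeof *samples;

            /* integer quotient & remainder instead of double precision and rounding */
            size_t numScans  = numSamples / numChannels;   /* 2 full scans */
            size_t leftovers = numSamples % numChannels;   /* 1 sample of an incomplete scan */

            printf("%zu scans, %zu leftover samples, first sample %d\n",
                   numScans, leftovers, samples[0]);
            return 0;
        }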
  8. The application builder internally uses basically the same method to retrieve the icons as an image data cluster, which it then saves into the exe file resources. So whatever corruption happens in the built executable is likely the root cause for both the yellow graph image and the yellow icon. And it's likely dependent on something like the graphics card or its driver, or something similar, as otherwise it would have been found and fixed long ago (if it happened on all machines).
  9. You can sign up and do it too 😀
  10. I remember some similar issues in the past on plain Windows. Not exactly the same, but it seemed to have to do with the fact that LabVIEW somehow didn't get the Window_Enter and Window_Leave events from Windows anymore, or at least not properly. And it wasn't just LabVIEW itself; some other applications started to behave strangely too. Restarting Windows usually fixed it, and at some point it went away just as it came, most likely with a Windows update or something. So I think it is probably something about your VM environment's cursor handling that gets Windows to behave in a way that doesn't sit well with LabVIEW.
  11. Because your logic is not robust. First, you seem to use a termination character, but your binary data can contain termination character bytes too, and then your loop aborts prematurely. Your loop currently aborts when error == True OR errorCode != <bytes received is bytes requested>. Most likely your VISA timeout is a bit critical too? Also note that crossrulz reads 2 bytes for the <CR><LF> termination characters while your calculation only accounts for one. So if what you write is true, there would still be a <LF> character in the input queue that will show up at the next VISA Read that you perform; a sketch of a more robust binary read follows below. As to PNG containing a checksum: yes it does, multiple even! Every chunk in a PNG file carries a CRC32 checksum, and the image data chunks are compressed with the zlib deflate algorithm, which adds its own Adler32 checksum!
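     A minimal sketch (my own illustration against the VISA C API, not the original VIs) of reading a known number of binary bytes without letting a termination character cut the transfer short; instr is assumed to be an already opened session:

        #include <visa.h>

        ViStatus ReadBinaryBlock(ViSession instr, unsigned char *buf, ViUInt32 total)
        {
            ViUInt32 got = 0;
            /* binary payload: do not let reads end on a termination character */
            viSetAttribute(instr, VI_ATTR_TERMCHAR_EN, VI_FALSE);
            while (got < total)
            {
                ViUInt32 ret = 0;
                ViStatus err = viRead(instr, buf + got, total - got, &ret);
                got += ret;
                if (err < VI_SUCCESS)
                    return err;    /* real error or timeout; warnings are >= VI_SUCCESS */
            }
            return VI_SUCCESS;
        }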
  12. I don't think that would work well for Github and similar applications. FINALE seems to work (I haven't tried it yet) by running a LabVIEW VI (in LabVIEW, obviously) to generate HTML documentation from a VI or VI hierarchy. This is then massaged into a format that their special WebApp can handle to view the different elements. Github certainly isn't going to install LabVIEW on their servers in order to run this tool. They might (very unlikely) use the Python code created by Mefistoles and published here to try to scan project repositories to see if they contain LabVIEW files. But why would they bother with that? It's not as if LabVIEW is ever going to be the next big hype in programming. What happened with Python is pretty unique and definitely not going to happen for LabVIEW. There never was a chance for that, and NI's stance of not seeking international standardization didn't help, but I doubt they could have garnered enough widespread support for this even if they had seriously wanted to. Not even if they had gotten HP to join the bandwagon, which would have been as likely as the devil going to church 😀.
  13. It used to be present in the first 50 on the TIOBE index, but as far as I remember the highest position was somewhere in the mid 30s. The post you quoted states that it was at 37 in 2016. Of course, reading back my comment I can see why you might have been confused; there was an "n" missing. 😀 Github LabVIEW projects are few and far between. Also, I'm not sure whether Github actively monitors for LabVIEW projects and based on what criteria (file endings, mention of LabVIEW in the description, something else?).
  14. Interesting theory. Except that I don't think LabVIEW ever got below 35 in that list! 😀
  15. I wasn't aware that LabVIEW still uses SmartHeap. I knew it did from LabVIEW 2.5 up to around 6.0 or so, but thought you guys had dropped that when the memory managers of the underlying OS got smart enough themselves not to make a complete mess of things (that applied specifically to the Win3.1 memory management and, to a lesser extent, to the MacOS Classic one).
  16. I simply use syslog in my applications and then a standard syslog server to do whatever is needed with the messages (a small sketch follows below). Usually the messages are viewed in real time during debugging and sometimes simply dumped to disk afterwards from within the syslog server, but there is seldom much use for them once the system is up and running. If any form of traceability is required, we usually store all relevant messages into a database, quite often simply a SQL Server Express database.
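     As a minimal sketch (my own, assuming a POSIX system and a made-up application tag "myapp") of emitting such messages through the standard syslog facility, so that any ordinary syslog server can collect, view or store them:

        #include <syslog.h>

        int main(void)
        {
            openlog("myapp", LOG_PID, LOG_USER);                       /* tag messages with app name and PID */
            syslog(LOG_INFO, "measurement started, rate=%d S/s", 75000);
            syslog(LOG_ERR, "sensor %d not responding", 3);
            closelog();
            return 0;
        }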
  17. I've seen that with clients I have been working for on LabVIEW related projects. A new software development manager came in with a clear dislike for LabVIEW, considering it only a play toy. The project was canceled and completely rebuilt based on a "real" PLC instead of an NI realtime controller. The result was a system with a lot fewer possibilities and a rather big delay in shipping the product. Obviously I didn't work on that "new" project and don't quite know if it was ever installed at the customer site. That said, we as a company are currently reevaluating our use of LabVIEW. We have no plans to abandon it anytime soon, but other options are certainly not excluded from being used whenever it makes sense, and there have been more and more cases that in the past would have been solved in LabVIEW without even thinking twice, but are currently seriously being looked at to be done on other development platforms. And I know that this trend has been even stronger at many other companies in the last 5 years or so. My personal feeling is that the amount of questions in general has dropped. The decline is less visible on the NI fora, but all the other alternative fora, including LAVA, have seen a rather steep decline in activity. Much of the activity on the NI fora tends to be pretty basic beginner problems or installation perils and pretty few advanced topics. It could be because all the important questions for more advanced topics have already been tackled, but more likely it is because the people who traditionally use LabVIEW for advanced work are very busy in their jobs, while the others are dabbling their feet in it, come to the NI fora with their beginner problems and then move on to something else rather than developing to the intermediate and advanced level of LabVIEW use. Also, with the exception of a few notable people, participation of NI employees in the fora seems nowadays almost non-existent, and apart from the aforementioned notable exceptions, many times when an NI employee eventually reacts after a thread has stayed unanswered by other fora members for several days, it doesn't go very much beyond the standard questions of "What LabVIEW version are you using? Have you made sure the power plug is attached?" and other such pretty unhelpful things. This is especially painful when the post in question clearly states a problem that is not specific to a certain version and is pretty well known to anyone who would even bother to start up just about any LabVIEW version and try the thing mentioned! It sometimes makes me want to tell that blue eagle (äah, is that a greenie now?) to just shut up.
  18. That's the standard procedure for path storing in LabVIEW. If the path root is different, LabVIEW will normally store the absolute path, otherwise a relative path relative to the entity that contains the path. Using relative paths is preferable for most things, but can cause issues when you try to move files around outside of LabVIEW (or after you compiled everything into an exe and/or ppl). But absolute paths are not a good solution for this either. LabVIEW also knows so called symbolic path roots. Those are paths starting with a special keyword such as <vilib>, <instrlib>, and similar. They are a sort of relative path to whatever the respective location is configured to in the actual LabVIEW instance. By default they refer to the local vi.lib, instr.lib and so on directories, but they can be reconfigured in the LabVIEW options (though they really shouldn't be unless you know exactly what you are doing; the chances of messing up your projects that way are pretty high).
  19. Sure, it was a Windows extender (Win386), but basically built fully on the DOS/4GW extender they licensed from Rational Systems. It was all based on the DPMI specification, as basically all DOS extenders were. Windows 3.x was after all nothing more than a graphical shell on top of DOS; you still needed a valid DOS license to install Windows on top of it.
  20. One correction: the i386 is really always a 32-bit code resource. LabVIEW for Windows 3.1, which was a 16-bit operating system, was itself fully 32-bit, using the Watcom 32-bit DOS extender. LabVIEW was compiled with the Watcom C compiler, which was the only compiler that could create 32-bit object code to run under 16-bit Windows, by means of the DOS extender. Every operating system call was translated from the LabVIEW 32-bit execution environment into the required 16-bit environment and back after the call, through the DOS extender. But the LabVIEW generated code and the CINs were fully 32-bit compiled code. While those CINs were in the Watcom REX object file format, and LabVIEW for Windows 32-bit later by default used the standard Windows COFF object format for the CIN resources, it could even under Windows 32-bit still load and use the Watcom generated CINs in REX object file format. The main difference was simply that a REX object file had a different header than a COFF object file, but the actual compiled object code in there was in both cases simply i386 32-bit object code. Also, LabVIEW 2021, or more likely 2022, is very likely going to have an 'mARM' platform too. 😃
  21. I just made them up! I believe NI used 'i386' as the FourCC identifier for the Win32 CINs. From my notes:
       • i386 — Windows 32-bit
       • POWR — PowerPC Mac
       • PWNT — PowerPC on Windows
       • POWU — PowerPC on Unix
       • M68K — Motorola 680xx Mac
       • sprc — Sparc on Solaris
       • PA<s><s> — PA Risc on HP Unix
       • ux86 — Intel x86 on Unix (Linux and Solaris)
       • axwn — Alpha on Windows NT
       • axln — Alpha on Linux
     As should be obvious, some of these platforms never saw the light of an official release, and all the 64-bit variants as well as the VxWorks versions were never created at all, as CINs were long considered obsolete before VxWorks support was released in 8.2 and the first 64-bit version of LabVIEW was released with LabVIEW 2009 for Windows. There even was some work done for a MIPS code generator at some point. And yes, the problem with adding multiple CIN resources for different architectures was that it relied on the 'plat' resource inside the VI, so you could only add one CIN resource per CIN name into a VI, rather than multiple ones. All platforms except i386 and Alpha used to be Big Endian. Later, ARM came to the table as an additional Little Endian target. Currently only the VxWorks target is still a supported Big Endian platform in LabVIEW.
  22. Well, whether you are correct is of course not something we can settle at this time, but there is definitely a chance of that. Don't disregard the VIs interfacing to the shared library though; a wrongly set up Call Library Node is at least as likely, if not more so. Obviously there seems to be something off where some code tries to access a value it believes to be a pointer but that instead is simply an (integer) number (0x04). Try to locate the crash more closely, first by adding more logging around the function calls to the library to see where the offending call may be, and if that doesn't help, by disabling code sections to see when it stops crashing and when it starts crashing again. My first step, however, would be to review every single VI that calls this library and verify that the Call Library Node exactly matches the prototype of the respective function and doesn't pass in unallocated or too short string and array buffers (see the sketch below). LabVIEW certainly can cause segfaults, but it is extremely rare nowadays that that happens in LabVIEW itself. But even LabVIEW developers are humans and have been known to bork up things in the past. 😀
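     A minimal sketch (hypothetical library function, my own illustration) of the classic Call Library Node pitfall mentioned above: the C function writes into a caller supplied buffer, so the LabVIEW diagram must pass a string or array that is really pre-allocated to the promised size, otherwise adjacent memory gets overwritten and the crash can surface far away from the actual call:

        #include <stdio.h>
        #include <string.h>

        /* imagined prototype of a function in the crashing shared library */
        int GetDeviceName(char *name, int nameLen)
        {
            /* the library blindly fills up to nameLen bytes of the buffer */
            strncpy(name, "SIMULATED-DEVICE-01", (size_t)nameLen);
            name[nameLen - 1] = '\0';
            return 0;
        }

        int main(void)
        {
            /* In LabVIEW terms: the string wired to the Call Library Node must be
               pre-allocated (e.g. initialized to nameLen bytes) and nameLen must
               match that allocation, exactly as this C caller does. */
            char name[64];
            GetDeviceName(name, (int)sizeof name);
            printf("%s\n", name);
            return 0;
        }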
  23. CINs have nothing to do with LabWindows/CVI, aside from the fact that it was possible to create them in LabWindows/CVI. They were the way of embedding external code in a LabVIEW VI before LabVIEW got the Call Library Node. They were based on the idea of code resources that every program on a Macintosh consisted of before Apple moved to Mac OS X. Basically, any file on a Mac consisted of a data fork, which contained whatever the developer decided to be the data, and a resource fork, which is what the LabVIEW resource format was modeled after. For the most part the LabVIEW resource format is almost a verbatim copy of the old Macintosh resource format. A Macintosh executable usually consisted of an almost empty data fork and a resource fork where the compiled executable code objects were just one of many Apple defined resource types, together with icons, images, (localized) string tables and custom resource types that could be anything else a developer could dream up. Usually, with the exception of very well known resource types, these files also contained resource descriptions (a sort of type descriptor, like what LabVIEW uses for its type system) as an extra resource type for all the used resource types. The idea of CINs was interesting but cumbersome to maintain, as you had to use the special lvsbutil executable to put the CIN code resource into the VI file. And in my opinion they stopped short of creating a good system by only allowing one CIN code resource per CIN name. If they had instead allowed multiple CINs to exist for a specific name, one for each supported platform (m68k, mppc, mx86, sparc, wx86, wx64, vxwk, arm, etc.), one could have created a VI that truly runs on every possible platform by putting all the necessary code resources in there. As it was, if you put a Mac 68k code resource into the VI it would be broken on a Mac PPC or on a Windows system, and if you put a Windows code resource in it, it would be broken on the Mac. Also, once the Call Library Node supported most possible datatypes, CINs basically lost every appeal, unless you wanted to hide the actual code resource by obfuscating it inside the VI itself. And that was hardly an interesting feature, but was bought with lots of trouble from having to create separate C code resources for every single CIN (shared libraries can contain hundreds of functions all neatly combined in one file), and it was also a maintenance nightmare if you wanted to support multiple platforms. As to the articles mentioned in the link from dadreamer, I resurrected them from the Wayback Machine a few years ago and you can find them on https://blog.kalbermatter.nl
  24. I first thought it might have to do with the legacy CINs, but it doesn't look like that, although it may be a similar idea. On second thought, it actually looks like it could be the actual patch interface for the compiled object code of the VIs. It seems to be the actual code dispatch table that is generated when compiling a VI. As such I doubt it is very helpful for anything, unless you want to modify the generated code somehow after the fact.
  25. I'm not sure which Python interface you use, but the one I have seen really just uses a shared library to access those things, and it uses exactly that function too.