Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Don't get me started on JavaScript. Maybe if you absolutely want to program a Discord bot for some reason you can't really get around it, and if you want to botch your HTML documents you can use it too. But other than that, JavaScript combines the syntax of Java with the logic you had in Basic. Not a single provision to try to prevent the programmer from writing absolutely bad code, just like what we had with Basic.

     The Community Edition is technically already a subscription model, just with a subscription fee of $0 and the limitation that you are not allowed to use it for anything that earns you money. It expires at the end of the year and you have to reactivate it. If you use it to keep your LabVIEW knowledge up to date, that is fine; if you create free open source libraries to distribute, that is fine too. If you automate your doorbell, weather station or model train layout at home, that is also allowed. But creating a toolkit that you sell, or teaching a course for which you get paid, is not a valid use case for the Community Edition. The same goes for writing a test application for your own or someone else's company that tests products which are then sold. An interesting corner case would be creating tutorials for your YouTube channel: if that channel ends up being watched a lot and earns you more than a few (as in some two-digit number, a figure I just came up with but which I'm sure is about what NI would still consider valid) bucks per year, you are violating the Community Edition license conditions.
  2. I agree with all you said, but I don't see Python as a viable competitor to LabVIEW. It's getting ubiquitous in many ways, being on every IoT device nowadays and having libraries for just about anything you can think of, even more than Java ever had. But it's still interpreted (yes, I know about JIT, but that is beside the point), and I really hate doing UI programming in scripts; otherwise I might have stayed with LabWindows/CVI. And while crunching large data sets in LabVIEW can be challenging due to the data copies its dataflow paradigm forces, trying to do the same in Python generally causes me to fall asleep or check whether the computer hasn't frozen up. It can also be a challenge to find the pearls among the many Python libraries. A lot of what is posted out there is mediocre and shows that the person who wrote it didn't really understand what they were doing, but for some miraculous reason managed to get to a point where it did something. If anything, I see Python mainly as a competitor to .NET. While not the same in many ways, they are much closer to each other than either of them is to LabVIEW. The advantage (and disadvantage) of .NET is that Microsoft is behind it. They are powerful enough to make or break a platform, but at the same time they do not want to lose control of it, which has led to some strange acrobatic decisions in the past, like announcing to open source it but not quite doing it anyway.
  3. Yes, built applications will continue to work like they did so far. This means you can install the LabVIEW runtime and your executable on any computer you want and run it. Any runtime licenses, such as for IMAQ Vision etc., will still be per-machine licenses and not leases, just as they are now. The Realtime and FPGA Modules will also be software leases, but you only need them to write and build the compiled code; after that you can run it on any licensed hardware just like you do now. Deployment will of course only work from a licensed LabVIEW development system with an activated Realtime Module, but once it is deployed, the target can run for as long as you want, without limitations from a software lease. If they ever change that, it will be the day our company definitely stops using LabVIEW. We cannot have a machine stop execution because the NI license server takes a vacation or some software lease has expired. The Debug Deployment licenses of the different NI software products will also stay perpetual licenses. They are in principle the same as the development system, but you only have the right to use them to debug an application, not to write and build one. They are meant for computers on the factory floor where you want to run your built application but may need to be able to also execute the VIs in source code, to debug problems on the real hardware instead of on your development machine.
  4. You still have the problem of the embedded variant in your datatype. JSON is inherently a string format with some indications to distinguish numbers from strings, but nothing more. A variant is an AnyType. What you are trying to do is: convert this string (which from the JSON syntax can only be inferred to be a numeric, object, list or string) into Anything! Anything what? The JSON library could elect to convert strings into a string, numerics into a double and everything else into nothing, but that will break down as soon as you want to use that variant as the specific type that was pushed into the incoming variant in the beginning. The JSON string simply lacks any information about the actually wanted type in the variant, and therefore you cannot convert it back into a variant without losing significant compatibility and functionality. This is a fundamental problem that cannot be solved by throwing more code at it, but only by rethinking the way you want to save your configuration data.
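
     Although the LabVIEW variant has no direct textual equivalent, the underlying issue shows up in any language. A minimal Python sketch (the config values are made up for illustration):

     ```python
     import json

     # JSON only knows strings, numbers, booleans, lists, objects and null.
     # Any richer type information is lost on a round trip.
     original = {"gain": 42, "started": "2022-01-05T12:00:00"}

     restored = json.loads(json.dumps(original))

     # restored["gain"] is a plain int: whether the producer meant a U8,
     # U16 or I32 can no longer be inferred from the JSON text alone.
     # restored["started"] is a plain str: JSON has no timestamp type.
     print(type(restored["gain"]), type(restored["started"]))
     ```
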
  5. Me neither, and I can think of no use case at all for something like that. LabVIEW <-> Java bindings are not straightforward, and I cannot imagine anyone wanting to make that effort for something like log4j, or even for a more complex Java module that somehow uses log4j.
  6. You can always dream 😀. But what would be the incentive to lower prices once everyone who still wants to use the platform is locked in? A competitor maybe, but who would that be? There is nobody to be seen, unless you consider jumping ship completely and going with other solutions, maybe even Python, which is nice but not an option for a lot of the work done with LabVIEW nowadays.
  7. Well, it depends a little on the duration of use. With the perpetual model you paid a hefty initial price and then a yearly SSP fee for as long as you wanted to be able to use newer LabVIEW versions and have access to technical support (which for a few years has been next to useless, but seems to have improved considerably in the last year). With the subscription model you pay a lower initial price but a higher yearly subscription fee than what the SSP used to be. If you have a current SSP you can initially transition to a subscription license for the cost of your current SSP (and lock that in for up to three years), but after that is over, you pay significantly more per year than you did with the SSP. The break-even point seems to be at about four years: if you intend to use LabVIEW for less than that, you pay more with the perpetual model; after that, the total cost of the subscription gets higher than a perpetual license plus yearly SSP payments. In a way this sounds like NI is creating incentives to use LabVIEW on a pure project basis and forget about it after the project is done. The software lease costs on average about double what the SSP cost, and the perpetual license without the included first year of SSP about double what the software lease costs. But you can't buy a perpetual license without at least one year of SSP.
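
     A quick back-of-the-envelope sketch of that break-even claim, using made-up relative prices that follow the ratios above (SSP = 1 unit per year, subscription = 2 units per year, perpetual = 4 units up front plus SSP every year):

     ```python
     # Assumed relative prices, not actual NI list prices.
     SSP, LEASE, PERPETUAL = 1.0, 2.0, 4.0

     for years in range(1, 8):
         perpetual_total = PERPETUAL + years * SSP  # buy once, keep paying SSP
         lease_total = years * LEASE                # pay the subscription yearly
         print(f"{years} yr: perpetual {perpetual_total:.0f}, lease {lease_total:.0f}")

     # The totals cross at 4 years: below that the subscription is cheaper,
     # beyond it the perpetual license plus SSP wins.
     ```
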
  8. Actually it is! If you stop paying your SSP fees you can still continue to use the LabVIEW version that was current when you stopped paying. With the subscription model, if you stop paying, every software version stops working at the end of your subscription term. No loading up an old program to fix that little bug that would anyhow be too much of a hassle to port to the latest and greatest LabVIEW version; any LabVIEW version you have installed simply stops working. You could of course then install LabVIEW under the Community Edition license, but that violates the license agreement you entered when signing up for the Community Edition if your program is used in any professional capacity. And the Community Edition does not include things like RT, FPGA and just about everything else with a paid license, including the Application Builder. And if NI decides to shut down their license server altogether, or for certain older versions of LabVIEW, you are equally hampered. It's unlikely that they will let you activate LabVIEW 2009 indefinitely under a subscription model; I'm not even sure you can activate older versions if you sign up for a subscription now. Yes, it is where every software provider is heading nowadays. Greater revenues and user lock-in are tempting. Once they've got a user, be it for Office 365, Altium and now LabVIEW, the user only has the choice to keep paying or to stop using the software platform altogether, with all the hassle of trying to port existing solutions to a different platform that simply does the same anyhow. So the challenge is to get people to sign up and start using it; after that it is a safe business that is not easily going away, unless you totally start to f*ck over your users. Raising prices? If you do it in regular small steps, except for new users, you are not likely to lose many of them! Nobody wants to end up with Office documents that they can't open anymore!
  9. If you are in the NI Partner program you should have received an email yesterday. If not, you will need to wait a little longer or ask whoever in your company signed up to the NI Partner program about this. The "like" in the message from Gleichmann may have been meant in a sarcastic tone. For most NI partners it should have little direct impact, as they normally have a LabVIEW lease anyhow.
  10. Thanks for the feedback. It's very much as I suspected. Network communication in LabVIEW is not really that complicated; in fact it's easier than doing the same in C. The problem is when you have to do higher-level protocols such as HTTP or even more specialized ones. Here you have the choice of implementing the underlying protocol entirely in LabVIEW, which tends to be a major project in itself, or relying on some intermediate library like the LabVIEW HTTP Client Library, which is in essence simply a thin wrapper around libcurl. The first is a lot of effort, since HTTP is quite a complex protocol in itself with many variants depending on server version, authentication level and such. The second hides most of those details entirely, to the point where you can't access them anymore. Case in point: I recently had to do some X Windows configuration for an RT target. Possible options:

      - call the xrandr command line tool
      - call the X11lib shared library to do the relevant things directly
      - call the xcb shared library instead to do the relevant things
      - implement the X protocol directly in LabVIEW

      I ended up using option 1, simply because it was the quickest, but just for lolz I also tried the last option and got some experimental code running. Now, the X Windows protocol is extensive and it would be a really serious effort to make something that is reasonably functional. Another complicating factor is authentication, because that always involves some more or less obscure cryptographic functions. Even the xcb shared library, while implementing everything else from scratch (and it is also nowadays normally used as the backend for X11lib), relies on the original auth code from X11lib for this functionality rather than trying to reimplement it itself.
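
      For completeness, option 1 from the list above amounts to a few lines of host-side script. A minimal Python sketch; the output name "HDMI-1" and the mode are assumptions for illustration:

      ```python
      import subprocess

      def list_outputs() -> str:
          # Plain 'xrandr' prints the connected outputs and their supported modes.
          return subprocess.run(["xrandr"], capture_output=True, text=True,
                                check=True).stdout

      def set_display_mode(output: str, mode: str) -> None:
          # 'xrandr --output <name> --mode <WxH>' configures one connected output.
          subprocess.run(["xrandr", "--output", output, "--mode", mode], check=True)

      if __name__ == "__main__":
          print(list_outputs())
          set_display_mode("HDMI-1", "1920x1080")  # hypothetical output and mode
      ```
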
  11. Microsoft sells three levels of Windows IoT Enterprise: Entry Level, Value Level and High Level. They supposedly have to do with CPU performance, so I guess the number of supported cores and maybe other things like hyperthreading, the amount of supported memory, etc. The exact details seem to be available only under NDA.
  12. It depends on which Windows IoT you want to use. There are two different versions, I believe:

      - IoT Core: only supports UWP applications, no Win32 applications; also used on ARM platforms. LabVIEW, being an x86/64 Win32 application, can NOT run on this at all, no matter whether you try the IDE or built EXEs.
      - IoT Enterprise: the full Windows platform with streamlined distribution and settings, only available for x86/64 hardware. LabVIEW applications can run on this.
  13. Possibly the fact that when you install VAS (Vision Acquisition Software), you also get a pretty small subset of the VIs from the VDM (Vision Development Module) installed. I'm not sure whether that includes the IMAQ Vision control; maybe it doesn't.
  14. Despite having created LabPython about two decades ago, I always preferred to go with C. LabPython (and Python itself, too) is written in pure C. One reason, I think, is that Python is a high-level programming language just like LabVIEW. What I could do in Python I could always do in LabVIEW as well, but certain things are simply not really possible (or at least not with reasonable effort) in either of them and require a lower-level language. But C(++) is quite a different thing to work in, for sure. It gives great power and control, but that comes with great responsibilities too. While you have to try really hard to crash a LabVIEW or Python program, it's a matter of seconds to do that in C(++). This means programming in C is a little different than doing the same in LabVIEW or Python. If something goes wrong in your C program or library, it is often not just an error code that is returned; your test program simply dies, without any warnings, questions or dialogs asking whether you would maybe like to save your intermediate data results. In LabVIEW you typically get an error cluster out, look at it, determine where the problem is, fix it and start your program again, without any need to completely restart LabVIEW itself, or sometimes even the whole computer just to be sure. Once you are used to that, it is not really much different anymore, but it is certainly something to be aware of before making the decision.
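
      The difference is easy to demonstrate. A small Python sketch, where the ctypes call stands in for the kind of mistake that is trivial to make in C:

      ```python
      import ctypes

      # In a managed language, mistakes become catchable errors:
      try:
          [1, 2, 3][10]
      except IndexError as e:
          print("Python reports an error and keeps running:", e)

      # Crossing into C removes that safety net. Reading from address 0 is
      # the classic C mistake: uncomment the next line and the whole process
      # dies with a segmentation fault, with no exception and no chance to
      # save intermediate data.
      # ctypes.string_at(0)
      ```
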
  15. We noticed in the last few years that the outstanding support from NI technical support quickly deteriorated to the level of standard, untrained technical support that call centers located in some low-income countries often provide. However, I have to say that this trend seems to have been reversed recently. I've had three technical support questions in the course of about one year now; none was standard, and they involved things that were simply not possible because the feature was officially not present. The support people were very helpful in trying to find workarounds, and in two cases they even provided solutions, based on information gained directly from the product owners and developers, to access the feature through behind-the-scenes configuration files and APIs. In both cases this came with the strong warning that it was not the officially sanctioned way to do things and that there was a real chance it might break in future versions, but that it was at the moment the best that could be done.
  16. It's still the same. You cannot have multiple tasks accessing the same DAQ card for analog input. You need to combine the channels into one task with one scan rate and then pick out the data and distribute it to the different subsystems as needed.
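
      In textual form the pattern looks roughly like the sketch below, using NI's nidaqmx Python package; the device name "Dev1", the rate and the channel split between subsystems are assumptions:

      ```python
      import nidaqmx
      from nidaqmx.constants import AcquisitionType

      with nidaqmx.Task() as task:
          # All analog input channels of the card go into this ONE task ...
          task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
          # ... sampled at one common scan rate.
          task.timing.cfg_samp_clk_timing(1000.0,
                                          sample_mode=AcquisitionType.CONTINUOUS)

          # Read once, then distribute the channels to the subsystems.
          data = task.read(number_of_samples_per_channel=100)
          subsystem_a = data[0:2]  # ai0..ai1, e.g. temperatures
          subsystem_b = data[2:4]  # ai2..ai3, e.g. pressures
      ```
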
  17. Most likely that compile worker is a 32-bit application and you only have the 64-bit libgdiplus installed? Another possibility: the /usr/local/natinst/mono/lib64 directory was not properly added to the /etc/ld.so.conf file, and/or ldconfig was not executed afterwards.
  18. I couldn't view your VI, as I don't have LabVIEW 2020 installed on this machine, but basically you want to detect that the boolean you use to decide whether to write to the file has switched from false to true. An edge detector is very simple to build with a feedback register, as sketched below. Invert the boolean input to detect the falling edge instead. When the rising-edge boolean is true, write your header and then the data; otherwise only write the data.
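
      A textual Python equivalent of the feedback-register pattern (the file handling is stubbed out with prints, and the sample data is made up):

      ```python
      samples = [(False, "a"), (True, "b"), (True, "c"), (False, "d")]

      previous = False  # the feedback register: last iteration's boolean
      for write_enabled, data in samples:
          rising_edge = write_enabled and not previous  # false -> true transition
          previous = write_enabled
          if rising_edge:
              print("header line")  # write the header once, on the rising edge
          if write_enabled:
              print(data)           # then write the data on every enabled iteration
      ```
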
  19. It's not entirely without its own pitfalls. Different network controller chips have varying support for jumbo frames, some older ones do not support them at all, and all your routers, hubs and whatever else sits in between need to support them too. It's definitely helpful for full-HD and higher-resolution cameras and/or when you need to use multiple GigE cameras in parallel, but it makes your setup more sensitive: quickly switching to a different Ethernet port or replacing intermediate network infrastructure can suddenly cause issues.
  20. As mentioned, the cvirte.dll is not version specific. The CVI developers apparently tried to avoid those version incompatibility shenanigans. So a newer CVI runtime engine install should work. There is a small chance of a version compatibility bug in a much newer version, but as long as it works, it works. 😀
  21. The private method is also available in earlier LabVIEW versions, although not as a VI in vi.lib. I checked for it in 2018 and 8.6, but suspect that it may have been present as early as 8.2 or even 8.0.
  22. Well, I started in April 1992, went to the US for 4 months in May, and heard there that there was this big news about LabVIEW not being for the Macintosh only anymore, but that telling anyone outside of the company would be equivalent to asking to be terminated 😀. They were VERY secretive about this, and nobody outside the company was supposed to know until the big release event. In fall of 1992, LabVIEW for Windows 3.1 was announced, and the first version shipped was 2.5. It was quickly followed by 2.5.1, which ironed out some nasty bugs, and then there was a 2.5.2 release later on that made everything more stable, before they went on to release the 3.0 version, which was the first one to be really multiplatform. 2.2.1 was the last Mac version before that and 2.5.2 the Windows version; they could not read each other's files.

      This was Windows 3.1, which was 16-bit and still just a graphical shell on top of DOS. LabVIEW used the DOS/4GW DOS extender from Tenberry Software, which was part of the Watcom C development environment used to compile LabVIEW for Windows, to provide a flat 32-bit memory model to the LabVIEW process, without nasty segment:offset memory addressing. It was also the reason that interfacing to Windows DLLs was quite a chore, because of the difference in memory model between the LabVIEW environment and the underlying OS and DLLs. Only when LabVIEW became available for true 32-bit environments like Windows 95 and NT did that barrier go away.

      NI was at that time still predominantly a GPIB hardware company. A significant part of support was for customers trying to get the various GPIB boards installed on their computers, and at that time there were more very different computer architectures than you could count on both hands and feet. There was of course the Macintosh and the IBM-compatible computers, the latter all running DOS, which Windows computers still essentially were. Then you had the "real" IBM computers, which had abandoned the ISA bus in favor of their own, more closed-down Microchannel bus and were also starting to run OS/2 rather than Windows, and about a dozen different Unix workstations, each with its own totally incompatible Unix variant. And then there were even more exotic beasts like DEC VAX computers with their own expansion slots. Supporting those things was often a nightmare, as there was literally nobody who knew how these beasts worked except the software driver developer in Austin and the customer's IT administrator.

      NI had just entered the data acquisition market and was battling against more established manufacturers like Keithley, Data Translation, and some other small-scale specialty market providers. The turning point was likely when NI started to create their own ASICs, which allowed them to produce much smaller, cheaper and more performant hardware at a fraction of the cost their competitors had to pay to build their own products, and still sell it at a premium, as they also provided full software support with drivers and everything for their own popular software solutions. With other manufacturers you usually had to buy the various drivers, especially for NI software, as an extra, and some of them had simply taken the blueprints of the hardware, copied them and blatantly told their customers to request the software drivers from their competitor, as the hardware was register-for-register compatible. The NI ASICs made copying the hardware by others pretty much impossible, so NI was never concerned about making their drivers available for free.
  23. When I started at NI Switzerland in 1992, things were indeed very different. For 4 months I went to Austin, and I would get technical support calls that I of course had no idea how to solve. But we could walk up one floor or two and talk directly with the software or hardware engineers responsible for the product in question. As NI grew, this support model wasn't really sustainable anymore. Engineers still usually started out in support and often moved sooner rather than later to another position, and walking up to the developers wasn't as simple anymore, as they weren't always just one floor higher but in a different building. Support was still handled by inside engineers though, usually in a tiered system: first-line supporters would respond to the standard problems, and if it got too complicated it moved up to second- or third-line support. Then it was outsourced to some extent to telephone support somewhere in Costa Rica or wherever, and from then on it was often pretty much impossible to get any meaningful response. The answers we would get were sometimes so abysmal that I could have asked my son and gotten more meaningful information, and it was often impossible to explain the problem to them, as they understood nothing beyond what their on-screen step-by-step troubleshooting tutorials told them. Then a few years back, like 2 or 3, NI recognized that this was not really working and changed to a more professional support infrastructure with a dedicated Technical Support Engineering model that actually deserves that name. If someone has a support contract or software maintenance contract, this support works very well again, although in comparison to 25 years ago it is practically impossible to get unofficial solutions that the support person just gathered by walking up to the actual developer, who would throw together a quick (and sometimes dirty) solution to achieve the goal. Things are much more formalized, and unless someone is from a huge multi-million-dollar account, it's impossible to get a bugfix release of software or something like that before the official release.
  24. LabVIEW Realtime and LabVIEW FPGA are pure Windows (32-bit only before 2021) extensions. The Linux downloads you found are for the Xilinx toolchain. You can install them under Linux and then set up your Windows FPGA compile interface to talk to them, just as it does when you use the NI cloud compile solution.
  25. ISO 8601, just like its semi-sibling RFC 3339, supports an optional timezone offset identifier. Mr. Powell's library deals with that, and you should probably use his library. Basically, if the offset is not present you need to treat the timestamp as being in the local timezone (which can be tricky when interpreting data that was acquired somewhere else). Local timezone means using the -1 indicator for the DST value in the cluster. And yes, LabVIEW still needs to learn to listen to the OS message WM_SETTINGCHANGE, which indicates, among many other things, a date/time attribute change, as requesting that information from the OS every time it deals with time conversions would be too costly.
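
      The offset rule itself is easy to illustrate outside of LabVIEW. A small Python sketch using the standard library's ISO 8601 parser (the timestamps are made up):

      ```python
      from datetime import datetime

      # With an explicit offset the instant is unambiguous; without one the
      # timestamp is naive and must be interpreted as local time (the DST = -1
      # convention in LabVIEW's date/time cluster).
      aware = datetime.fromisoformat("2022-03-07T14:30:00+01:00")
      naive = datetime.fromisoformat("2022-03-07T14:30:00")

      print(aware.utcoffset())  # 1:00:00 -- offset taken from the string
      print(naive.utcoffset())  # None   -- caller must assume local timezone
      ```
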