Everything posted by Rolf Kalbermatter

  1. If you talk about patterns, then this follows the factory pattern. The parent class here is the interface, the child classes are the specific implementations, and yes, you of course cannot really invoke methods that are only present in one of the child classes, as you only ever call the parent (interface) class. Theoretically you might try to cast the parent class to a specific child class and then invoke a class-specific property or method on it, but that doesn't work in this particular case: the cast would load the specific class explicitly and break the code as soon as you try to execute it in a different LabVIEW version than the one that specific class is meant for.
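
      Outside of LabVIEW, the general shape of the pattern looks something like this; a minimal C sketch with hypothetical names, only to illustrate that the caller sees nothing but the parent interface and the factory decides which concrete implementation it hands out:

        /* Minimal factory pattern sketch in C; names are hypothetical, not from the post. */
        #include <stdio.h>

        typedef struct Interface
        {
            void (*doWork)(struct Interface *self);  /* only parent methods are visible */
        } Interface;

        static void doWorkV1(Interface *self) { (void)self; printf("implementation 1\n"); }
        static void doWorkV2(Interface *self) { (void)self; printf("implementation 2\n"); }

        static Interface implV1 = { doWorkV1 };
        static Interface implV2 = { doWorkV2 };

        /* The factory returns a parent pointer; the caller never names the child type. */
        Interface *CreateImplementation(int version)
        {
            return (version >= 2) ? &implV2 : &implV1;
        }

        int main(void)
        {
            Interface *obj = CreateImplementation(2);
            obj->doWork(obj);   /* dispatches to the child without the caller knowing which one */
            return 0;
        }
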
  2. I'm also seeing it in Chrome on Windows when not logged in.
  3. Tortoise SVN (+command line tools for a few simple LabVIEW tools) both at work as well as on a private Synology NAS at home.
  4. Well, Windows IoT must be based on Windows RT or its successor, as typical IoT devices do not use an Intel x86 CPU but usually an ARM or some similar CPU. And looking at the Windows IoT page it says: Windows 10 IoT Core supports hundreds of devices running on ARM or x86/x64 architectures. Now I don't think they can limit that one to MS Store app installs only, so they must not use that restriction on IoT, but technically it would seem to be based on the same .Net centric kernel as Windows RT. And in order to provide the touted write once, run on all of them, they will push the creation of .Net IL assemblies rather than any native binary code. Maybe it's not even possible to use native binary code for this platform. At some point they did promise an x86 emulator for the Windows RT platform (supposedly slated for a Windows RT 8.1 update) in order to lessen the pain of a very limited offering in the App Store, but I don't think that ever really materialized. CPU architecture emulation has been tried many times, and while it can work, it never was a real success, except for the 68k emulator in the PPC Macs, which worked amazingly well for most applications that didn't make use of dirty tricks.
  5. Windows RT and Windows Embedded are two very different animals. Windows Embedded is a more modular, configurable version of Windows x86 for the desktop, while Windows RT is a .Net based system that contains a kernel specifically designed for a .Net system, without any of the normal Win32 API interface. Windows Embedded only runs on x86 hardware, while Windows RT can run on ARM and potentially other embedded CPU hardware. On the other hand, Windows Embedded can run .Net CLR applications and x86 native applications, while Windows RT only runs .Net CLR applications. RT here doesn't stand for RealTime but is assumed to refer to the WinRT system, which basically uses the .Net intermediate code representation for all user space applications to isolate application code from the CPU specific components.

      Windows RT was officially only released for Windows 8 and 8.1, but the new Windows S system that they plan to release seems to be built on the same principle, strictly limiting application installation to the Microsoft Store and only to apps that are fully .Net CLR compatible, meaning they can't really include CPU native binary components. These limitations make it hard for anyone to release hardware devices based on this architecture, as only Microsoft Store apps are supported. But I think NI might be big enough to negotiate a special deal with Microsoft for a customized version that can install applications from an NI App Store. Perfect monetization! However, with the track record Microsoft has with Windows CE, Phone, Mobile, RT and now S, of which all but S (which still has to be released yet) were basically discontinued after some time, I would like to think that NI is very wary of betting on such an approach.

      For the current Realtime platforms NI sells, I would guess that ARM support is still pretty important for the lower cost hardware units, so use of Windows Embedded alone is not really feasible. If NI could use the Windows RT based approach for those units, they might get away with implementing an LLVM to .Net IL code backend for them, but since Windows RT is already kind of dead again, that is not going to happen. Therefore I guess NI Linux RT is not going away anytime soon. Yes, the new WebVI technology based on HTML5 is likely the horse NI is betting on for future cross platform UI applications that will also run on Android, iOS and other platforms with a working web browser. Development, however, is likely going to be Windows only for a long time.
  6. No, .Net Core is open source, but .Net is quite a different story. And the difference is akin to saying that MacOS X is open source, because the underlying Mach kernel is BSD licensed!
  7. That would mean trashing NI Linux RT and going with a special variant of Windows RT for RT (sic). I'm not yet sure that NI is prepared to trash NI Linux RT and introduce yet another RT platform. But stranger things have happened.
  8. There was some assurance in the past that classic LabVIEW will remain a fully supported product for 10 years after the first release of NXG. With 1.0 being released this NI Week, that would end classic LabVIEW support with LabVIEW 2026. And yes, .Net is a pretty heavy part in the new UI of LabVIEW NXG (the whole backend with code compiler and execution support is for the most part the same as in current classic LabVIEW). Supposedly this makes integrating .Net controls a lot easier, but it makes my hopes for a non-Windows version of LabVIEW NXG go down the drain. Sure, they will have to support RT development in LabVIEW NXG, which means porting the whole host part of LabVIEW RT to 64-bit too, but I doubt it will support more than a very basic UI like nowadays on the cRIOs with onboard video capability. Full .Net support on platforms like Linux, Mac, iOS and Android is most likely going to be a wet dream forever, despite the open source .Net Core initiative of Microsoft (another example of "if you can't beat them, embrace and isolate them").
  9. Definitely! Directly accessing the SharePoint SQL Server database is a deadly sin that immediately puts your entire SharePoint solution into fully unsupported mode as far as Microsoft is concerned, even if you only do queries.
  10. You usually need to scroll down in the list until you find issues that have at least one valid resolution option. Higher level conflicts that depend on lower level conflicts can't be resolved before their lower level conflicts are resolved.
  11. It doesn't need to. The LabVIEW project is only one of several places that store the location of the PPL. Each VI using a function from a PPL stores its entire path too, and will then see a conflict when the VI is loaded inside a project while the project has the same PPL name present in another location. There is no trivial way to fix that other than to go through the resolve conflict dialog and confirm for each conflict where the VI should be loaded from, from now on. Old LabVIEW versions (way before PPLs even existed) did not do such path restrictive loading and, if a VI with the wanted name was already loaded, happily relinked to that VI, which could easily get you into very nasty cross-linking issues, with little or no indication that this had happened. The result was often a completely messed up application if you accidentally confirmed the save dialog when you closed the VI. The solution was to only link to a subVI if it was found at the same location it was in when that VI was saved. With PPLs this got more complicated, and they chose the most restrictive mode for relinking, in order to prevent inadvertently cross-linking your VI libraries. The alternative would be that if you have two libraries with the same name in different locations, you could end up loading some VIs from one of them and others from the other library, potentially creating a total mess.
  12. Unfortunately, 27 kudos is very little! Many of the ideas that got implemented had at least 400 and even that doesn't guarantee at all that something gets implemented.
  13. That's of course another possibility, but the NI Syslog Library works well enough for us. It doesn't plug directly into the Linux syslog, but that is not a big problem in our case. It depends. In a production environment it can be pretty handy to have a live view of all the log messages, especially if you end up having multiple cRIOs all over the place which interact with each other. But it is always a tricky decision between logging as much as possible and then not finding the needle in the haystack, or limiting logging and possibly missing the most important event that shows where things go wrong. With a live viewer you get a quick overview, but if you log a lot it will usually not be very useful and you need to look at the saved log file afterwards anyhow to analyse the whole operation. Generally, once debugging is done and the debug message generation has been disabled, a live viewer is very handy to get an overall overview of the system, where only very important system messages and errors still get logged.
  14. Well, as far as the syslog functionality itself is concerned, we simply make use of the NI System Engineering provided library that you can download through VIPM. It is a pure LabVIEW VI library using the UDP functions, and that should work on all systems. As to having a system console on Linux, there are many ways to do that which Linux actually comes with, so I'm not sure why it couldn't be done. The problem under Linux is not that there are none, but rather that there are so many different solutions that NI maybe decided not to use any specific one, as Unix users can be pretty particular about what they want to use and easily find everything else simply useless.
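
      To give an idea how little is involved at the protocol level, here is a minimal, hypothetical C sketch (plain POSIX sockets, not the NI library) that sends a single RFC 3164 style message over UDP to the standard syslog port; the hostname and tag are made up and the timestamp field is omitted for brevity:

        /* Hypothetical syslog-over-UDP sketch using POSIX sockets; not the NI library. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        int send_syslog(const char *host, const char *msg)
        {
            struct sockaddr_in addr;
            char packet[1024];
            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock < 0)
                return -1;

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(514);                  /* standard syslog UDP port */
            inet_pton(AF_INET, host, &addr.sin_addr);

            /* <14> = facility user (1) * 8 + severity informational (6) */
            snprintf(packet, sizeof(packet), "<14>cRIO-hostname myApp: %s", msg);
            sendto(sock, packet, strlen(packet), 0, (struct sockaddr *)&addr, sizeof(addr));
            close(sock);
            return 0;
        }
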
  15. We don't use VeriStand, but we definitely use syslog in our RT applications quite extensively. In fact we use a small Logger class library that implements either file or syslog logging. I'm not sure what you would consider a pain about getting such a solution working in VeriStand though. Somewhere during your initialization you configure and enable the syslog (or file log), and then you simply have a Logger VI that you can drop in anywhere you want. Ours is a polymorphic VI with one version acting as a replacement for the General Error Handler.vi and the other simply reporting arbitrary messages to the logging engine. After that you can use any of the various syslog viewer applications to get a live update of the messages on your development computer or anywhere else on the local network.
  16. That sounds a bit optimistic, considering that all major web browsers nowadays disable Flash by default and some have definite plans to remove it altogether. The same goes for the Silverlight plugin, which Microsoft stopped developing years ago already; support today is marginal (security fixes only).
  17. That is not entirely true, depending on your more or less strict definition of a garbage collector. You are correct that LabVIEW allocates and deallocates memory blocks explicitly, rather than just depending on a garbage collector to scan all the memory objects periodically and determine what can be deallocated. However, LabVIEW does some sort of memory retention on the diagram, where blocks are not automatically deallocated whenever they go out of scope, because they can then simply be reused on the next iteration of a loop or the next run of the VI. And there is also some sort of low level memory management where LabVIEW doesn't usually return memory to the system heap whenever it is released inside LabVIEW, but instead holds onto it for future memory requests. However, this part has changed several times in the history of LabVIEW, with early versions having a very elaborate memory manager scheme built in, at some point even using a third party memory manager called Great Circle, in order to improve on the rather simplistic memory management scheme of Windows 3.1 (and MacOS Classic) and also to allow much more fine grained debugging options for memory usage. More recent versions of LabVIEW have shed much of these layers and rely much more on the memory management capabilities of the underlying host platform. For good reasons! Creating a good, performant and, most importantly, flawless memory manager is an entire art in itself.
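
      The explicit allocation side of this is visible in the C API of the LabVIEW memory manager; a small sketch using the DSNewHandle/DSSetHandleSize/DSDisposeHandle calls declared in extcode.h (error handling kept to a minimum, and the buffer layout is only an illustration):

        /* Sketch of explicit allocation through the LabVIEW memory manager (extcode.h). */
        #include "extcode.h"

        MgErr AllocateGrowRelease(void)
        {
            /* Allocate a handle big enough for an int32 length prefix plus 10 doubles. */
            UHandle h = DSNewHandle(sizeof(int32) + 10 * sizeof(float64));
            if (!h)
                return mFullErr;

            /* ... fill the buffer, then grow it to hold 20 elements ... */
            MgErr err = DSSetHandleSize(h, sizeof(int32) + 20 * sizeof(float64));

            /* Explicitly hand the memory back to LabVIEW's handle pool when done. */
            DSDisposeHandle(h);
            return err;
        }
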
  18. I have recently resurrected these articles under https://blog.kalbermatter.nl
  19. That's the status return value of the viRead() function and is meant as a warning: "The number of bytes transferred is equal to the requested input count. More data might be available." And as you can see, viRead() is called for the session COM12 with a request for 0 bytes, so something is not quite set up right, since a read of 0 bytes is pretty much a "no operation".
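
      For comparison, a correct call passes a non-zero byte count and then checks for that warning; a minimal hypothetical C sketch against the standard VISA API (VI_SUCCESS_MAX_CNT is the status constant behind that warning text):

        /* Hypothetical VISA read sketch; assumes an already opened instrument session. */
        #include "visa.h"

        ViStatus ReadSome(ViSession instr)
        {
            ViChar buffer[256];
            ViUInt32 retCount = 0;

            /* Ask for up to sizeof(buffer) bytes instead of 0. */
            ViStatus status = viRead(instr, (ViBuf)buffer, sizeof(buffer), &retCount);
            if (status == VI_SUCCESS_MAX_CNT)
            {
                /* Buffer was filled completely; the device may have more data queued. */
            }
            return status;
        }
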
  20. Then he would drown
  21. Something about the __int64 sounds very wrong! In fact, the definition of the structure should really look like this, with the #pragma pack() statements replaced by the correct LabVIEW header files:

        #include "extcode.h"

        // Some stuff

        #include "lv_prolog.h"
        typedef struct {
            int32 dimSize;
            double elt[1];
        } TD2;
        typedef TD2 **TD2Hdl;

        typedef struct {
            TD2Hdl elt[1];
        } TD1;
        #include "lv_epilog.h"

        // Remaining code

      This is because on 32-bit LabVIEW for Windows, structures are packed, but on 64-bit LabVIEW for Windows they are not. The "lv_prolog.h" file sets the correct packing instruction depending on the platform as defined in "platdefines.h", which is included inside "extcode.h". The __int64 only seems to solve the problem, but by accident. It works by virtue of LabVIEW only using the lower 32 bits of that number anyway, and the fact that x86 CPUs are little endian, so the lower 32 bits of the int64 happen to be in the same location as the full 32-bit value LabVIEW really expects. But it will go wrong catastrophically if you ever try to compile this code for 32-bit LabVIEW. And if you call any of the LabVIEW manager functions defined in "extcode.h", such as NumericArrayResize(), you will also need to link your project with labview.lib (or labviewv.lib for the 32-bit case) inside the cintools directory. As long as you only use datatypes and macros from "extcode.h", this doesn't apply though.
  22. #pragma pack(push,1)
        typedef struct {
            int dimSize;
            double elt[1];
        } TD2;
        typedef TD2 **TD2Hdl;

        typedef struct {
            TD2Hdl elt1;
        } TD1;
        #pragma pack(pop)

        extern "C" __declspec(dllexport) MgErr pointertest(TD1 *arg1);

        MgErr pointertest(TD1 *arg1)
        {
            /* The array handle may legitimately be NULL (empty array) and must hold at least 2 elements. */
            if (!arg1->elt1 || (*arg1->elt1)->dimSize < 2)
                return mgArgErr;
            (*arg1->elt1)->elt[0] = 3.1;
            (*arg1->elt1)->elt[1] = 4.2;
            return noErr;
        }

      Defensive programming would use at least this extra code. Note the extra test that the handle is not NULL before testing the dimSize, since the array handle itself can legitimately be NULL if you happen to assign an empty array to it on the diagram. Alternatively, you should really make sure to properly resize the array with the LabVIEW manager functions before attempting to write into it, just as ned mentioned:

        MgErr pointertest(TD1 *arg1)
        {
            /* Resize the handle to hold 2 float64 elements before writing into it. */
            MgErr err = NumericArrayResize(fD, 1, (UHandle*)&arg1->elt1, 2);
            if (err == noErr)
            {
                (*arg1->elt1)->elt[0] = 3.1;
                (*arg1->elt1)->elt[1] = 4.2;
            }
            return err;
        }
  23. I'm afraid your conclusion is very true, especially if you only plan to build this one system. It would probably be a different situation if you had to build a few dozen, but that is not how this usually works.
  24. The IMAQ datatype is a special thing. It is in fact a refnum that "refers" to a memory location that holds the entire image information, and that is not just the pixel data itself but also additional information such as ROI, scaling, calibration, etc. Just as passing a file refnum to a file function does not pass a copy of the file to the function to operate on, passing an IMAQ refnum does not create a copy of the data. At most it creates a copy of the refnum (and increments an internal refcount in the actual image data structure). The IMAQ control does the same: it increases the refcount so the image stays in memory, and decreases the refcount of the previous image when another IMAQ refnum is written into the control. And there is a good reason that NI decided to use a refnum type for images. If LabVIEW operated on them by value, just as with other wire data, you would be pretty hard pressed to process even moderately sized images on a normal computer. And it would get terribly slow too if, at every wire branching, LabVIEW started to create a new by-value image and copy the potentially 100 MB and more of data from the original image into that copy.

      If you wire a true constant to the "destroy all?" input of the IMAQ Destroy function, this simply tells IMAQ to actually destroy any and every image that is currently allocated by IMAQ. And if you do that, you can in fact save yourself the trouble of calling this function in a loop multiple times to destroy each IMAQ refnum individually. But yes, it will basically destroy any and every IMAQ refnum currently in memory, so there is no surprise that your IMAQ control suddenly turns blank as the image it displays is yanked out of memory under its feet.

      And why would they have added this option to IMAQ Destroy? Well, it's pretty common to create temporary images during image analysis functions and give them a specific name. If they don't exist they will be created, and once they are in memory they will be looked up by their name and reused. So you typically don't want to destroy them after every image analysis round, but just let them hang around in memory to be reused in the next execution of the analysis routine. But then, to properly destroy them at the end of the application, you would have to store them in some queue or buffer somewhere, refer to them just before exiting and pass each refnum explicitly to the IMAQ Destroy function. Instead you can simply call IMAQ Destroy with that boolean set to true, to destroy any IMAQ refnums that were left lingering around.
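
      The reference counting described above is conceptually nothing more than this kind of bookkeeping; a hypothetical C sketch, not the actual IMAQ Vision implementation:

        /* Hypothetical reference-count sketch; not the actual IMAQ Vision implementation. */
        #include <stdlib.h>

        typedef struct
        {
            int   refCount;    /* how many refnums/controls currently refer to this image */
            void *pixelData;   /* pixel buffer; the real thing also holds ROI, calibration, ... */
        } Image;

        static void ImageRetain(Image *img)
        {
            img->refCount++;               /* e.g. an IMAQ control starts displaying the image */
        }

        static void ImageRelease(Image *img)
        {
            if (--img->refCount == 0)      /* last reference gone, the data can really be freed */
            {
                free(img->pixelData);
                free(img);
            }
        }
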
  25. There is a reason the NI interfaces are so expensive. You need to be a member of the Profibus International group to receive all the necessary information and to be allowed to sell products which claim to be Profibus compatible. And that costs a yearly fee. While the hardware is indeed based on an RS-485 physical layer, there are specific provisions in the master hardware that must guarantee certain things like proper failure handling and correct protocol timing.

      There have been two open source projects that tried to implement a Profibus master. One is the pbmaster project, which seems to have completely disappeared from the net and was a Linux based driver library to run with cheap RS-232 to RS-485 converter interfaces or specific serial controller interface chips. I suppose with enough effort there is a chance that one might be able to get this to work on an NI Linux based cRIO, but it won't be trivial. The main part of this project was a kernel device driver with a hardware specific component that directly interfaced to the serial port chip. Getting this to interface to a normal RS-485 port on the cRIO (either as a C module or through the built-in RS-485 interface that some higher end cRIOs have) would require some tinkering with the C sources for sure. The other project is ProfiM on SourceForge, which seems to have been more or less abandoned since 2004, with the exception of an update in 2009 which added a win2k/xp device driver. This project is however very Windows specific, and there is no chance to adapt it to a cRIO without more or less a complete rewrite of the software.

      Unfortunately, this is about as far as it seems to go for cheap Profibus support. While the binary protocol for Profibus is actually documented and you can download the specs for it, or study the source code of these two projects to get an idea, the Profibus protocol timing is critical enough that it will be difficult to achieve with a purely user space based implementation such as using VISA to interface to a standard serial port. Certain aspects of the protocol almost certainly need to be implemented in kernel space to work reliably enough; another alternative would be to implement the Profibus protocol on the FPGA in the cRIO, but that is also a major development effort.