Everything posted by Rolf Kalbermatter

  1. Windows? There are probably several options: .Net, ActiveX, Windows API and a command line tool. Personally I would go for the command line tool or Windows API approach, but the command line tool is the easier to explain. To start a command line tool you use the System Exec VI in LabVIEW and pass it a string "cmd <command>". The command line tool you want to use for this would be "netsh": netsh interface ip set address "Local Area Connection" static 192.168.0.1 255.255.255.0 This would set the IP address of the network adapter "Local Area Connection" to the static IP 192.168.0.1 with a subnet mask of 255.255.255.0. netsh has many more options and an involved command menu structure, but you can find lots of information on the net about how to use it.
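The netsh invocation above can be sketched in Python as a stand-in for LabVIEW's System Exec VI; this only assembles the command line (the adapter name and addresses are the example values from the post), and actually running it requires Windows and admin rights:

```python
def build_netsh_cmd(adapter, ip, mask):
    """Return the netsh argument list that sets a static IPv4 address
    on the named network adapter."""
    return ["netsh", "interface", "ip", "set", "address",
            adapter, "static", ip, mask]

cmd = build_netsh_cmd("Local Area Connection", "192.168.0.1", "255.255.255.0")
# On Windows you would execute it, e.g. with subprocess.run(cmd, check=True);
# here we just show the assembled command line:
print(" ".join(cmd))
```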
  2. There are different solutions but all have the disadvantage that making them work on non-desktop systems might be a lot of hassle or simply impossible. 1) Use of stunnel or a similar SSL proxy. This sits between your application and the actual SSL-enabled service. You communicate with the stunnel process/service through unencrypted protocols (so your stunnel proxy should be installed on the internal network) such as HTTP or POP/SMTP, and stunnel converts that protocol to its corresponding SSL-enabled protocol and vice versa. If you have the ability to install stunnel on an internal desktop PC you do not even have to try to get stunnel to work on your RT system, which I assume would be a serious problem, although being open source I'm sure it could be made to compile and run there (but it uses OpenSSL and that can be very tricky to port to unsupported platforms). I've done tests with stunnel under Windows and got a simple LabVIEW HTTP VI to communicate properly with an HTTPS server by using the stunnel process as an HTTP proxy with SSL encryption. I'm not sure if this is indicative of HTTPS support in general as I didn't test many HTTPS services with this. 2) Using the OpenG Pipe library you could call a tool like plink or putty and redirect its IO streams to LabVIEW, letting it perform the actual SSL encryption and handshaking. This pipe library works fairly well but has some quirks under some situations that I haven't found the time and interest to debug yet, and unless I'm going to need this library in a commercial project where these quirks are an issue it's not likely that I will spend much more time on it. The current state is that it mostly works on Windows, has been prepared to work on Linux and Mac OSX but without being tested yet, and would be a challenge to port to RT systems. 3) I've been working on a network nodes (TCP/UDP) replacement library that includes TLS/SSL support through use of the OpenSSL library.
It is still Windows only and has a few problems that prevent me from making it available for broader consumption. Also it is fairly substantial work, so I'm not sure about how to make it available at all. A free open source distribution sounds a bit too much like giving it away. The idea of that library is that it basically replaces the native LabVIEW network nodes as a drop-in replacement, providing an additional parameter to initialize the SSL operation if so desired. Additional goodies would be optional IPv6 support if available on the platform and some extra functionality such as an optional synchronous write operation. The current state is that it mostly works, but it always hangs when trying to shut down LabVIEW after having done one or more SSL-related communications, because the OpenSSL library seems to somehow hook into system calls that make loading/unloading it dynamically a big challenge. Porting this library to Linux and Mac OSX should be fairly straightforward, and porting it to RT targets, albeit probably not as trivial, should also be possible.
  3. Sounds like your .Net assemblies are somehow messing badly with some system internals. I know this behaviour from standard DLLs that try to hook themselves into system calls to do some black magic. When trying to unload those DLLs with the FreeLibrary() function, Windows gets confused in some sort of deadlock situation. Microsoft seems to specifically discourage explicit unloading of DLLs on application termination, saying that this can cause deadlock situations for many API calls during such an unload call. Not sure if this applies in your case, but the way you describe it, it sounds somehow similar. The problem here is that you cannot tell LabVIEW not to attempt to unload a .Net assembly when it wants to close down the VI that contains the last .Net node referencing that assembly.
  4. Flash memory doesn't seem to have a serial number that Windows wants to provide. For USB HDs it also seems not possible to get the serial number in any way. And for SATA devices, at least on my XP system, there won't be any serial number either, unless I directly query the drive using ATAPI SMART, but that requires admin privileges. Do you mean to say that HD Tune can give you HD serial numbers without being logged in as administrator? That would be very interesting.
  5. I would guess that this is more important than the fact that it is Windows 7. Apparently someone put some love into the File IO C code and removed a global somewhere in the Open File function that prevented that function from being called as a thread-agnostic function, so the File dialog, which has to run in the UI thread since it does UI, can't block it anymore.
  6. Actually, if memory serves me right, this is not the first time for The Captain to appear in the newsletter or on the website. So I'm not so sure it has much to do with the fact that he works for NI now. He's simply good at marketing, including his own ideas.
  7. Congratulations Chris! It can be tough at times but in the end it is one of the greatest experiences one can have. Good luck to the baby, mom and you.
  8. Trial and error for what you can throw away! It very much depends on your application, so there is no one-size-fits-all list. If this is too cumbersome then let it be and use installers! Also watch out when testing such setups. Most of these components are optional and LabVIEW makes a lot of effort to run without those optional components. This means that you can start up your application fine with most of them missing, until you exercise one of those components. For instance, not having mesa.dll or some of the model files in the models directory will not prevent your application from starting up. But as soon as you show a window using one of those elements you will see a crossed rectangle instead of the control. It is the same with many other things: until exercised, you won't notice anything. So make sure to exercise all code paths in your application when testing whether it still works with runtime files removed, to avoid bad experiences for your users. As a guideline you can go with this:
  - all *.rsc files should always be included, as well as the lvrt.dll file
  - lvjpeg.dll is only needed if you use any JPEG functions in your app, same for lvpng.dll for PNG files
  - serpdrv doesn't apply for LabVIEW 7 and higher anymore
  - mesa.dll and the models directory are only required if you use the new style (3D) controls
  - the scripts directory is only needed if you are using one of the script nodes, but they got mostly obsoleted by MathScript in newer versions
  From LabVIEW 7.1 on, the advanced analysis library requires the Intel Math Kernel Library to be installed for most functions. And it will look for that library in the system registry, so you basically can't avoid an installer for that.
LabVIEW 8.x adds many new features, such as MathScript, that are separate components and need to be installed in order to work properly and to allow LabVIEW to find them. Once you do things like LabVIEW RT targets, FPGA, DAQ(mx), VISA etc., you can simply forget about getting these things onto your machine without an installer.
  9. image.ctl is the IMAQ Vision Image Control. And that does not have a scale at all, since that is not how images are typically used. Calibrating an image to be exact in physical measurements is a very difficult task in itself, especially since such a scale only has meaning for one very specific depth plane in the picture. Anything a little further away or closer by has a different scale.
  10. Unfortunately that is a functionality of the Windows shell File Dialog. On a successful file selection it seems to set the current directory of the app to the directory the selection was made in. Why in the world the MS programmers thought this to be a good idea, I do not know at all. There seems to be no way for an application to tell the dialog not to do that. The only LabVIEW workaround would be to keep resetting the current directory to the previous value after each dialog, possibly breaking something that was the reason for MS to add that functionality.
  11. I agree, but I was not aware that the FTP VIs would use that range. After all they are from NI and I would hope they do not clash there. (I know they clash elsewhere!) But since the VIs are in source code and without passwords, the best person to really go after that problem is the OP himself. A bit of single stepping and debugging will surely show the offending operation.
  12. Might the problem maybe be more on the sender side? I ask this because 200 ms sounds a lot like the default delay of the TCP/IP Nagle algorithm. But that is applied on the sender side to avoid sending lots and lots of small TCP/IP frames over the network. So reading 4 bytes and then the data might be no problem at all, but trying to do the same on the sender side might be. It's also my experience that on the reading side you can usually chop up a packet into as many reads as you want (of course performance might suffer if you do a separate TCP/IP read for every byte, but that is beside the point). On the other hand it is usually a good idea to combine as much data as possible into one string and send it with a single TCP Write. That is at least how I usually do TCP/IP communication. Another option I have at times used is to enable the TCP_NODELAY socket option, but I have to admit I never did that on an embedded controller so far. Not even sure how to do that on a VxWorks controller as its API is not really standard.
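For reference, on a platform with a standard BSD-style socket API, disabling Nagle via TCP_NODELAY looks like this (sketched in Python; on VxWorks the option name and ioctl path may differ, as the post notes):

```python
import socket

def open_nodelay_connection(host, port):
    """Open a TCP connection and disable the Nagle algorithm, so small
    writes are sent immediately instead of being coalesced into larger
    frames (at the cost of more packets on the wire)."""
    sock = socket.create_connection((host, port))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

This trades network efficiency for latency, which is usually the right choice for small command/response protocols like the 4-byte-length-prefix scheme discussed here.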
  13. You are right, but 1xxx errors are LabVIEW environment type errors (think VI Server, scripting etc.), so I really wonder how those could get into FTP VIs where simple File IO is handled. I haven't looked at the FTP VIs in ages as I use my own library, but maybe they use VI Server for some reentrant calls or something in there??? With FTP this could mean that the login was not successful.
  14. I've seen it too many times, but it seems to be mainly limited to using the Browse dialog. Normal LabVIEW File I/O primitives usually don't seem to cause that problem. That made me believe that it is something the Browse dialog is causing, and this dialog is largely part of the OS itself. Maybe it's the way LabVIEW is using that dialog. An experiment I have wanted to do for ages is to use the LabVIEW file dialog instead and see if the problem persists. The problem is that I have not found a reliable way to reproduce the issue.
  15. Ohh, ohh, you got that mixed up very much, mate. LabVIEW 1 was Mac only and so was LabVIEW 2. It did not look like LabWindows at all but like a Macintosh application. LabWindows was for DOS, its programming was in Basic and a subset of C, and the graphical UI was far from what a Macintosh could do, although much better than most of what could be done on DOS otherwise. The first LabVIEW version for Windows was 2.5, but it was really a preview release and more like a glorified alpha. Its stability was ... well, nothing to write home about, but then it was also Windows 3.1, and LabVIEW was probably one of the few applications out there exercising the Windows 3.1 kernel to limits that MS had never imagined. The first official LabVIEW version for Windows was 3.0, followed by numerous bug fix releases, with 3.1, if memory serves right, adding SunOS support, which many years later was renamed Solaris 1 by Sun. Somewhere around 3.1 the Macintosh version also got back in sync with the multiplatform release, having been sold until then as version 2.2.1 in its old non-multiplatform form, with its Mac-only resource format files. The UI has had quite a few changes IMO, with new objects being added regularly. Yes, the basic concept hasn't changed much, and I wish they had overhauled things like the custom control editor to allow much better customization of controls. The current editor is from the beginnings of LabVIEW and simply not very intuitive in many ways. Also, some of the newer controls seem not to have gotten the internal object message support to be customizable at all in that editor. If I knew this was because they internally support a different object message interface for a yet-to-come new custom control editor, then I could more easily live with that, but I have my doubts.
  16. No, I think integrating LabVIEW modules as DLLs into a LabVIEW application is a fairly roundabout way of doing business. It is possible and even works most of the time, but there are gotchas. 1) The DLL interface really limits the types of parameters you can pass to the module and retrieve back. 2) There is a LabVIEW => C => LabVIEW parameter translation for all non-flat parameters (arrays and strings) unless you use LabVIEW native datatypes (handles) AND the DLL is in the same version as the caller => slowing down the call. 3) When the versions don't match there is also a proxy marshalling of all data parameters necessary, much like what ActiveX does for out-of-process servers (but it is not the same marshalling mechanism as in ActiveX), since the DLL and the caller really execute in two different processes => slowing down the call. 4) The DLL cannot communicate with the caller through any means other than its parameter interface or platform system resources (events, etc.). Notifiers, LabVIEW events, semaphores, etc. are maybe shared and meaningful in the case of matching DLL and caller versions, but certainly completely useless if the DLL is in a different LabVIEW version than the caller. There are probably a few more gotchas that I haven't thought of at the moment.
  17. You should also give more information as to the error number and text you receive, and a bit more background in terms of LabVIEW version etc. Also post the test VI you have been executing when you get your errors. Your description of what you are doing is rather compressed and not very clear.
  18. Or they are simply too smart. You know the pink monkey experiment, don't you! Belonging to a 2% minority is like being a pink monkey, especially in a society where everybody is on animal level.
  19. Actually that is not true either. Generally, compiled VIs in an x.x.1 version can be loaded into an x.x.0 runtime and vice versa and executed. It could in some corner cases give strange (visual) effects or calculation artefacts, but in general it works. But before LabVIEW 8.5, if you loaded a VI that was not EXACTLY the same version into the development system, it always got recompiled automatically. That is true if you try to load the VI through VI Server. It is not true if you compile those VIs into a DLL and call that DLL through the Call Library Node. If the LabVIEW version the DLL is created with matches the caller version, the VIs in that DLL are loaded into the current LabVIEW system and executed there. If the versions do not match (not sure about the bug fix version difference here), the DLL is loaded through the corresponding runtime system and run that way.
  20. I usually go the other way around, doing the work in the older version and testing afterwards that it still works in the newer one. Of course, if you have to support the runtime rather than the development system, you won't be able to avoid building a runtime distribution for each of these versions. But then the question is why they wouldn't be able to install a newer runtime version? After all, you can install several runtime versions alongside each other with no problem. In fact, if you install driver software from NI, such as NI-VISA, DAQmx etc., you already have at least two to three different runtime versions installed, since various tools and utilities in there were developed in various LabVIEW versions.
  21. But if you want to use those sensors on humans you will not want to build the isolation yourself. There is basically no way you can get the necessary approvals yourself, and you could get into serious trouble if a patient suddenly feels sick after having been in physical contact with sensors isolated by your circuitry. And getting sued can be very expensive.
  22. Actually you can verify this further by using the Path To String Array function. You will see that the first element will be \\server\share, as unintuitive as that may seem. A quick test in LabVIEW 6 showed that Strip Path for such a path returns Not A Path and an empty name, but doesn't crash. So it seems someone has worked on that functionality to make it a bit more consistent, but might have messed up something, or used a technique that ran into trouble with later LabVIEW memory optimizations.
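The same split can be observed outside LabVIEW: Python's pathlib also treats the \\server\share portion of a UNC path as a single root component, analogous to what Path To String Array shows (this is an illustrative parallel, not the mechanism LabVIEW uses):

```python
from pathlib import PureWindowsPath

# Splitting a UNC path: the server and share names stay fused together
# as the first component, because only \\server\share\... is a usable
# file system root on Windows.
p = PureWindowsPath(r"\\server\share\folder\file.txt")
print(p.parts)  # first element is the combined \\server\share\ root
```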
  23. Actually I would concur that this last one is in principle an invalid operation. A path should most probably point to a valid file system location, and the server name alone is not such a thing. You cannot, even on Windows, take a server name only and use any file IO operations on it. You have, for instance, to use special network APIs to enumerate the contents of a domain/workgroup or a server. LabVIEW has so far never extended the file IO functions to directly support network resources, most probably because that is quite a flaky thing to do under the various versions of Windows. I once wrote a "List Network Resources" CIN long ago, which was meant to provide the same functionality for network domains and servers as the native List Directory does for file systems, and it had all kinds of nasty problems, one of them being that the LabVIEW path type isn't exactly suited to represent such a thing in an easy way. Of course Strip Path should definitely not crash on such an operation, but for the rest I would think it operates properly by not returning a valid stripped path for that resource.
  24. And that is where the problem starts. There are myriads of embedded developer toolkits, each with its own choice of a longer or not so long list of hardware boards: ARM CPUs from NXP, Intel, Atmel, TI, Samsung, Motorola, Cirrus Logic, etc., Freescale's ColdFire and PowerPC CPUs, MIPS, Atmel AVR32, National Semiconductor, Hitachi SuperH. And each of these CPUs has its own on-chip hardware selection of AI/AO, DIO, timers, Ethernet, USB, serial, SPI, I2C, CAN, JTAG, display interfaces, security engines, etc., with a widely varying register programming interface even for the same functionality, not to forget external components on the development board that extend the variation even more. Even if NI would license VxWorks or a similar OS for some of these CPU platforms (which they in fact do, since the Embedded Toolkit makes use of the RT kernel that comes with the Keil tools), this still means that they do not have board-level drivers for all the possible hardware that is out there, not to speak of modifications that you might want to make to the development kit hardware for your own product, such as replacing a 2-line display with a 12-line display. Such a change may seem trivial but often it involves not just the change of a variable somewhere but a completely different register set to be initialized and programmed. So I do not think that you can get much more out of the box currently. How much LabVIEW Embedded really answers a market demand is a different question. It cannot possibly guarantee you a LabVIEW-only experience once you want to change even little things in the hardware design of the developer board that came with your kit, and that is what embedded design is often about.
I doubt that many use the original developer board 1:1 in an end user product, so where I see its benefit is in prototyping and possibly "one or a few of a kind" test scenarios, where you can work with the hardware as it comes in the box, or at most only need to make very small changes to its external peripherals to reduce the work on the C level to a minimum. While NI is selling the Embedded Toolkit as a LabVIEW product, they make AFAIK no claims that you do not have to go down to the C level once you start to make changes to the hardware, and even to the toolchain level if you want to adapt it to your own CPU and/or platform. But for those cases a cRIO system would seem more beneficial to me. Its extra cost is not really an issue if you are only going to build one or a few of those systems.
  25. Look for the property FrameNames of the CaseSel class. This is an array of strings much like the Strings[] property for enums.