Everything posted by Rolf Kalbermatter

  1. I usually go the other way around, doing the work in the older version and testing afterwards that it still works in the newer one. Of course, if you have to support the runtime rather than the development system, you won't be able to avoid building a runtime distribution for each of these versions. But then the question is why they wouldn't be able to install a newer runtime version. After all, you can install several runtime versions alongside each other with no problem. In fact, if you install driver software from NI, such as NI-VISA, DAQmx etc., you already have at least two to three different runtime versions installed, since various tools and utilities in there were developed in various LabVIEW versions.
  2. But if you want to use those sensors on humans you will not want to make the isolation yourself. There is basically no way you can get the necessary approvals yourself, and you could get into serious trouble if a patient suddenly feels sick after having been subjected to physical contact with sensors isolated by your circuitry. And getting sued can be very expensive.
  3. Actually you can verify this further by using the Path To String Array function. You will see that the first element is \\server\share, as unintuitive as that may seem. A quick test in LabVIEW 6 showed that Strip Path for such a path returns Not A Path and an empty name, but doesn't crash. So it seems someone has worked on that functionality to make it a bit more consistent, but might have messed up something, or used a technique that got into trouble with later LabVIEW memory optimizations.
  4. Actually I would concur that this last one is in principle an invalid operation. A path should most probably point to a valid file system location, and the server name alone is not such a thing. You cannot, even on Windows, take a server name alone and use any file IO operations on it. You have, for instance, to use special network APIs to enumerate the contents of a domain/workgroup or a server. LabVIEW has so far never extended the file IO functions to directly support network resources, most probably since that is quite a flaky thing to do under the various versions of Windows. I once wrote a "List Network Resources" CIN long ago that was meant to provide the same functionality for network domains and servers as the native List Directory does for file systems, and it had all kinds of nasty problems, one of them being that the LabVIEW path type isn't exactly suited to represent such a thing in an easy way. Of course Strip Path should definitely not crash on such an operation, but for the rest I would think it operates properly by not returning a valid stripped path for that resource.
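     For reference, a minimal C sketch of what such an enumeration has to do with the Win32 WNet API (the kind of calls that CIN had to wrap); error handling is cut to the bone:

        /* Enumerate network resources with the Win32 WNet API.
           Link against mpr.lib. */
        #include <windows.h>
        #include <winnetwk.h>
        #include <stdio.h>

        static void list_resources(NETRESOURCE *root)
        {
            HANDLE hEnum;
            if (WNetOpenEnum(RESOURCE_GLOBALNET, RESOURCETYPE_ANY,
                             0, root, &hEnum) != NO_ERROR)
                return;

            for (;;) {
                DWORD buffer[4096];          /* 16 KB, DWORD-aligned */
                DWORD count = (DWORD)-1;     /* return as many as fit */
                DWORD size  = sizeof(buffer);
                if (WNetEnumResource(hEnum, &count, buffer, &size) != NO_ERROR)
                    break;                   /* ERROR_NO_MORE_ITEMS ends the loop */

                NETRESOURCE *res = (NETRESOURCE *)buffer;
                for (DWORD i = 0; i < count; i++)
                    printf("%s\n", res[i].lpRemoteName ? res[i].lpRemoteName
                                                       : "(unnamed container)");
            }
            WNetCloseEnum(hEnum);
        }

        int main(void)
        {
            list_resources(NULL);   /* NULL means: start at the network root */
            return 0;
        }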
  5. And that is where the problem starts. There are myriads of embedded developer toolkits, each with their own choice of a longer or shorter list of hardware boards: ARM CPUs from NXP, Intel, Atmel, TI, Samsung, Motorola, Cirrus Logic, etc., Freescale's ColdFire and PowerPC CPUs, MIPS, Atmel AVR32, National Semiconductor, Hitachi SuperH. Each of these CPUs has its own on-chip hardware selection of AI/AO, DIO, timers, Ethernet, USB, serial, SPI, I2C, CAN, JTAG, display interfaces, security engines, etc., with widely varying register programming interfaces even for the same functionality, not to forget external components on the development board that extend the variation even more. Even if NI licensed VxWorks or a similar OS for some of these CPU platforms (which they in fact do, since the Embedded Toolkit makes use of the RT kernel that comes with the Keil tools), this still means they do not have board-level drivers for all the possible hardware out there, not to speak of modifications you might want to make to the development kit hardware for your own product, such as replacing a 2-line display with a 12-line display. Such a change may seem trivial, but often it involves not just the change of a variable somewhere but a completely different register set to be initialized and programmed. So I do not think that you can get much more out of the box currently. How much LabVIEW Embedded really solves a market demand is a different question. It cannot possibly guarantee you a LabVIEW-only experience once you want to change even little things in the hardware design of the developer board that came with your kit, and that is what embedded design is often about. I doubt that many use the original developer board 1:1 in an end user product, so where I see its benefit is in prototyping and possibly "one or a few of a kind" test scenarios where you can work with the hardware as it comes in the box, or at most only need to make very small changes to its external peripherals to reduce the work on the C level to a minimum. While NI is selling the Embedded Toolkit as a LabVIEW product, they make AFAIK no claims that you do not have to go down to the C level once you start to make changes to the hardware, and even down to the toolchain level if you want to adapt it to your own CPU and/or platform. But for those cases a cRIO system would seem more beneficial to me. Its extra cost is not really an issue if you are only going to build one or a few of those systems.
  6. Look for the FrameNames property of the CaseSel class. It is an array of strings, much like the Strings[] property for enums.
  7. Welcome Marius! I remember the days in Austin about 17 years ago. Good luck with your new business!
  8. Think about it! There is no other way to make this feasible. The Embedded development system simply converts the VIs to C code and compiles that with the C tool chain for the target system. Just as C coders are ten a penny, there is one impressive C compiler that works for almost all hardware, namely gcc. NI could spend hundreds of man-years trying to write a LabVIEW compiler engine for every possible embedded hardware target out there and they would not get anywhere. By converting everything into C and letting gcc (or whatever tool chain a specific embedded target comes with) deal with it, they can limit the development to a manageable scope. And of course the direct communication with hardware resources has to be dealt with in C somehow. There is simply no other way: the LabVIEW system cannot possibly abstract the 10,000 different hardware targets in such a way that you would not need to do that. On Windows you usually get away without it, since there are enormous driver suites such as DAQmx and many more that take care of the low-level nitty-gritty details like interrupts, registers, DMA, etc. On an embedded target, for which NI has at best had a board in their lab to work with, this is not a feasible option. If you need an out-of-the-box experience you should not look at embedded hardware. You are unlikely to use the development kit board in an end product anyhow, so the out-of-box experience stops there already. A much better solution for an out-of-box experience would be cRIO or maybe sRIO.
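     Purely as a hypothetical illustration of that principle (this is not actual output of NI's code generator), a diagram that adds two values and scales the result might end up as plain C along these lines, which gcc or the target's own tool chain then compiles for the CPU:

        /* Hypothetical illustration only -- not real generated code.
           A diagram with an Add node feeding a Multiply node becomes
           straightforward C that any cross compiler can handle. */
        double vi_scale_and_add(double a, double b, double gain)
        {
            double sum = a + b;   /* Add node */
            return sum * gain;    /* Multiply node */
        }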
  9. A ring control is only a number. Its datatype does not contain any information about how many elements the ring control contains, since that can be changed at runtime through one of its properties, while datatype information has to be defined at compile time. So there is no automatic way to have a case structure adapt to a ring control, since there is nothing to adapt to, in contrast to when you do the same with an enum. The scripting interface of the case structure should have a property that is an array of strings, and that should allow you to define the number of cases and their values.
  10. Well, I do have a complete tag engine inside my RT app. It is basically a loop that communicates with the rest of the RT system through queues: two queues for input and output to the IO servers, another queue for application inputs to writeable tags, a buffer for the current value of all tags based on the internal index, and another buffer for an extra calculation engine that can calculate virtual tag values based on a user-defined formula depending on other tags. All these queues and buffers are so-called intelligent global variables, and a lot of care has been taken to make sure that as much as possible of the parsing, calculations and preparations is done once at engine startup, so that CPU load is minimized during normal operation. This resulted in an engine that could easily run 200 tags on the lowest-end Compact FieldPoint controller, as long as the LabVIEW VI Server is not started. In addition there is a TCP/IP server that can retrieve and update any tag in the engine as well as update its runtime attributes such as scaling, limits, alarms etc. It can also update the tag configuration and shut down or restart the entire engine. The tag configuration itself is done in a LabVIEW application on the host machine. Yes, it is a complicated design in some ways, and one that has evolved over more than 10 years from a fairly extensive tag-based datalogger engine on a desktop machine to a more or less fully fledged SCADA system that can also be deployed to an RT system. The only problem with it is that its components are fairly tightly coupled and I did not always write nice wrappers for the different functionality, so it is quite hard for someone else to go in there and make even small changes. If you want to go in such a direction I would actually recommend you look at the design patterns NI has released through its Systems Engineering group. They have some very nice tools that go a long way in this same direction. If I had to start again from scratch I would probably go with those, even though not all components are available in LabVIEW source code. But at the time they released that stuff my system was already developed and running for my needs, and it has a few technical advantages; also, the fact that I can go in and change whatever strikes my fancy for a new project is an extra bonus.
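     For illustration only, a much simplified C sketch of that core idea (all names made up): everything expensive is resolved once at startup, and the runtime loop does little more than a multiply-add per update into the central current-value table:

        #include <stddef.h>

        /* One tag: scale/offset are precomputed at startup from the
           configuration; value is the "intelligent global" current value. */
        typedef struct {
            const char *name;
            double scale, offset;
            double value;
        } Tag;

        typedef struct { size_t tag_index; double raw; } Update;

        /* Called once at engine startup: resolve names, parse formulas,
           precompute scale/offset so the loop below never parses anything. */
        void engine_init(Tag *tags, size_t ntags)
        {
            (void)tags; (void)ntags;   /* configuration parsing omitted */
        }

        /* Runtime loop body: drain the IO input queue into the table. */
        void engine_process(Tag *tags, const Update *queue, size_t nupdates)
        {
            for (size_t i = 0; i < nupdates; i++) {
                Tag *t = &tags[queue[i].tag_index];
                t->value = queue[i].raw * t->scale + t->offset;
            }
        }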
  11. Congratulations Jim! She is very lovely.
  12. I would feel quite unhappy having a WebService inside my RT control application. Its advantage of easier communication with other parts of the RT program seems to me outweighed manifold by the extra complexity that creeps into my RT process. IPC through shared variables or TCP/IP communication (my preference) may not seem so elegant at first, but it is really not that complicated to implement, especially if you have created a template for this before. My RT system design looks a little different in its details but quite the same in overall architecture. It originally stems in fact from the need to have an easy communication link to the host for monitoring some or all of the tag-based variables. But I have added over time extra means to also communicate with the internal error reporting engine, to start, stop and restart the entire process, to update its tag configuration, and, with a simple plugin mechanism, to add extra functionality when needed.
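     To show that a plain TCP/IP command channel really is not much code, here is a bare-bones POSIX C sketch; the single-request-per-connection protocol and the port number are purely hypothetical:

        /* Minimal TCP command server: accept a connection, read one
           request, send one reply. Port 5555 is an arbitrary example. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(void)
        {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {0};
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(5555);

            bind(srv, (struct sockaddr *)&addr, sizeof(addr));
            listen(srv, 5);

            for (;;) {
                int cli = accept(srv, NULL, NULL);
                if (cli < 0) continue;

                char req[256];
                ssize_t n = read(cli, req, sizeof(req) - 1);
                if (n > 0) {
                    req[n] = '\0';
                    /* look up the requested tag here and reply with its value */
                    const char *reply = "42.0\n";
                    write(cli, reply, strlen(reply));
                }
                close(cli);
            }
        }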
  13. Cross-posted here. It is considered polite to mention when you cross-post a question.
  14. I haven't used it either, but I was under the impression that it was basically a library of VIs that could be used in LabVIEW. And since the NXT is in principle a 32-bit CPU system, what they really were doing is using a LabVIEW Embedded development system targeted specifically to this NXT CPU. On top of that, the NXT software is an IDE that uses mainly XNodes, or whatever the current terminology is. So what I suspect is happening is that the NXT software user configures a software system using something similar to an entirely Express-based system, those Express nodes ultimately call into the NXT Toolkit VIs, and when you run the software, some or all of it gets compiled by the underlying C cross compiler into a module that can be deployed to the NXT hardware. This is just a guess, but it would be a good reason why there is in fact something like LabVIEW for Embedded at all, since this was the test bed par excellence for this technology.
  15. Everything in LabVIEW is ultimately written in C/C++. But yes, your diagram is converted directly into machine code and executed as such. That does not mean that LabVIEW creates the entire machine code itself, however. Most LabVIEW nodes, for instance, are NOT translated into machine code by LabVIEW but are simply some machine code wrapper created by LabVIEW that ultimately calls into functions in the LabVIEW runtime kernel. And this runtime kernel, the LabVIEW IDE and the LabVIEW compiler are all written in C/C++. And yes, more and more of the IDE itself is nowadays written in LabVIEW. But I agree that NXT and G are by far not the same from a user point of view. The programming paradigm used in the NXT environment is highly sequential and quite different from LabVIEW itself. It is LabVIEW Express on steroids, but without the real dataflow elements and the loop and other structures of normal LabVIEW programming. All that said, I wonder how those statistics are generated. Is it a user poll, counting publications on a language, websites using the according name, or something else? All of them can represent something, but whether they are any indication of real-world use would be something to investigate. And such an investigation will always be biased by what the investigators know and consider good programming.
  16. Those toolkits mentioned in the earlier post are incidentally LabVIEW libraries. Our toolkit was developed specifically for use with GSM modems built with the Wismo GSM engine used in Maestro and Wavecom modems. But most of it is according to ETSI standards, with some Wismo/Wavecom-specific syntax. Yes, it is not free, but it will give you a head start for sure.
  17. Up until and including LabVIEW 7.0 there was no real need to install it. I keep a backup copy of the entire LabVIEW folder for those versions and simply copy it to the actual machine when I want to test something. Of course this only works well for the LabVIEW part itself. If you have toolkits installed in those copies they are usually fine too; installing them afterwards will usually cause all kinds of problems. DAQ and other device IO drivers can sometimes work if already installed, but often cause quite a bit of hassle.
  18. My variant of Dan's VI, but this time without any .Net: LabVIEW 7.1 Network Path Name.vi. Rolf Kalbermatter, CIT Engineering Netherlands
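     Assuming the VI's job is resolving a mapped drive path to its UNC name, the native Win32 route without any .Net is the WNetGetUniversalName() API; a rough C sketch (the Z: path is just an example, link against mpr.lib):

        #include <windows.h>
        #include <winnetwk.h>
        #include <stdio.h>

        int main(void)
        {
            char buffer[1024];
            DWORD size = sizeof(buffer);
            UNIVERSAL_NAME_INFOA *info = (UNIVERSAL_NAME_INFOA *)buffer;

            /* Turns e.g. "Z:\data\log.txt" into "\\server\share\data\log.txt"
               if Z: is a mapped network drive. */
            if (WNetGetUniversalNameA("Z:\\data\\log.txt",
                                      UNIVERSAL_NAME_INFO_LEVEL,
                                      buffer, &size) == NO_ERROR)
                printf("UNC path: %s\n", info->lpUniversalName);
            return 0;
        }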
  19. I have one colleague who has this on his machine, with the same resolution. The reason this hasn't been fixed until 8.6 (and maybe also in 2009) is most probably that it has not been reported yet and/or is so hard to reproduce. I have the same OS (Windows XP) and use the same LabVIEW versions (actually more, as I have every version since 5.1 installed on my machine) and NEVER saw this behavior. I also never heard of it before from someone else. Rolf Kalbermatter, CIT Engineering Netherlands
  20. A pointer is simply a 32-bit integer, and in LabVIEW 8.6 and later a pointer-sized integer, as far as the Call Library Node is concerned. So the ppidl parameter of SHParseDisplayName() would be a pointer-sized integer passed by reference. Rolf Kalbermatter
  21. There is most likely an issue with the way you allocate the pidl. In fact you should not allocate it at all, as SHParseDisplayName() will do that for you and return the pointer. Allocating a pointer and telling LabVIEW that you pass a pointer to an array is, well... strange and not right at all. And at the end you want to deallocate that pidl with ILFree(). Rolf Kalbermatter
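     Put together, the correct call pattern looks roughly like this in C: pass an uninitialized pointer by reference, let the shell allocate the PIDL, and release it with ILFree() when done (link against shell32.lib and ole32.lib):

        #include <windows.h>
        #include <shlobj.h>

        int main(void)
        {
            CoInitialize(NULL);

            PIDLIST_ABSOLUTE pidl = NULL;   /* the function allocates this */
            HRESULT hr = SHParseDisplayName(L"C:\\Windows", NULL,
                                            &pidl, 0, NULL);
            if (SUCCEEDED(hr)) {
                /* ... use the pidl ... */
                ILFree(pidl);               /* caller must free it */
            }

            CoUninitialize();
            return 0;
        }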
  22. Do you use a (strict) typedef for the tab? And do you have the disconnect typedef option enabled in the build settings? There used to be a problem where typedefs that got disconnected lost their default value, and although I had the impression this had been fixed in the 8.2 or 8.5 version, I have in fact never really verified it to be fixed but simply haven't run into it anymore (though I do heavily initialize my front panels from configuration files anyhow, so I might simply not have seen it happening anymore), or it crept back in somehow. Rolf Kalbermatter
  23. Yes, the toolkits in 8.6 should have been like that already, though I think there has been a slip here and there. But the Vision Module is unfortunately not a toolkit in that sense: it is not maintained by the LabVIEW group but by a separate development group, and they did not seem able to get the installer adjusted to be more friendly to earlier versions. Rolf Kalbermatter
  24. There is another problem with the latest LabVIEW versions, including the Betas. Installing them will change, in many ways, the actual environment that your already installed earlier LabVIEW versions operate in. Various supporting components get updated, and toolkits get updated behind your back even in earlier versions (for instance, the Vision Module changes all VI libraries as far back as LabVIEW 7.1, and suddenly an application built in this earlier LabVIEW version AFTER the installation of LabVIEW 8.6 and the corresponding Vision Toolkit will only run with the Vision runtime 8.6). This was not really a problem a few years ago, but nowadays installing a new version of LabVIEW has many chances to (and usually will) update a lot of things that affect earlier installed LabVIEW versions. So be careful; I definitely won't install a Beta version on my development machine anymore. You can best use a virtual machine for that (or even better, since VMware tends to be a bit sluggish on my notebook, do it on a completely different machine that you can wipe afterwards). For that reason running Betas has been a major hassle, so I have done little Beta work with the latest two or three LabVIEW versions. Rolf Kalbermatter
  25. The locking of serial ports happens in the Windows serial port driver and has been like that at least since Windows NT, and probably even in Win95 and earlier, though I'm not sure about that. There is simply no practical use, in any but very special situations, in allowing two different applications access to a serial port at the same time. The already explained sharing race conditions simply make concurrent use of a resource like the serial port useless. To my knowledge there is no way to tell Windows to allow sharing serial ports between applications. And it is not that Windows accesses VISA driver ports; it is rather the opposite: VISA makes use of the Windows serial port COMM drivers to access the serial ports. As such, VISA is simply another user of the Windows serial ports; it is not an independent process but simply a DLL layer that translates the Windows COMM API to its own API, and therefore it makes no difference whether an application uses VISA or the COMM API directly to access the serial port. If one application has the port open, Windows will disallow any other application from opening that same port, independent of whether that application uses VISA or the Windows COMM API. I'm not really sure what you are trying to do here. As far as application access is concerned, it simply makes no sense to try to share a serial port between two or more processes. Rolf Kalbermatter
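     A small C demonstration of that exclusiveness: the second CreateFile() on an already opened COM port fails with ERROR_ACCESS_DENIED, no matter who opened it first (COM1 here is just an example):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Serial ports must be opened with share mode 0: the COMM
               driver hands the port to exactly one process at a time. */
            HANDLE h1 = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                                    0, NULL, OPEN_EXISTING, 0, NULL);
            HANDLE h2 = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                                    0, NULL, OPEN_EXISTING, 0, NULL);

            if (h1 != INVALID_HANDLE_VALUE && h2 == INVALID_HANDLE_VALUE &&
                GetLastError() == ERROR_ACCESS_DENIED)
                printf("second open refused, as expected\n");

            if (h1 != INVALID_HANDLE_VALUE)
                CloseHandle(h1);
            return 0;
        }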