Everything posted by Rolf Kalbermatter

  1. Unfortunately that is a behavior of the Windows shell File Dialog. On a successful file selection it apparently sets the application's current directory to the directory the selection was made in. Why the MS programmers thought this was a good idea, I do not know at all. There seems to be no way for an application to tell the dialog not to do that. The only LabVIEW workaround would be to keep resetting the current directory to its previous value after each dialog, possibly breaking whatever behavior was the reason for MS to add that functionality in the first place.
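
     A minimal C sketch of that workaround (not anything LabVIEW exposes directly; in LabVIEW the equivalent would be two Call Library Function Node calls around the file dialog, and the function name here is made up):

     ```c
     #include <windows.h>
     #include <commdlg.h>

     /* Remember the current directory before showing the file dialog and
        restore it afterwards, undoing the dialog's side effect. */
     BOOL ShowFileDialogKeepingCwd(OPENFILENAMEW *ofn)
     {
         wchar_t savedDir[MAX_PATH];
         DWORD len = GetCurrentDirectoryW(MAX_PATH, savedDir);
         BOOL ok = GetOpenFileNameW(ofn);    /* may change the current directory */
         if (len > 0 && len < MAX_PATH)
             SetCurrentDirectoryW(savedDir); /* reset to the previous value */
         return ok;
     }
     ```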
  2. I agree, but I was not aware that the FTP VIs would use that range. After all, they come from NI, and I would hope they do not clash there (I know they clash elsewhere!). But since the VIs are in source code and without passwords, the best person to go after that problem is the OP himself. A bit of single-stepping and debugging will surely reveal the offending operation.
  3. Might the problem be more on the sender side? I ask because 200 ms sounds a lot like the default delay of the TCP/IP Nagle algorithm. That delay is applied on the sender side to avoid sending lots and lots of small TCP/IP frames over the network. So reading 4 bytes and then the data may be no problem at all, while trying to do the same on the sender side might be. It is also my experience that on the reading side you can usually chunk up a packet into as many reads as you want (of course performance will suffer if you do a separate TCP/IP Read for every byte, but that is beside the point). On the sending side, on the other hand, it is usually a good idea to combine as much data as possible into one string and send it with a single TCP Write. That is at least how I usually do TCP/IP communication. Another option I have used at times is to enable the TCP_NODELAY socket option, but I have to admit I have never done that on an embedded controller so far. I am not even sure how to do that on a VxWorks controller, as its API is not really standard.
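
     For what it's worth, on a BSD-style socket API (the VxWorks flavor differs in details) the two sender-side options would look roughly like this sketch; the function names are mine:

     ```c
     #include <stdint.h>
     #include <string.h>
     #include <sys/socket.h>
     #include <netinet/in.h>
     #include <netinet/tcp.h>   /* TCP_NODELAY */
     #include <arpa/inet.h>     /* htonl */

     /* Option 1: build the 4-byte length prefix and the payload into one
        buffer so a single send() call hands the whole message to the stack. */
     int SendFramed(int sock, const char *data, uint32_t len)
     {
         char buf[4 + 1024];
         if (len > sizeof(buf) - 4)
             return -1;
         uint32_t netLen = htonl(len);    /* prefix in network byte order */
         memcpy(buf, &netLen, 4);
         memcpy(buf + 4, data, len);
         return (int)send(sock, buf, 4 + len, 0);
     }

     /* Option 2: switch off the Nagle algorithm so small writes are not
        held back waiting to be coalesced with more data. */
     int DisableNagle(int sock)
     {
         int on = 1;
         return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
     }
     ```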
  4. You are right, but 1xxx errors are LabVIEW environment type errors (think VI Server, scripting, etc.), so I really wonder how one of those could get into FTP VIs where simple file IO is handled. I haven't looked at the FTP VIs in ages since I use my own library, but maybe they use VI Server for some reentrant calls or something in there? With FTP this could mean that the login was not successful.
  5. I've seen it too many times, but it seems to be mainly limited to using the Browse dialog. Normal LabVIEW File I/O primitives usually don't seem to cause that problem. That made me believe it is something the Browse dialog is causing, and that dialog is largely part of the OS itself. Maybe it's the way LabVIEW is using that dialog. An interesting experiment I have wanted to do for ages is to use the LabVIEW file dialog instead and see if the problem persists. The problem is that I have not found a reliable way to reproduce the issue.
  6. Ohh, ohh, you got that very much mixed up, mate. LabVIEW 1 was Mac only and so was LabVIEW 2. It did not look like LabWindows at all, but like a Macintosh application. LabWindows was for DOS, it was programmed in Basic or a subset of C, and its graphical UI was far from what a Macintosh could do, although much better than most of what could otherwise be done on DOS. The first LabVIEW version for Windows was 2.5, but that was really a preview release, more like a glorified alpha. Its stability was... well, nothing to write home about, but then it ran on Windows 3.1, and LabVIEW was probably one of the few applications out there exercising the Windows 3.1 kernel to limits MS had never imagined. The first official LabVIEW version for Windows was 3.0, followed by numerous bug fix releases, with 3.1, if memory serves right, adding SunOS support (which many years later was renamed Solaris 1 by Sun). Somewhere around 3.1 the Macintosh version also got back in sync with the multiplatform release, having been sold until then as version 2.2.1 in its old non-multiplatform form, with its Mac-only resource format files.

     The UI has had quite a few changes IMO, with new objects being added regularly. Yes, the basic concept hasn't changed much, and I wish they had overhauled things like the custom control editor to allow much better customization of controls. The current editor is from the beginnings of LabVIEW and simply not very intuitive in many ways. Also, some of the newer controls seem not to have gotten the internal object message support to be customizable in that editor at all. If I knew this was because they internally support a different object message interface for a yet-to-come new custom control editor, I could more easily live with that, but I have my doubts.
  7. No, I think integrating LabVIEW modules as DLLs into a LabVIEW application is a fairly roundabout way of doing business. It is possible and even works most of the time, but there are gotchas (a sketch illustrating point 2 follows after this list):
     1) The DLL interface really limits the types of parameters you can pass to the module and retrieve back.
     2) There is a LabVIEW => C => LabVIEW parameter translation for all non-flat parameters (arrays and strings) unless you use LabVIEW native datatypes (handles) AND the DLL is in the same version as the caller => slowing down the call.
     3) When the versions don't match, there is also a proxy marshalling of all data parameters necessary, much like what ActiveX does for out-of-process servers (though it is not the same marshalling mechanism as in ActiveX), since the DLL and the caller really execute in two different processes => slowing down the call.
     4) The DLL cannot communicate with the caller through anything but its parameter interface or platform system resources (events, etc.). Notifiers, LabVIEW events, semaphores, etc. are maybe shared and meaningful when DLL and caller use the same version, but certainly completely useless if the DLL is in a different LabVIEW version than the caller.
     There are probably a few more gotchas that I can't think of at the moment.
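
     Here is a hedged C sketch of the two string-passing styles such a DLL export could use; the LStr definition is a simplified copy of the one in LabVIEW's extcode.h, and the two function names are made up:

     ```c
     #include <stdint.h>

     /* Simplified native LabVIEW string type from extcode.h: a handle
        (pointer to pointer) to a length-prefixed, non-terminated byte array. */
     typedef struct {
         int32_t cnt;      /* number of bytes that follow */
         uint8_t str[1];   /* NOT NUL terminated */
     } LStr, *LStrPtr, **LStrHandle;

     /* C-style export: LabVIEW converts its string to and from a
        NUL-terminated copy on every call (the translation overhead above). */
     extern int32_t ProcessDataCStr(char *data, int32_t bufLen);

     /* Native export: the LabVIEW string handle passes straight through
        unconverted, but only when DLL and caller share the LabVIEW version. */
     extern int32_t ProcessDataNative(LStrHandle data);
     ```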
  8. You should also give more information about the error number and text you receive, and a bit more background such as the LabVIEW version, etc. Also post the test VI you were executing when you got the errors. Your description of what you are doing is rather compressed and not very clear.
  9. Or they are simply too smart. You know the pink monkey experiment, don't you? Belonging to a 2% minority is like being a pink monkey, especially in a society where everybody is at animal level.
  10. Actually, that is not true either. Generally, compiled VIs from an x.x.1 version can be loaded into an x.x.0 runtime and vice versa, and executed. In some corner cases that could give strange (visual) effects or calculation artifacts, but in general it works. Before LabVIEW 8.5, however, if you loaded a VI that was not EXACTLY the same version into the development system, it always got recompiled automatically. That is true if you load the VI through VI Server. It is not true if you compile those VIs into a DLL and call that DLL through the Call Library Node. If the LabVIEW version the DLL was created with matches the caller version, the VIs in that DLL are loaded into the current LabVIEW system and executed there. If the versions do not match (not sure about bug fix version differences here), the DLL is loaded through the corresponding runtime system and run that way.
  11. I usually go the other way around, doing the work in the older version and testing afterwards that it still works in the newer one. Of course, if you have to support the runtime rather than the development system, you won't be able to avoid building a runtime distribution for each of these versions. But then the question is why they would not be able to install a newer runtime version. After all, you can install several runtime versions alongside each other with no problem. In fact, if you install driver software from NI, such as NI-VISA, DAQmx, etc., you already have at least two to three different runtime versions installed, since various tools and utilities in there were developed in various LabVIEW versions.
  12. But if you want to use those sensors on humans, you will not want to build the isolation yourself. There is basically no way you can get the necessary approvals on your own, and without them you can get into real trouble if a patient suddenly feels sick after having been in physical contact with sensors isolated by your circuitry. And getting sued can be very expensive.
  13. Actually, you can verify this further by using the Path To String Array function. You will see that the first element is \\server\share, as unintuitive as that may seem. A quick test in LabVIEW 6 showed that Strip Path on such a path returns Not A Path and an empty name, but doesn't crash. So it seems someone has worked on that functionality to make it a bit more consistent, but may have messed something up, or used a technique that ran into trouble with later LabVIEW memory optimizations.
  14. Actually, I would concur that this last one is in principle an invalid operation. A path should point to a valid file system location, and the server name alone is not such a thing. You cannot, even on Windows, take a server name alone and use any file IO operations on it. You have, for instance, to use special network APIs to enumerate the contents of a domain/workgroup or a server. LabVIEW has so far never extended the file IO functions to support network resources directly, most probably because that is quite a flaky thing to do under the various versions of Windows. Long ago I wrote a "List Network Resources" CIN that was meant to provide the same functionality for network domains and servers as the native List Directory does for file systems, and it had all kinds of nasty problems, one of them being that the LabVIEW path type isn't exactly suited to represent such a thing in an easy way. Of course Strip Path should definitely not crash on such an operation, but for the rest I would say it operates properly by not returning a valid stripped path for that resource.
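
     For the curious, the Windows side of such an enumeration goes through the WNet API; a rough sketch (error handling trimmed, names mine) of what listing the shares below one server involves:

     ```c
     #include <windows.h>
     #include <winnetwk.h>
     #include <stdio.h>
     #pragma comment(lib, "mpr.lib")

     /* Roughly what a "List Network Resources" style helper has to do:
        enumerate the shares below a server via WNetOpenEnum/WNetEnumResource. */
     int ListServerShares(wchar_t *serverName)   /* e.g. L"\\\\myserver" */
     {
         NETRESOURCEW root = {0};
         root.dwType = RESOURCETYPE_ANY;
         root.lpRemoteName = serverName;

         HANDLE hEnum;
         if (WNetOpenEnumW(RESOURCE_GLOBALNET, RESOURCETYPE_ANY, 0,
                           &root, &hEnum) != NO_ERROR)
             return -1;

         BYTE buffer[16384];
         DWORD count = (DWORD)-1, size = sizeof(buffer);
         while (WNetEnumResourceW(hEnum, &count, buffer, &size) == NO_ERROR) {
             NETRESOURCEW *res = (NETRESOURCEW *)buffer;
             for (DWORD i = 0; i < count; i++)
                 wprintf(L"%s\n", res[i].lpRemoteName);
             count = (DWORD)-1;        /* ask for as many entries as fit again */
             size = sizeof(buffer);
         }
         WNetCloseEnum(hEnum);
         return 0;
     }
     ```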
  15. And that is where the problem starts. There are myriads of embedded developer toolkits, each with its own choice from a longer or not so long list of hardware boards: ARM CPUs from NXP, Intel, Atmel, TI, Samsung, Motorola, Cirrus Logic, etc., Freescale's ColdFire and PowerPC CPUs, MIPS, Atmel AVR32, National Semiconductor, Hitachi SuperH. Each of these CPUs has its own on-chip hardware selection of AI/AO, DIO, timers, Ethernet, USB, serial, SPI, I2C, CAN, JTAG, display interfaces, security engines, etc., with a very different register programming interface even for the same functionality, not to mention external components on the development board that extend the variation even more. Even if NI licensed VxWorks or a similar OS for some of these CPU platforms (which they in fact do, since the Embedded Toolkit makes use of the RT kernel that comes with the Keil tools), they still would not have board level drivers for all the possible hardware out there, not to speak of modifications you might want to make to the development kit hardware for your own product, such as replacing a 2-line display with a 12-line display. Such a change may seem trivial, but often it involves not just changing a variable somewhere but initializing and programming a completely different register set. So I do not think that you can get much more out of the box currently.

      How much LabVIEW Embedded really answers a market demand is a different question. It cannot possibly guarantee you a LabVIEW-only experience once you want to change even little things in the hardware design of the developer board that came with your kit, and that is what embedded design is often about. I doubt that many use the original developer board 1:1 in an end user product, so where I see its benefit is in prototyping and possibly "one or a few of a kind" test scenarios where you can work with the hardware as it comes in the box, or at most need to make only very small changes to its external peripherals to reduce the work at the C level to a minimum. While NI sells the Embedded Toolkit as a LabVIEW product, they make AFAIK no claims that you will not have to go down to the C level once you start making changes to the hardware, and even to the toolchain level if you want to adapt it to your own CPU and/or platform. But for those scenarios a cRIO system would seem more beneficial to me. Its extra cost is not really an issue if you are only going to build one or a few of those systems.
  16. Look for the FrameNames property of the CaseSel class. It is an array of strings, much like the Strings[] property of enums.
  17. Welcome Marius! I remember the days in Austin about 17 years ago. Good luck with your new business!
  18. Think about it! There is no other way to make this feasible. The Embedded development system simply converts the VIs to C code and compiles that with the C toolchain for the target system. And just as C coders are ten a penny, there is one impressive C compiler that works for almost all hardware, namely gcc. NI could spend hundreds of man-years trying to write a LabVIEW compiler engine for every possible embedded hardware target out there and would not get anywhere. By converting everything to C and letting gcc (or whatever toolchain a specific embedded target comes with) deal with it, they can limit the development to a manageable scope. And of course the direct communication with hardware resources has to be dealt with in C somehow. There is simply no other way: the LabVIEW system cannot possibly abstract the 10000 different hardware targets in such a way that you would not need to do that. On Windows you usually get away without it, since there are enormous driver suites such as DAQmx and many more that take care of the low level nitty-gritty details like interrupts, registers, DMA, etc. On an embedded target, for which NI has at best had one board in their lab to work with, that is not a feasible option. If you need an out-of-the-box experience you should not look at embedded hardware. You are unlikely to use the development kit board in an end product anyhow, so the out-of-the-box experience stops right there. A much better solution for an out-of-the-box experience would be cRIO or maybe sbRIO.
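
      Purely as an illustration of the idea (this is NOT what NI's generator actually emits, just a hypothetical hand-rendering), a trivial scale-and-offset diagram could end up as nothing more exotic than:

     ```c
     /* Hypothetical C rendering of a trivial "scale and offset" diagram,
        of the kind a VI-to-C translator could emit and then leave to gcc
        (or the target's own toolchain) to compile for the target CPU. */
     static double ScaleAndOffset(double in, double gain, double offset)
     {
         double scaled = in * gain;   /* the Multiply node */
         return scaled + offset;      /* the Add node */
     }
     ```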
  19. A ring control is only a number. Its datatype does not contain any information about how many elements the ring control contains, since that can be changed at runtime through one of its properties, while datatype information has to be defined at compile time. So there is no automatic way to have a case structure adapt to a ring control, since there is nothing to adapt to, in contrast to when you do the same with an enum. The scripting interface of the case structure should have a property that is an array of strings, and that should allow you to define the number of cases and their values.
  20. Well, I do have a complete tag engine inside my RT app. It is basically a loop that communicates with the rest of the RT system through queues: two queues for input and output to the IO servers, another queue for application inputs to writeable tags, a buffer holding the current value of all tags based on an internal index, and another buffer for an extra calculation engine that can calculate virtual tag values from a user-defined formula depending on other tags. All these queues and buffers are so-called intelligent global variables, and a lot of care has been taken to make sure that as much of the parsing, calculation and preparation as possible is done once when the engine starts up, so that CPU load is minimized during normal operation. This resulted in an engine that could easily run 200 tags on the lowest end Compact FieldPoint controller, as long as the LabVIEW VI Server is not started. In addition there is a TCP/IP server that can retrieve and update any tag in the engine as well as update its runtime attributes such as scaling, limits, alarms, etc. It can also update the tag configuration and shut down or restart the entire engine. The tag configuration itself is done in a LabVIEW application on the host machine.

      Yes, it is a complicated design in some ways, and one that has evolved over more than 10 years from a fairly extensive tag based datalogger engine on a desktop machine into a more or less fully fledged SCADA system that can also be deployed to an RT system. The only problem with it is that its components are fairly tightly coupled and I did not always write nice wrappers for the different functionality, so it is quite hard for someone else to go in there and make even small changes. If you want to go in such a direction I would actually recommend you look at the design patterns NI has released through its Systems Engineering group. They have some very nice tools that go a long way in this same direction. If I had to start again from scratch I would probably go with that, even though not all components are available in LabVIEW source code. But by the time they released that stuff, my system was already developed and running for my needs, and it has a few technical advantages; also, the fact that I can go in and change whatever strikes my fancy for a new project is an extra bonus.
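
      As a rough C analogue of one of those intelligent globals (the LabVIEW original is of course a functional global VI; all names and sizes here are illustrative), the current value buffer boils down to a locked table:

     ```c
     #include <pthread.h>

     /* Illustrative analogue of the "current value" intelligent global:
        a table of tag values behind a lock, written by the IO server loop
        and read by the TCP/IP server and the calculation engine. */
     #define MAX_TAGS 200

     typedef struct {
         double   value;      /* scaled engineering value */
         unsigned timestamp;  /* e.g. ms tick of the last write */
         int      quality;    /* 0 = good, otherwise an IO error code */
     } TagValue;

     static TagValue        g_table[MAX_TAGS];
     static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;

     void TagWrite(int index, TagValue v)   /* IO server side */
     {
         pthread_mutex_lock(&g_lock);
         g_table[index] = v;
         pthread_mutex_unlock(&g_lock);
     }

     TagValue TagRead(int index)   /* TCP server / calculation engine side */
     {
         pthread_mutex_lock(&g_lock);
         TagValue v = g_table[index];
         pthread_mutex_unlock(&g_lock);
         return v;
     }
     ```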
  21. Congratulations Jim! She is very lovely.
  22. I would feel quite unhappy having a WebService inside my RT control application. Its advantage of easier communication with other parts of the RT program seems to me outweighed manifold by the extra complexity that creeps into the RT process. IPC through shared variables or TCP/IP communication (my preference) may not seem as elegant at first, but it is really not that complicated to implement, especially if you have created a template for this before. My RT system design looks a little different in its details but quite the same in overall architecture. It originally stems from the need to have an easy communication link to the host for monitoring some or all of the tag based variables. But over time I have added extra means to communicate with the internal error reporting engine, to start, stop and restart the entire process, to update its tag configuration, and, with a simple plugin mechanism, to add extra functionality when needed.
  23. Cross post here. It is considered polite to mention it when you cross-post a question.
  24. I haven't used it either, but I was under the impression that it is basically a library of VIs that can be used in LabVIEW. And since the NXT is in principle a 32-bit CPU system, what they really were doing is using a LabVIEW embedded development system targeted specifically at this NXT CPU. On top of that, the NXT software is an IDE that mainly uses XNodes, or whatever the current terminology is. So what I suspect is happening is that the NXT software user configures a software system using something similar to an entirely Express based system; those Express nodes ultimately call into the NXT Toolkit VIs, and when you run the software, some or all of it gets compiled by the underlying C cross compiler into a module that can be deployed to the NXT hardware. This is just a guess, but it would be a good reason why there is something like LabVIEW for Embedded at all, since this was the test bed par excellence for that technology.
  25. Everything in LabVIEW is ultimately written in C/C++. But yes, your diagram is converted directly into machine code and executed as such. That does not mean that LabVIEW creates the entire machine code itself, however. Most LabVIEW nodes, for instance, are NOT translated into machine code by LabVIEW; instead LabVIEW creates a small machine code wrapper that ultimately calls functions in the LabVIEW runtime kernel. And this runtime kernel, the LabVIEW IDE and the LabVIEW compiler are all written in C/C++. And yes, more and more of the IDE itself is nowadays written in LabVIEW. But I agree that NXT and G are by far not the same from a user point of view. The programming paradigm used in the NXT environment is highly sequential and quite different from LabVIEW itself. It is LabVIEW Express on steroids, but without the real dataflow elements and the loops and other structures of normal LabVIEW programming.

      All that said, I wonder how those statistics are generated. Is it a user poll, a count of publications on a language, websites using the according name, or something else? All of them can represent something, but whether any of them indicates real world use would be something to investigate. And such an investigation will always be biased by what the investigators know and consider good programming.