Everything posted by Rolf Kalbermatter
-
Think about it! There is no other way to make this feasible. The Embedded development system simply converts the VIs to C code and compiles them with the C tool chain for the target system. While C coders are ten a penny, there is one impressive C compiler that works for almost all hardware, namely gcc. NI could spend hundreds of man-years trying to write a LabVIEW compiler engine for every possible embedded hardware target out there and they would not get anywhere. By converting everything into C and letting gcc (or whatever tool chain a specific embedded target comes with) deal with it, they can limit the development to a manageable scope.
And of course the direct communication with hardware resources has to be dealt with in C somehow. There is simply no other way. The LabVIEW system cannot possibly abstract the 10,000 different hardware targets in such a way that you would not need to do that. On Windows you usually get away without it, since there are enormous driver suites such as DAQmx and many more that take care of the low-level nitty-gritty details like interrupts, registers, DMA, etc. On an embedded target, for which NI has at best had a board in their lab to work with, this is not a feasible option.
If you need an out-of-the-box experience you should not look at embedded hardware. You are not likely to use the development kit board in an end product anyhow, so the out-of-the-box experience stops there already. A much better solution for an out-of-the-box experience would be cRIO or maybe sRIO.
-
A ring control is just a number. Its datatype does not contain any information about how many elements the ring control holds, since that can be changed at runtime through one of its properties, while datatype information has to be defined at compile time. So there is no automatic way to have a case structure adapt to a ring control, because there is nothing to adapt to; this is different from an enum, whose labels are part of its datatype. The scripting interface of the case structure should have a property that takes an array of strings, which would allow you to define the number of cases and their values.
-
Well, I do have a complete tag engine inside my RT app. It is basically a loop that communicates with the rest of the RT system through queues: two queues for input and output to the IO servers, a queue for application inputs to writable tags, a buffer for the current value of all tags based on the internal index, and another buffer for an extra calculation engine that can calculate virtual tag values based on a user-defined formula depending on other tags. All these queues and buffers are so-called intelligent global variables, and a lot of care has been taken to make sure that as much as possible of the parsing, calculations and preparations is done once when the engine starts up, so that CPU load is minimized during normal operation. This resulted in an engine that could easily run 200 tags on the lowest-end Compact FieldPoint controller, as long as the LabVIEW VI Server is not started.
In addition there is a TCP/IP server that can retrieve and update any tag in the engine as well as update its runtime attributes such as scaling, limits, alarms etc. It can also update the tag configuration and shut down or restart the entire engine. The tag configuration itself is done in a LabVIEW application on the host machine.
Yes, it is a complicated design in some ways, and one that has evolved over more than 10 years from a fairly extensive tag-based datalogger engine on a desktop machine to a more or less fully fledged SCADA system that can also be deployed to an RT system. The only problem with it is that its components are fairly tightly coupled, and I did not always write nice wrappers for the different functionality, so it is quite hard for someone else to go in there and make even small changes to it.
If you want to go in such a direction I would actually recommend you look at the design patterns NI has released through its Systems Engineering group. They have some very nice tools that go a long way in this same direction. If I had to start again from scratch I would probably go with those, even though not all components are available in LabVIEW source code. But by the time they released that stuff, my system was already developed and running for my needs; it has a few technical advantages, and the fact that I can go in and change whatever strikes my fancy for a new project is an extra bonus.
-
Congratulations Jim! She is very lovely.
-
I would feel quite unhappy having a web service inside my RT control application. Its advantage of easier communication with other parts of the RT program seems to me far outweighed by the extra complexity that creeps into the RT process. IPC through shared variables or TCP/IP communication (my preference) may not seem so elegant at first, but it is really not that complicated to implement, especially if you have created a template for this before.
My RT system design looks a little different in its details but quite the same in overall architecture. It originally stems from the need to have an easy communication link to the host for monitoring some or all of the tag-based variables. But I have added extra means over time: to communicate with the internal error reporting engine, to start, stop and restart the entire process, to update its tag configuration, and, with a simple plugin mechanism, to add extra functionality when needed.
-
Cross post here. It is considered polite to mention when you cross-post a question.
-
I haven't used it either, but I was under the impression that it was basically a library of VIs that could be used in LabVIEW. And since the NXT is in principle a 32-bit CPU system, what they really were doing was using a LabVIEW embedded development system targeted specifically to this NXT CPU. On top of that, the NXT software is an IDE that uses mainly XNodes, or whatever the current terminology is. So what I suspect happens is that the NXT software user configures a software system using something similar to an entirely Express-based system; those Express nodes ultimately call into the NXT Toolkit VIs, and when you run the software, some or all of it gets compiled by the underlying C cross compiler into a module that can be deployed to the NXT hardware. This is just a guess, but it would be a good reason why there is in fact something like LabVIEW Embedded at all, since this was the test bed par excellence for this technology.
-
Everything in LabVIEW is ultimately written in C/C++. But yes, your diagram is converted directly into machine code and executed as such. That does not mean that LabVIEW creates the entire machine code itself, however. Most LabVIEW nodes, for instance, are NOT translated into machine code by LabVIEW; they are simply a small machine code wrapper created by LabVIEW that ultimately calls into functions in the LabVIEW runtime kernel. And this runtime kernel, the LabVIEW IDE and the LabVIEW compiler are all written in C/C++. And yes, more and more of the IDE itself is nowadays written in LabVIEW.
But I agree that NXT and G are by far not the same from a user point of view. The programming paradigm used in the NXT environment is highly sequential and quite different from LabVIEW itself. It is LabVIEW Express on steroids, but without the real dataflow elements and the loop and other structures of normal LabVIEW programming.
All that said, I wonder how those statistics are generated. Is it a user poll, a count of publications on a language, websites using the according name, or something else? All of them can represent something, but whether any of them is an indication of real-world use would be something to investigate. And such an investigation will always be biased by what the investigators know and consider good programming.
-
Sending sms to mobile phone using GSM modem
Rolf Kalbermatter replied to Fellie's topic in LabVIEW General
Those toolkits mentioned in the earlier post are incidentally LabVIEW libraries. Our toolkit was developed specifically for use with GSM modems built with the Wismo GSM engine used in Maestro and Wavecom modems, but most of it follows the ETSI standards, with some Wismo/Wavecom-specific syntax. Yes, it is not free, but it will give you a head start for sure. -
Getting a new PC. How to reload 2009 and old versions?
Rolf Kalbermatter replied to george seifert's topic in LabVIEW General
Up until and including LabVIEW 7.0 there was no real need to install it. I keep a backup copy of the entire LabVIEW folder for those versions and simply copy it to the actual machine when I want to test something. Of course this only works well for the LabVIEW part itself. If you have toolkits installed in those copies they are usually fine too; installing them afterwards will usually cause all kinds of problems. DAQ and other device IO drivers can sometimes work if already installed, but often cause quite a bit of hassle. -
My variant of Dan's VI, but this time without any .Net. LabVIEW 7.1 Network Path Name.vi Rolf Kalbermatter CIT Engineering Netherlands
-
I have one colleague who has this on his machine, with the same resolution. The reason this hasn't been fixed up to 8.6 (and maybe not in 2009 either) is most probably that it has not been reported yet and/or is very hard to reproduce. I have the same OS (Windows XP) and use the same LabVIEW versions (actually more, as I have every version since 5.1 installed on my machine) and NEVER saw this behavior. I also never heard of it before from someone else. Rolf Kalbermatter CIT Engineering Netherlands
-
A pointer is simply a 32-bit integer, and in LabVIEW 8.6 and later a pointer-sized integer, as far as the Call Library Node is concerned. So the ppidl parameter of SHParseDisplayName() would be a pointer-sized integer passed by reference. Rolf Kalbermatter
-
There is most likely an issue with the way you allocate the pidl. In fact you should not allocate it at all, as SHParseDisplayName() will do that for you and return the pointer. Allocating a pointer and telling LabVIEW that you pass a pointer to an array is, well... strange and not right at all. And at the end you want to deallocate that pidl with ILFree(). Rolf Kalbermatter
-
2009, Make current values default... Again....
Rolf Kalbermatter replied to lecroy's topic in Development Environment (IDE)
Do you use a (strict) typedef for the tab? And do you have the disconnect typedef option enabled in the build settings? There used to be a problem where typedefs that got disconnected lost their default value. Although I had the impression that this had been fixed in the 8.2 or 8.5 version, I have in fact never really verified it to be fixed, but simply not run into it anymore (I do heavily initialize my front panels from configuration files anyhow, so I might simply not have seen it happening anymore), or it crept back in somehow. Rolf Kalbermatter -
Yes, the toolkits in 8.6 should have been like that already, though I think there has been a slip here and there. But the Vision Module is unfortunately not a toolkit in that sense: it is not maintained by the LabVIEW group but by a separate development group, and they did not seem to be able to get the installer adjusted to be more friendly to earlier versions. Rolf Kalbermatter
-
There is another problem with the latest LabVIEW versions, including the Betas. Installing them will change, in many ways, the actual environment your earlier installed LabVIEW versions operate in. Various supporting components get updated, and toolkits get updated behind your back even in earlier versions. For instance, the Vision Module changes all VI libraries as far back as LabVIEW 7.1, and suddenly an application built in this earlier LabVIEW version AFTER the installation of LabVIEW 8.6 and the corresponding Vision Toolkit will only run with the Vision runtime 8.6. This was not really a problem a few years ago, but nowadays installing a new version of LabVIEW has a good chance of updating (and usually will update) a lot of things that affect earlier installed LabVIEW versions.
So be careful; I definitely won't install a Beta version on my development machine anymore. You can best use a virtual machine for that (or even better, since VMware tends to be a bit sluggish on my notebook, do it on a completely different machine that you can wipe afterwards). Because of that, running Betas has been a major hassle for some time, so I have done little Beta work with the latest two or three LabVIEW versions. Rolf Kalbermatter
-
The locking of serial ports happens in the Windows serial port driver, and it has been like that at least since Windows NT, probably even in Win95 and earlier, but I'm not sure about that. There is simply no practical use, in any but very special situations, in allowing two different applications access to a serial port at the same time. The sharing race conditions already explained simply make concurrent use of a resource like the serial port useless. To my knowledge there is no way to tell Windows to allow sharing serial ports between applications.
And Windows does not allow applications to access VISA driver ports; it is rather the opposite. VISA makes use of the Windows serial port COMM drivers to access the serial ports. As such, VISA is simply another user of the Windows serial ports; it is not an independent process but simply a DLL layer that translates the Windows COMM API to its own API, so it makes no difference whether an application uses VISA or the COMM API directly to access the serial port. If one application has the port open, Windows will disallow any other application to open that same port, regardless of whether that application uses VISA or the Windows COMM API.
I'm not really sure what you are trying to do here. As far as application access is concerned, it simply makes no sense to try to share a serial port between two or more processes. Rolf Kalbermatter
-
2009, Make current values default... Again....
Rolf Kalbermatter replied to lecroy's topic in Development Environment (IDE)
I have the same stance, and in fact I wouldn't even notice that bug. I always initialize tab controls explicitly in the constructor state of my state machines. Don't ask me why, it just feels better. Rolf Kalbermatter -
diffrent of "property->Value" and "Local Variable"
Rolf Kalbermatter replied to MViControl's topic in LabVIEW General
The thread swapping is only one aspect of the difference. A Value property always operates synchronously. That means it will wait until the new data has been drawn in the UI (quite logical: since it already swapped to the UI thread and incurred that overhead anyhow, it might as well do the drawing too). A local variable will behave mostly like a terminal. Yes, there is an additional copy involved, but the worst thing about locals (and globals and Value properties) is that you can easily create race conditions, and those funnies can be very hard to debug, since the act of debugging can change the execution order and create or avoid such race conditions randomly.
As long as the according control is not set to operate synchronously, updating a local or terminal will NOT wait until the new value is updated on screen. It will simply drop the new value into a buffer of that control and go on. The UI thread periodically checks for controls that need an update and redraws them. This has the advantage that a control that gets updated in a loop will not slow down that loop very much.
You can make a small experiment: create a loop that executes 10000 times and just produces some random value. In one case, wire that random number inside the loop to a terminal or local not set to be synchronous (the default). In the other case, wire it to a terminal or local set to be synchronous, or to a Value property. You should see the asynchronous terminal executing fastest, directly followed by the asynchronous local; then, far slower, the synchronous terminal and local (really little difference here); and last but not least the Value property, which will be the slowest by far.
The reason is that updating a control takes a long time compared to computing numbers and shuffling data around in memory. So in asynchronous mode the loop iterates many thousands of times per second, while only maybe 60 of those values are really shown on the front panel. And if someone now says: "What? LabVIEW does not show me all values!", then think again. You can't even see those 60 values, as they change way too fast for the human eye, so why try to update more often than that? In synchronous mode (and the Value property is inherently synchronous, with no way to avoid that) the loop will only iterate in the hundreds per second, as for each new value the control has to be redrawn completely. Rolf Kalbermatter -
Because this Browse for Folder is really useless on older Windows platforms. Before XP it is simply a non-resizable mini dialog that is of no use at all for serious Browse for Folder work. Also, anything like this they do on Windows they have to port somehow to Linux and Mac too. Mac probably has such a thing, but I doubt you will find such a native dialog specifically for folders on Linux.
And why it doesn't work with anything but the desktop for the PIDL is logical. A PIDL is not an enum but a binary data structure with private content that describes a path in terms of the shell namespace. If you select Desktop, this is NULL, so BrowseForFolder interprets it as a NULL PIDL, which happens to be the Desktop too. But in all other cases it interprets the value as a pointer (oops, a pointer to address 1; it's a surprise that it didn't immediately crash). To create PIDLs there are specific shell32 APIs that take a path and return the corresponding PIDL, such as SHParseDisplayName(). Or you could use SHGetSpecialFolderLocation(), which probably could translate your enum directly into a PIDL, though I didn't check whether the CSIDL enum of that function matches your enum. Rolf Kalbermatter
-
ILFree() was an undocumented shell32 function since Win95; Microsoft documented it after the big monopolist case they got into, which was I believe after Win2000 was released. Those undocumented functions were exported by shell32, not by name but only by ordinal. The ordinal number for ILFree() is 155. And now the nice part: the Call Library Node supports importing by ordinal. Just enter the number into the library name field. PS: ILFree() is indeed just a wrapper around CoTaskMemFree(), even on older platforms. Rolf Kalbermatter
-
We probably did. Well, there is another solution, although in a sense it is in fact a global too, and your original solution with the IMAQ ref already goes a long way in that direction. Make that VI an Action Engine, or as I call them, an intelligent global with a method selector. Call the Init method from the pre-sequence, where it opens the resources (and that daemon stays in memory, polling the same or another intelligent global for the stop event). Then the Execute method or methods do whatever needs to be done on those resources, and the Close method is called from the post-sequence step; it closes all resources as well as setting the quit event for the daemon. Rolf Kalbermatter
-
Well, but that is about the VI refnum itself. It does not solve the issue with other refnums opened inside that VI once that VI goes idle (even when autodispose is set to true when opening the VI ref and that VI does not close its refnum itself). In TestStand the simplest way to open common resources is to open them in a pre-sequence step; however, that step simply runs and then stops. The only way to circumvent that is to keep the VI you are launching in the pre-sequence step running as a daemon and then shut it down in the post-sequence step. Rolf Kalbermatter
-
Naming Conventions and Project Organization
Rolf Kalbermatter replied to Daryl's topic in Application Design & Architecture
Aaah, well, that clears up a lot. It sounded a bit like you wanted to tell everybody on LAVA how to do things. And that is something doomed to fail, because after all we are LabVIEW Advanced Virtual Architects, and no advanced guy/gal accepts someone else trying to tell him/her the right way to do things. In your situation, even if you are in a position where you have the power to force people into doing something, the best thing would be education. Try to establish some training sessions. Nothing fancy or too formal, but something like half an hour each week where you get all the people together and show them some ways to get well organized. Make it interactive, and let them work out the solutions you want them to learn. Turn a group of individual fighters into a real team. And most important, don't expect this to change everything in two weeks. Be persistent without being overbearing. It will not change after the first time, and only slightly after the 5th. But there will be change over time, and as people see that it is more effective to do things in a somewhat more organized way, it will without doubt have an impact. Rolf Kalbermatter