Everything posted by Rolf Kalbermatter
-
Industrial computers are usually more ruggedized and less state of the art. This has several implications:
- They can operate in much less friendly environments. Normal desktop computers are not really designed to work in dusty, oily, or high-temperature environments at all.
- Not using the latest hardware chips also means that standard drivers are much more likely to work out of the box, without requiring regular driver updates to remedy crashes and other nasties.
- If you already have a 19" rack anyhow, they integrate nicely into the setup, whereas a standard desktop PC will always stand somewhere aside taking up space, standing in the way, or just lying ad hoc in the rack, possibly falling over or out of it.
- The extra cost is usually negligible in anything but the most low-cost hardware setup, compared to the rest of the hardware and especially the software application.
-
You also need to close the front panel as the last action after the VI is finished, and make sure you don't hold on to VI references anywhere either. A VI is kept in memory by either an open VI front panel or an open VI reference (which can be managed by the VI itself, either by opening such a refnum explicitly or by passing refnum ownership to the VI when you launch it with the Run method). Once the last VI has left memory, a LabVIEW application will terminate itself. If it doesn't, you either still have a VI open somewhere, maybe with a hidden front panel (hidden does not mean closed), or are most likely stuck in a synchronous call to external code, such as a DLL call. And here Quit LabVIEW can't help either, as Quit LabVIEW has no more power than the Abort button, and that one can't terminate stuck external calls either. Here a task kill is the only solution if you can't somehow persuade the external code to relinquish control back to the calling LabVIEW program.
-
While it seems like a bug, I would like to point out that "Quit LabVIEW" is almost never the right thing to do. If you build an application, it will normally quit as soon as the last LabVIEW window is closed, either through the button in the upper right corner or through a property or method node that closes the front panel. You also need to make sure that you don't have hidden front panels somewhere, but that is simply proper application development. Quit LabVIEW tears down everything wherever it is and is similar to the abort button in the toolbar. While the abort button has some merit during debugging if you get stuck somewhere (and haven't locked yourself out with a modal dialog), the Quit LabVIEW node is always a runtime operation, and designing the application to simply close all front panels when it should quit is much cleaner; it allows the various VIs, including daemons running as hidden top-level VIs, to properly shut down and release whatever resources they may have allocated. In the development environment Quit LabVIEW is a terrible thing! It will shut down LabVIEW itself, which is never what I would want during development.
-
1) is very difficult to maintain, as LabVIEW does all the memory management automatically behind the scenes. In theory data is often reused, but in practice LabVIEW tends to hold onto data if a wire doesn't get modified. This means you should definitely overwrite any wire that contains a password as soon as that password is no longer required. That would have to be a function that sets the string contents to all spaces or similar, by checking the current string length and overwriting its contents, and this VI needs to be designed so that it really operates in place: it should have a string input and output terminal, and the string should be wired through in the top-level diagram, so no terminals inside a case structure or similar. All the VIs that operate on the password string, for instance to calculate a hash, should be designed the same way: with a password input and output, both controls placed in the top-level diagram and wired through any case and other structures that may be there. If you use loops, wire it through a shift register just to be sure.

2) is impossible in pure LabVIEW, as at least the UI used to enter the password will always contain that password. The Password display option hides the password but doesn't change the string content itself. So if you have access to VI Server AND know the VI name of the password UI AND the name of the control AND VI Server is configured to allow access to this VI, you can get at the clear string the moment it gets entered. Installing a keylogger seems like a much simpler and more universal attack, though. The key to solving 2) is controlling what, if anything, gets served by VI Server. A good approach is to name all VIs that are supposed to be accessible through VI Server with a specific prefix and then set up VI Server to only allow access to VIs with that prefix.

Of course this means that your application INI file needs to be secure, but hey, if that isn't the case you have many more serious troubles already. NOTE: An interesting tidbit: try making a string control display as password, then enter a string, select it, and do a Ctrl-C. Paste it someplace else! No joy, at least in more recent LabVIEW versions (checked in 2010).
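The "overwrite the wire as soon as the password is no longer needed" idea can be sketched outside LabVIEW too. A minimal, hypothetical Python illustration, using a mutable bytearray so the buffer can actually be scrubbed in place (Python strings, like LabVIEW strings on an unmodified wire, cannot be altered where they sit):

```python
# Hypothetical sketch: scrub a password buffer in place once it has been
# used, analogous to overwriting the string wire described above.

def wipe(buf: bytearray) -> None:
    """Overwrite every byte of the buffer with spaces, in place."""
    for i in range(len(buf)):
        buf[i] = ord(' ')

password = bytearray(b"s3cr3t")
# ... use the password here, e.g. feed it to a hash function ...
wipe(password)
# Same length as before, but the secret content is gone.
```

Note the wipe keeps the original buffer object alive and merely destroys its contents; replacing the variable with a new string would leave the old secret in memory, which is exactly the pitfall the post warns about.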
-
I didn't know either until I was going to look for the link to add in my post. My experience is with the older RTAD too and it worked fine.
-
That's only a feasible option if you have full control over the fonts used. If you plan to distribute the app to other computers, even if you are able to completely control their configuration, down to which fonts get installed and which fonts LabVIEW defaults to, it's definitely a lot more hassle than you ever care to have.
-
Passing data between languages
Rolf Kalbermatter replied to Mark Yedinak's topic in Application Design & Architecture
You could also use an array of bytes to encode binary data, although it is a little more verbose than a base64 scheme stored as a string. Unfortunately JSON does not seem to provide a standard for encoding numerics in a base other than decimal. Or, since a JSON string is actually 16-bit Unicode, you could encode it as a string of "\uBEAF\u1234\u.....". Still more verbose than base64.
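The verbosity trade-off between the two options is easy to demonstrate. A small Python sketch (illustrative only; the choice of payload is arbitrary) comparing a JSON array of byte values against base64 in a JSON string:

```python
import base64
import json

data = bytes(range(16))  # some arbitrary binary payload

# Option 1: a JSON array of byte values -- portable but verbose.
as_array = json.dumps(list(data))

# Option 2: base64 stored in a JSON string -- considerably more compact.
as_base64 = json.dumps(base64.b64encode(data).decode('ascii'))

# Both round-trip losslessly, but the array form is several times longer.
assert bytes(json.loads(as_array)) == data
assert base64.b64decode(json.loads(as_base64)) == data
```

For this 16-byte payload the array form is more than twice the length of the base64 string, and the gap widens with larger payloads, since each byte costs up to four characters in the array form versus about 1.33 characters in base64.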
Well, as long as the API call doesn't run into an error, there is in fact not much that can go wrong in an application, depending on how you react to the error output of that VI, of course. But likely you haven't really done anything with that. And the chances that GlobalMemoryStatusEx() encounters an error are not that big; if it does, you are likely already in more serious trouble, and worrying about the GlobalMemoryStatusEx() return value won't make much of a difference.
-
NI often seems to be a bit conservative in their estimated shipping dates. I remember my last cDAQ project where I needed 4 cDAQ-9181 units and some modules. The cDAQ-9181 was quoted with a 10-day delivery time on the website; when I ordered, I got a call that they had problems getting some necessary parts for a production run and estimated a shipping time of about 2 months. You can understand my surprise when I got the product about 2 weeks later, and it fully worked!
-
It's all doable if you have access to the host itself. There is a Real-Time Deployment Library that allows you to build code into the host application that can update, reset, and restart a real-time controller from the host application at runtime. No need to have access to the cRIO over the internet or to a LabVIEW development system on the host for upgrading the real-time part.
-
MSDN only says that some functions might do that, not that it is how it should be done. Officially it is an error to do that, yet Microsoft often tries to accommodate even misbehaving applications for compatibility's sake. In this case it is of course always safer to call GetLastError() immediately after a function indicated an error (and that function's contract says it returns an error status in GetLastError()). Assuming, however, that GetLastError() returning non-null means the previous API went wrong is ALWAYS wrong, since most APIs behave as MS meant them to and do not overwrite the GetLastError() status when they succeed. You also have to take LabVIEW into account here. A typical Windows API call is done from a C program, and the C programmer controls thread execution completely and knows exactly what was called between any API call and the possible GetLastError() call. As long as he doesn't explicitly pass control to a different thread between the API call and the GetLastError() call (and which C programmer would do such a stupid thing, since multi-threaded programming takes a lot of work and sweat and doesn't happen accidentally?), he is sure that both calls execute in the same thread, and he knows exactly what other functions he may have called in between on that thread. In LabVIEW the only way to have a similarly sure way of knowing is to use subroutines. If two CLNs are set to execute in the UI thread, they will execute in the same thread but have to share that thread with a lot of other things in LabVIEW, especially UI execution. So there could be 200 other API calls in between, done by LabVIEW, that all might have altered GetLastError(). If the CLNs are set to execute in any thread, they can and often will execute in different threads, so GetLastError() won't necessarily see the error code that the API may have set, since Windows goes through a lot of trouble to make the GetLastError() value thread-global rather than process-global.
In fact, in Windows every thread has its own GetLastError() variable. The only way to guarantee that GetLastError() will see the error set by the previous API is to use a subroutine, since LabVIEW guarantees that the entire subroutine call will execute in the same thread, without any interruption other than what the OS might do to preempt a thread for its normal multi-threading operation. Another point is of course that it's a rather brain-dead idea to have an API return a boolean status to indicate one should call GetLastError() for more information, since GetLastError() also only returns a DWORD value, which would just as well fit into the BOOL returned by the API. It maybe made a little sense in Win 3.1, where BOOL and DWORD were not the same size, but it could have been deprecated for any API introduced in Win95 or later. In fact the underlying NT kernel returns NTSTATUS values throughout, and the kernel32 and user32 APIs convert them into the GetLastError() value, since kernel32 and user32 have really only been thin wrappers around ntdll.dll and similar kernel APIs since Windows NT 4.
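The per-thread last-error slot described above can be illustrated with a rough analogy (this is not the Win32 API itself, just thread-local storage behaving the same way): each thread sees only the error value set in its own slot, which is exactly why a GetLastError() call scheduled into a different thread sees nothing.

```python
import threading

# Analogy only: a per-thread "last error" slot, like Windows keeps for
# GetLastError(). A value set in one thread is invisible to another.
_last_error = threading.local()

def set_last_error(code):
    _last_error.code = code

def get_last_error():
    return getattr(_last_error, 'code', 0)  # 0 when never set in this thread

set_last_error(5)  # pretend an API call failed in the main thread

seen_in_worker = []
def worker():
    # A different thread reads its own (empty) slot, not the main thread's.
    seen_in_worker.append(get_last_error())

t = threading.Thread(target=worker)
t.start()
t.join()
```

The main thread still reads 5 afterwards, while the worker saw 0: the same mismatch you get in LabVIEW when the CLN that fails and the CLN that calls GetLastError() land in different threads.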
-
Have you considered using the cDAQ-9188 instead? True, it's about double the price of the cDAQ-9174, but it is shipping, and I would expect the cDAQ-9184 to be about halfway in between. The cRIO-9075 would be a little cheaper, but you don't use DAQmx to access it; instead you place a real-time control program on its integrated controller and transfer the data from there.
-
One or two nitpicks. GetLastError() queries a thread-global variable in kernel32.dll and therefore can, and usually only does, return the right answer if called in the same thread as the function that caused the error. This might seem to be guaranteed by running the actual function call and GetLastError() in the UI thread, BUT unfortunately it doesn't work like that. LabVIEW is free to schedule other UI calls between the two CLNs, such as drawing something on the front panel, and many of those functions can also set the last error, so the actual error caused by the first CLN might already be overwritten when this VI tries to read it. The solution is to give the VI subroutine priority, which tells LabVIEW to execute the entire VI in the same thread without scheduling anything in between, but that also requires making the CLNs run in any thread, as subroutine VIs cannot contain code that runs asynchronously, such as CLNs set to run in the UI thread. Also, GetErrorStatus needs to be changed to a subroutine as well, since a subroutine can only contain subroutine VIs. Last but not least, GlobalMemoryStatusEx() returns a status of FALSE (0) on error and TRUE (<>0) on success. GetLastError() and co. should only be executed when GlobalMemoryStatusEx() indicated a failure, as Windows functions are not supposed to change the last error value on success at all, and you might otherwise retrieve a last error value from a previous API call somewhere in LabVIEW. Get Win32 Error Message.vi GlobalMemoryStatusEx.vi
-
(Un?)intended feature of List Directory Recursive
Rolf Kalbermatter replied to GregSands's topic in OpenG General Discussions
I think it is intended, by precedence. In fact I'm totally surprised that List Directory also filters directories based on the pattern. So what do you get if you want to list *.txt files? No directories at all? I had thought that in older versions of LabVIEW you got all the directories anyhow, but at least as far back as 7.0 List Directory already filtered directories as well. Looking at List Directory Recursive, I see that the directory enumeration is done separately, without any pattern, to indeed get around this "limitation" of the LabVIEW-native List Folder node, so it is clearly an intentional decision. I guess you could add an optional boolean defaulting to false, and in the true case just use the "directory names" output of the first Get Files by pattern function, skipping the "Get dirs" function. But changing the default is not really an option, since it could and likely would break quite a few OpenG tools such as the OpenG Package Builder, Commander, and its descendant, the VIPM.
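The behavior being discussed, recursing into every directory while applying the pattern only to file names, can be sketched in a few lines of Python (a hypothetical illustration of the semantics, not the OpenG implementation):

```python
import fnmatch
import os
import tempfile

def list_recursive(top, pattern):
    """Recurse into ALL directories; filter only file names by the pattern."""
    matches = []
    for root, dirs, files in os.walk(top):
        matches += [os.path.join(root, f)
                    for f in files if fnmatch.fnmatch(f, pattern)]
    return matches

# Tiny demonstration tree: the matching file sits inside a directory
# whose own name does not match the pattern.
top = tempfile.mkdtemp()
os.mkdir(os.path.join(top, 'data'))
open(os.path.join(top, 'data', 'notes.txt'), 'w').close()
open(os.path.join(top, 'readme.md'), 'w').close()

found = list_recursive(top, '*.txt')
```

Because `os.walk` enumerates directories unconditionally, `notes.txt` is found even though `data` does not match `*.txt`; filtering directories by the pattern, as List Directory does, would have pruned the whole subtree.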
Relying on online services is a tricky thing to do in a product. Those services go away, change their API contract, and whatnot. Besides, there are many people behind a firewall who don't have the kind of access to certain things that non-corporate users take totally for granted nowadays. It may seem a very nice addition to the snippet function, yet the inherent implications are so far reaching that I do not see NI adding this anytime soon. I think they could make it a little easier along the lines of what the CCT already does, but going further is a sure way of creating a maintenance nightmare. And if anything is worse for a product idea than huge costs, it's the prospect of adding maintenance effort in the future rather than eliminating it.
-
Passing data between languages
Rolf Kalbermatter replied to Mark Yedinak's topic in Application Design & Architecture
Actually that remark was specifically targeted at accessing variant data directly from the C side, e.g., allowing Variants to be passed to the LuaVIEW DLL and doing the right thing in that DLL. The C interface to Variants, while it exists and gets used in some small measure by DAQmx and some other NI drivers, is undocumented so far. I think the flattened format of a Variant would look mostly similar to the flattened format of other data in LabVIEW, with the Variant properties added in. I also believe the LabVIEW documentation even documents that format to some extent, but it's been a while since I looked at it. In fact the OpenG LVData package sort of implements that approach. But flattening a Variant to pass it to LuaVIEW, while possible, is very suboptimal, and I don't like the idea of implementing it that way.
-
LuaVIEW and external DLL libraries
Rolf Kalbermatter replied to TimVargo's topic in Calling External Code
Where did you try to put the DLL in the executable, and what cpath did you use? It's been a while since I dealt with this issue for an application, so I'm not sure about the details at the moment. I do remember that it didn't work properly without the compat-5.1 fix that was floating around at that time. Being busy going through all the tests for the new beta package makes this a bit harder too, but I will look into it as soon as the beta is released. Could you maybe provide a small example project that shows the problem? That would certainly speed up resolving the issue a lot.
-
I mean that a LabVIEW Vision Development Module for mobile platforms will never exist. The targets are simply not suitable for that, and it would be much too troublesome to even consider porting the very non-trivial C++ code in the VDM's shared libraries to them. NI hasn't even ported it back to MacOS, which is the platform it was originally developed on before they bought it from Graftek. Your best bet is to interface to whatever API is present in Windows Mobile to do what you need, and failing that, to find a specific DLL library that works on that platform. Then create a corresponding Windows stub DLL so you can deploy your VIs, although being able to test it on the host too may be a bit more trouble.
-
And I bet my hat that there never will be!
-
Anybody out there know the status of LuaVIEW?
Rolf Kalbermatter replied to Mark Smith's topic in LabVIEW General
Hello to everybody interested in LuaVIEW. I'm in the final stages of testing and finalizing a package for the Beta of LuaVIEW 2.0B1. This new release has a number of changes from the previous release, but efforts have been made to keep it as compatible as possible with the last LuaVIEW release. The following characteristics apply to this release:
- LabVIEW 7.1 or newer
- Uses Lua 5.1.5 as Lua engine (LuaVIEW 1.2 used Lua 5.0.3)
- The core library has been changed into a DLL/shared library
- Supports LabVIEW for Linux x86, Windows x86 and x64
- Distributed as an OpenG package; can be installed with the OpenG package installer or with VIPM
This Beta is time limited and will stop working after the end of 2012. If no serious problems are found during the Beta test, the 2.0 release version is expected around the end of August. The release version will include runtime support for LabVIEW for Mac OS X and NI real-time systems (cRIO and Pharlap ETS). It will also include a binary module to access NI-VISA directly from within a Lua script, for at least Windows. To receive a download link to the Beta package, please send me a PM and specify which LabVIEW version and OS you plan to use with this Beta. Sincerely
-
Switzerland uses polarised plugs, and there is a convention that the live terminal should be connected to the right pole if the middle earth pole is at the lower end. However, it is specifically forbidden for a device to rely on this fact, so no connection of the left pole to any touchable metal part at all!! Basically the only thing you can say for sure is that three-pole connectors have a known earth connection, and any electrically conductive parts of a device that can get in contact with a human or animal should be connected to that earth. Otherwise you have only two poles and the device needs to be double insulated for safety. Such double-insulated devices should carry the corresponding sign somewhere on the casing: a rectangle inside another rectangle, symbolising the double insulation. The French and Belgian installations seem not to have any preference for which side to connect the live pin, despite the fact that the socket is actually polarised due to the asymmetrical earth pin. Basically, for any appliance that can handle an unpolarised connection, such as used in the German "Schuko" system, it should not matter at all whether the outlet is polarised or not. The opposite is obviously not true, but I would think that any appliance expecting a polarised connection would be a total pita to sell outside of a few very limited markets.
-
And in practice LabVIEW already has the separation of runtime system and IDE; otherwise remote target deployment and debugging, both with RT and FPGA targets as well as from desktop LabVIEW to desktop LabVIEW, would not be possible. But it's not the solution for allowing truly abortable CLNs. You would have to separate the actual CLN context in order to recover from an aborted CLN, and that would mean an isolation of parts of the runtime system inside the runtime system. A pain in the ass to do, and a total performance killer, as you have already alluded to regarding string and array parameters that would trigger copies.
-
Well, yes, they allow you to call another function, or more precisely three: one when the CLN is initialized, one when the VI containing the CLN is aborted, and one when the CLN is uninitialized. Each takes a context parameter that can also be added to the actual function call itself. So in the OnReserve function you create the context with whatever info your function might require; in the function call itself you set up some bookkeeping to be able to signal the thread to stop; and in OnAbort you abort the thread, preferably not by killing the process but by correctly signalling whatever is waiting on some external event. In OnUnreserve you deallocate and clean up whatever has accumulated during the OnReserve, OnAbort, and function calls. Of course if your DLL is buggy and just hangs somewhere, this signalling won't help, but honestly, once you are in there, nothing will really help save killing the process. LabVIEW cannot even begin to guess how to recover properly from such a situation, since it has absolutely no control over the stack during the function call. Any attempt to resume after a forced abort is doomed to cause many nasty side effects, if it doesn't immediately go into pointer nirvana. And no, a DLL interface doesn't specify any particular exception-handling interface at all; exception handling very much depends on the compiler used, since each tends to have its own patent-encumbered exception-handling mechanism. The OnAbort function is responsible for signalling the waiting thread and making sure it cleanly exits back to the LabVIEW diagram with a properly cleaned-up stack and all.
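The reserve/call/abort bookkeeping described above boils down to a generic cooperative-cancellation pattern, sketched here in Python (names like `Worker`, `call`, and `abort` are illustrative stand-ins for the OnReserve, function-call, and OnAbort roles, not a real LabVIEW API):

```python
import threading

class Worker:
    def __init__(self):
        # ~ OnReserve: create the context, including the signalling object.
        self.stop = threading.Event()
        self.finished = []

    def call(self):
        # ~ the actual function call: wait on the event instead of blocking
        # unconditionally, so an abort can wake it up cleanly.
        self.stop.wait(timeout=10.0)
        self.finished.append('returned cleanly')

    def abort(self):
        # ~ OnAbort: signal the waiting thread; never kill the process.
        self.stop.set()

w = Worker()
t = threading.Thread(target=w.call)
t.start()
w.abort()          # the blocked call wakes immediately and exits normally
t.join(timeout=5)
```

The crucial property is that `call` returns through its normal exit path with a valid stack, which is exactly what the OnAbort callback must arrange for the real external code; forcibly terminating the thread instead would leave locks and stack frames in an undefined state.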
-
Actually, since about LabVIEW 8.2 they sort of are, through the badly named callback functions. LabVIEW 7 didn't have that, but CINs had a CINAbort function that could do this, if properly implemented.