Everything posted by Rolf Kalbermatter
-
Just a short heads up. There is going to be a new release of Lua for LabVIEW 2.0 in the next few days. The initial release will be for LabVIEW for Windows 32-bit and 64-bit only. Linux, MacOS X and NI realtime target support will follow at the beginning of next year. Currently I'm testing the software and cleaning up the documentation for it. Keep an eye on http://www.luaforlabview.com for more news about this.
-
.Net is in some ways better than ActiveX in the areas Shaun mentions. ActiveX is an extension of OLE and COM, which have their roots in the old Windows 3.x days. Back then, preemptive multitasking was something reserved for high-end Unix workstations, and PCs had to live with cooperative multitasking. So many things in Windows 3.1, OLE and COM assumed a single-threaded environment, or at best what Microsoft called apartment threading. The latter means that an application can have multiple threads, but any particular object coming from an apartment-threaded component always has to be invoked from the same thread.

LabVIEW, having started on the Mac and then been ported to Windows 3.1, heavily inherited those single-threading issues from both OSes. It was "solved" by having a so-called root loop in LabVIEW that dispatched all OS interactions such as mouse, keyboard and OS events to whatever component in LabVIEW needed them. When LabVIEW got real multithreading support in LabVIEW 5, this root loop was maintained and located in the main thread that the OS starts up when launching LabVIEW. It is also the thread in which all GUI operations are executed. Most ActiveX components never supported anything more than apartment threading, as that kept development of the component simpler. LabVIEW honors that by executing the method and property calls for those ActiveX components from the main thread (usually called the UI thread). That can have certain implications. Out-of-context or remote ActiveX components are thunked by Windows through the OLE RPC layer, and the corresponding message dispatch for this OLE thunking is executed in the Windows message dispatch routine that is called by LabVIEW in its root loop. Any even slight error in the Windows OLE thunking, the ActiveX component, or the LabVIEW root loop in handling the various events properly can lead to a complete lockup of the message dispatch, and with that the root loop of LabVIEW, and absolutely nothing works anymore. Theoretically other threads in LabVIEW can continue to run, and they actually do, but without keyboard, mouse and GUI interaction an application is considered pretty dead by most users.

.Net is less susceptible to such problems, but not entirely free of them, as it still inherits various technologies from COM and OLE deep down in its belly. My personal issue with both is that they involve a very complex infrastructure in addition to the LabVIEW runtime that:
1) has to be loaded on application startup, delaying the startup even more
2) while very easy to use when it works, is almost impossible to understand when things go wrong
3) being a Microsoft technology, has a big chance of being obsoleted or discontinued when Microsoft tries to embrace the next hype that hits the road (DDE, while still present, is dead; OLE/COM was superseded by ActiveX; ActiveX is now highly discouraged in favor of .Net; and Silverlight has been axed already)
Betting on those technologies has had a very good chance of being sidetracked so far, as NI had to find out several times, most recently with Silverlight.
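To make the apartment threading constraint a bit more concrete, here is a minimal Win32/C++ sketch (purely illustrative, not anything from LabVIEW's actual implementation) of what a single-threaded apartment host has to do: enter the apartment on one thread and keep pumping messages on that same thread, because every marshalled call into an apartment-threaded object arrives as a window message.

#include <windows.h>
#include <objbase.h>

// Illustrative STA worker thread: all apartment-threaded COM/ActiveX objects
// created here must be called from this thread only, and the message pump
// below is what actually dispatches marshalled calls into those objects.
DWORD WINAPI StaThread(LPVOID)
{
    HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    if (FAILED(hr))
        return 1;

    // ... create the ActiveX/COM object here with CoCreateInstance() ...

    // If this loop ever stalls, every marshalled call into the object stalls
    // with it -- the same effect as a blocked root loop in LabVIEW.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    CoUninitialize();
    return 0;
}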
-
That's right. Before LabVIEW 5, Booleans were 16-bit integers and Boolean arrays were packed into 16-bit integers too. That had, however, several performance penalties in several places, and was anything but standard compared to anything else except some MacOS precedents, so it was dropped in favor of a more standard implementation with a byte for each boolean. The reality is that there is no such thing as a perfect implementation for every possible use case. The packed array solution you assumed LabVIEW to use had severe performance limitations in LabVIEW 3 and was therefore dropped in favor of a more common approach that consumed more memory when boolean arrays were involved, but made conversion between boolean arrays and other formats simpler and generally more performant.

But there is a simple rule: if you are concerned about this kind of performance optimization, then don't use boolean arrays at all! They involve memory manager operations all over the place, and those are magnitudes slower than the few hundred CPU cycles with which you can do just about any boolean operation you might ever dream up. For such performance optimization you need to look at properly sized integers and do boolean arithmetic on them. And if your "boolean array" gets over 64 bits you should definitely look at your algorithm. Most likely you have chosen the easy path of working with boolean arrays in order to not have to think about a proper algorithm implementation, but if you go that path, worrying about sub-microsecond optimizations is definitely a lost battle already.

One of the worst performance killers with the packed boolean array implementation in LabVIEW 3 was autoindexing. Suddenly a lot of register shifts and boolean masking had to be done on every autoindex terminal for a boolean array. That made such loops magnitudes slower than a simple address increment when using byte arrays for boolean arrays.
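To illustrate the point about integers instead of boolean arrays, here is a small C++ sketch (my own example, not LabVIEW code) of treating a single 64-bit integer as a packed set of booleans:

#include <stdint.h>

// Treat one 64-bit integer as a packed array of 64 booleans.
inline void set_flag(uint64_t &flags, unsigned bit, bool value)
{
    if (value)
        flags |= (1ULL << bit);      // set the bit
    else
        flags &= ~(1ULL << bit);     // clear the bit
}

inline bool get_flag(uint64_t flags, unsigned bit)
{
    return (flags >> bit) & 1ULL;    // test the bit
}

// ANDing, ORing or inverting all 64 "booleans" at once is then a single
// CPU instruction, with no memory manager involvement at all:
inline uint64_t and_all(uint64_t a, uint64_t b) { return a & b; }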
-
Running LabVIEW executable as a Windows Service
Rolf Kalbermatter replied to viSci's topic in LabVIEW General
srvany is not the same as sc.exe or srvinstw.exe. The first is a sort of wrapper that allows you to run a standard executable as a service and at least respond properly to service control requests (although things like stopping the service will really more or less simply kill the process, so it isn't a very neat solution, but it works for many simple things). sc.exe is THE standard Windows command line tool for the service control manager, and has been for almost as long as Windows services have existed. The interface to the service control manager is really located in advapi32.dll and can also be called directly from applications (but many functions will require elevated rights to work successfully). Not sure about srvinstw.exe, but the standard way of interacting with the service control manager through a GUI is the MMC snap-in for services, which you can reach through Control Panel->Administrative Tools. While srvany works reasonably well for simple tasks, for more complicated services it is really better to go through the trouble of integrating a real interface to the service control manager in an application meant to be executed as a service.
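For completeness, a hedged C++ sketch of talking to the service control manager directly through the advapi32 API (the service name here is just a placeholder); this is essentially what sc.exe does for you:

#include <windows.h>
#include <stdio.h>
// link with advapi32.lib

int main(void)
{
    // Connect to the local service control manager database.
    SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
    if (!scm)
        return 1;

    // "MyLabVIEWService" is a placeholder name for this sketch.
    SC_HANDLE svc = OpenService(scm, TEXT("MyLabVIEWService"), SERVICE_QUERY_STATUS);
    if (svc)
    {
        SERVICE_STATUS status;
        if (QueryServiceStatus(svc, &status))
            printf("Current state: %lu\n", status.dwCurrentState);
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return 0;
}

Querying a status like this usually works for a normal user; most write operations (start, stop, install) will require elevated rights, as mentioned above.
-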
I second everything Matt said here.
-
Not really removed; LabVIEW 2013 is the first version to officially support any of the Linux RT targets. So it was simply never added.
-
Yeah, after rereading it a few more times I got the feeling that something like that had happened. I think it's not worth the effort. Things are pretty clear now, and substantially editing posts after the fact is something that is generally considered neither helpful nor correct. We all sometimes write things that turn out later to be badly written. For myself, I usually limit editing of posts to fixing typos and adding the occasional extra piece of information that seems useful.
-
If you look at the link from Fab, you can see that you should probably use LocalAppDataFolder instead if you want the non-roaming version of it.
-
And here you walk into the mist! LabVIEW is written in C(++) for the most part, but it doesn't, and has never, created C code for the normal targets you and I are using it with. Before LabVIEW 2010 or so, it translated the code first into a directed graph and then from there directly into machine code. Since then it has a two-layer approach: first it translates the diagram into DFIR (Dataflow Intermediate Representation), which is a more formalized version of a directed graph representation with additional features. Most of the algorithm optimization, including things like dead code elimination, constant folding and many more, is done on this level. From there the data is passed to the open source LLVM compiler engine, which then generates the actual machine code representation. At no point is any intermediate C code involved, as C code is notoriously inadequate for representing complex relationships in an easy to handle way.

There is a C generator in LabVIEW that can translate a LabVIEW VI into some sort of C++ code. It was used for some of the early embedded toolkits such as the AD Blackfin Toolkit, Windows Mobile Toolkit, and ARM Toolkit. But the generated code is pretty unreadable, and the solution proved very hard to support in the long run. You can still buy the C Generator add-on from NI, which gives you a license to use that generator, but its price is pretty exorbitant and active support from NI is minimal. Except under the hood for the Touch Panel Module in combination with an embedded Visual C Express installation, it is not used in any currently available product from NI AFAIK.
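Purely as an illustration of the kind of transformation those optimization passes perform (this is not DFIR or LLVM code, just an equivalent C++ example):

// The optimizer sees that 'unused' is never read (dead code elimination)
// and that 'scale' is a compile-time constant (constant folding).
int transformed_example(int x)
{
    int unused = x * 12;   // dead code: eliminated entirely
    int scale  = 4 * 256;  // constant expression: folded to 1024
    return x * scale;      // effectively compiled as: return x * 1024;
}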
-
Unfortunately I haven't found much time to work on this in the meantime. However, while the ping functionality of this library was a geeky idea that I pursued for the sake of seeing how hard it would be (and it turned out to be pretty easy, given the universal low-level API of this library), I don't think it has much merit in the context of this library. In order to implement ping directly on the socket library interface, one is required to use raw sockets, which are a privileged resource that only processes with elevated rights can create. I'm not seeing how it would be useful to start a process as admin just to be able to ping a device. And someone will probably argue that the ping utility on Windows or Linux doesn't need admin privileges. That is right, because under Linux that is solved by giving the ping utility special rights for accessing raw sockets during installation, and under Windows through a special ping DLL that interfaces to a privileged kernel driver that implements the ping functionality. At least under Windows this library could theoretically interface to that same DLL, but its API doesn't really fit easily into this library, and I didn't feel like creating a special-purpose LabVIEW API that breaks the rest of the library concept and is only possible under Windows anyhow.
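For reference, a hedged Windows-only C++ sketch, assuming the DLL in question is the ICMP helper API (IcmpCreateFile/IcmpSendEcho in iphlpapi.dll), which delegates the privileged part to a kernel driver so the caller needs no raw-socket rights:

#include <winsock2.h>
#include <iphlpapi.h>
#include <icmpapi.h>
#include <stdio.h>
// link with iphlpapi.lib

int main(void)
{
    HANDLE icmp = IcmpCreateFile();
    if (icmp == INVALID_HANDLE_VALUE)
        return 1;

    char sendData[] = "ping";
    // Reply buffer must hold at least one ICMP_ECHO_REPLY plus the echoed data.
    char replyBuffer[sizeof(ICMP_ECHO_REPLY) + sizeof(sendData) + 8];
    IPAddr target = 0x0100007F;   // 127.0.0.1 in network byte order, just for illustration

    DWORD count = IcmpSendEcho(icmp, target, sendData, (WORD)sizeof(sendData),
                               NULL, replyBuffer, sizeof(replyBuffer), 1000);
    if (count > 0)
    {
        PICMP_ECHO_REPLY reply = (PICMP_ECHO_REPLY)replyBuffer;
        printf("Reply from target, RTT %lu ms\n", reply->RoundTripTime);
    }
    IcmpCloseHandle(icmp);
    return 0;
}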
-
Why do graphs have autoscaling on their axes? The IMAQ Vision control has the same for its range. Certainly not a bug, but sometimes not what you expect. If you show the Image Information, you also see the range that the image currently uses, and you can change it directly there or through properties.
-
I would guess this is only true if you use compiled code separated from the VI. Otherwise the corresponding binary compiled code resource in the VI will be very much different and will definitely have some indication of bitness. Still, for the rest of the VI it most likely doesn't matter at all, especially for an empty VI. There might be certain things on the diagram that change, but I would guess almost all of it is code generation related, so it would actually only affect the VI itself if you don't use separate compiled code.
-
I'm not really understanding what you are saying. For one, the case that UI controls configured to coerce their values would do the coercion also when the VI is used as a subVI: that was true in LabVIEW 3.0 and maybe 4.0, but was removed after that because it was indeed considered a bad idea. I should say that quite a few people got upset about this back then, but you seem to agree that it is not desirable. As to using the fact that the Value Change event does not get triggered to alert a user that there was a coercion??? Sorry, but I'm not following you here at all. How can you use an event that does NOT occur to trigger any action?? That sounds so involved and out of our space-time continuum that my limited brain capacity can't comprehend it. But you can't use the fact that the value is the same to mean that a limit has been reached. The user is free to go to the control, type in exactly the same value as is already shown, hit enter or click somewhere else in the panel, and the Value Change event will be triggered. Value Change simply doesn't mean that there is a different value, only that the user did something with the control to set it to the same or a different value. Sounds involved, I know, but there have been many discussions, both in the LabVIEW development team as well as in many other companies that design UI widget libraries, and they generally all agree that you want to trigger on user interaction with the control for maximum functionality, and leave the details of whether an equal value should mean something or not to the actual implementor. The name for the event may indeed be somewhat misleading here. In LabWindows/CVI, NI used the VALUE_COMMIT term for the same event. However, I suppose the word "commit" was considered too technical for use in LabVIEW.
-
I'm not sure I would fully agree here. Yes, security is a problem, since you cannot get at the underlying socket in a way that would allow you to inject OpenSSL or similar into the socket, for instance. So TCP/IP using the LabVIEW primitives is limited to unencrypted communication. Performance-wise they aren't that bad. There is some overhead in the built-in data buffering that costs some performance, but it isn't that bad. The only real limit is the synchronous character towards the application, which makes some high-throughput applications more or less impossible. But those are typically protocols that are rather complicated (video streaming, VoIP, etc.) and you do not want to reimplement them on top of the LabVIEW primitives, but rather import an existing external library for that anyway. A more asynchronous API would also be pretty hard to use for most users. Together with the fact that it is really only necessary for rather complex protocols, I wouldn't see any compelling reason to spend too much time on that. I worked through all this pretty extensively when trying to work on this library. Unfortunately the effort to invest in such a project is huge and the immediate needs for it were somewhat limited. Shaun seems to be working on something similar at the moment, but making the scope of it possibly even bigger. I know that he prefers to solve as much as possible in LabVIEW itself rather than creating an intermediate wrapper shared library. One thing that would concern me here is the implementation of the intermediate buffering in LabVIEW itself. I'm not sure you can get performance there similar to doing the same in C, even when making heavy use of the In-Place structure in LabVIEW.
-
Cool stuff!
-
Hooovahh mentioned it on the side after ranting a bit about how bad the NI INI VIs were, but the Variant Config VIs have a "floating point format" input! Use that if you don't want the library to write floating point values in the default %.6f format. You could use for instance %.7e for scientific format with 7 digits of precision, or %.7g for scientific format with exponents of a multiple of 3.
-
I wonder if this is very useful. The Berkeley TCP/IP socket library, which is used on almost all Unix systems including Linux, and on which the Winsock implementation is based too, has various configurable tuning parameters. Among them are things like the number of outstanding acknowledge packets as well as the maximum buffer size per socket that can be used before the socket library simply blocks any more incoming data. The cRIO socket library (well, at least for the newer NI Linux systems; the VxWorks and Pharlap libraries may be privately baked libraries that could behave less robustly), being in fact just another Linux variant, certainly uses them too. Your mega-jumbo data packet will simply block on the sender side (and fill your send buffer) and more likely cause a DOS attack on your own system than on the receiving side. Theoretically you can set your send buffer for the socket to 2^32-1 bytes of course, but that will impact your own system's performance very badly.

So is it useful to add yet another "buffer limit" on the higher protocol layers? Aren't you badly muddying the waters about proper protocol layer responsibilities with such band-aid fixes? Only the final high-level protocol can really make any educated guesses about such limits, and even there it is often hard to do if you want to allow variable-sized message structures. Limiting the message to some 64KB, for instance, wouldn't even necessarily help if you get a client that maliciously attempts to throw thousands of such packets at your application. Only the final upper layer can really take useful action to prepare for such attacks. Anything in between will always be possible to circumvent by better architected attack attempts. In addition, you can't set a socket buffer above 2^16-1 bytes after the connection has been established, as the corresponding windows need to be negotiated during connection establishment. Since you don't get at the refnum in LabVIEW before the socket has been connected, this is therefore not possible. You would have to create your DOS code in C or similar to be able to configure a send buffer above 2^16-1 bytes on the unconnected socket before calling the connect() function.
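To illustrate the last point, a hedged C++/Winsock sketch of what such sender code would have to do: enlarge the send buffer on the still unconnected socket, something the LabVIEW primitives never give you access to (the function name and buffer size are purely illustrative):

#include <winsock2.h>
// link with ws2_32.lib; assumes WSAStartup() has already been called

SOCKET make_big_sender(const struct sockaddr_in *addr)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
        return INVALID_SOCKET;

    // Request a large send buffer while the socket is still unconnected;
    // once connect() has run, the window has already been negotiated.
    int sndbuf = 4 * 1024 * 1024;   // 4 MB, purely illustrative
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&sndbuf, sizeof(sndbuf));

    if (connect(s, (const struct sockaddr *)addr, sizeof(*addr)) != 0)
    {
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}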
-
Not really like this! My code generally uses a header of a fixed size with more than just a size value, so there is some context that can be verified before interpreting the size value. The header usually includes some protocol identifier, version number and a message identifier before specifying the size of the actual message. If the header doesn't evaluate to a valid message, the connection is closed and, in the client case, restarted. For the server, it simply waits for a reconnection from the client. Of course, if you maliciously create a valid header specifying your ridiculous length value it may still go wrong, but if you execute your client code on your own machine you will probably run into trouble before it hits the TCP Send node. I usually don't go through the trouble of trying to guess whether a length value might be sensible after the header has been determined to be valid. I might consider that in the future, based on the message identifier, but if you have figured out my protocol you may as well find a way to cause a DOS attack anyway. Not all message types can be made fixed size, and imposing an arbitrary limit on such messages may look good today but bite you in the ass tomorrow. And yes, I have used whitelisting on an SMS server implementation in the past. Not really funny if anyone in the world could send SMS messages through your server, where you have to pay for each message.
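As an illustration of such a header (field names and sizes invented for this sketch, not my actual protocol):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct
{
    uint32_t protocolId;   // magic value identifying the protocol
    uint16_t version;      // protocol version
    uint16_t messageId;    // what kind of message follows
    uint32_t payloadSize;  // size of the variable part after the header
} MessageHeader;
#pragma pack(pop)

// The receiver first reads exactly sizeof(MessageHeader) bytes, validates
// protocolId, version and messageId, and only then trusts payloadSize and
// reads that many bytes; anything invalid closes the connection.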
-
They may not be meant to leave your C function, but your question was about whether you should catch them, or any others, or not. As to Structured Exception Handling, that's indeed the official term for the Windows way, although I have seen it used for other forms of exception handling. Windows structured exception handling is part of the Windows kernel and can be used from both C and C++, but Microsoft nowadays recommends using ISO C++ exception handling for C++ code for portability reasons. C++ exceptions, on the other hand, are done in the C++ compiler itself for the most part; they may or may not be based on the Windows SEH. If they are not, you can't really mix and match the two easily. Specifically, this page shows that /EHsc will actually cause problems for the destruction of local objects when an SEH exception is triggered, and that you should probably use /EHa instead to guarantee that local C++ objects are properly deallocated during stack unwinding of the SEH exception. This page shows that you may have to do actual SEH translation in order to get more context from an SEH exception in a C++ exception handler. In general it seems that while C++ can catch SEH exceptions (and SEH translation can be used to pull more detailed information about specific SEH exceptions into your C++ exceptions), the opposite is not true. So if LabVIEW uses SEH around the Call Library Node, which I believe it does, it will not really see (and catch) any C++ exceptions your code throws. It also can't rely on the external code using a specific C++ exception model, since that code may have been compiled with different compilers, including plain C compilers which don't support C++ exception handling at all.

It may even be useful to add __declspec(nothrow) to the declaration of your exported DLL functions to indicate that they do not throw exceptions to the calling code. I'm not really sure if that makes a difference for the code generation of the function itself; it seems to be mostly for the code generation of callers, which can then optimize the calling code to account for the fact that this function will never throw any exceptions. But maybe it will even cause the compiler to generate warnings if it can determine that your code could indeed terminate through uncaught exceptions in this function. If your code is a C++ module (or C but set in the compiler options to compile as C++), the EH options will emit code that enables unwinding the stack, and when you use the /EHa option, also cause object destruction for SEH exceptions. However, if your code isn't really C++ this probably won't make any difference: C structures and variables don't have destructors, so unwinding the stack won't destruct anything on the way, except trying to adjust the stack properly as it walks through the various stack frames.

As to what to catch, I would tend to only catch what my own code can generate, including any library functions used, and leave SEH alone unless I know exactly what I'm doing in a specific place. Generally, catching the low-level SEH exceptions is a rather complicated issue anyhow. Catching things like illegal address accesses, division by zero, and similar for more than a "Sorry, you are hosed" dialog is a rather complicated endeavour. Continuing from there as if nothing has happened is generally not a good idea, and trying to fix after the fact whatever has caused it is most of the time not really possible.
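As a hedged sketch of the consequence for exported DLL functions (the function name and error codes are illustrative): catch every C++ exception inside the function and map it to a return value, so nothing ever has to cross the DLL boundary as an exception for LabVIEW to deal with:

#include <stdint.h>
#include <exception>

// Nothing thrown inside ever leaves this function; the Call Library Node
// only ever sees the returned error code.
extern "C" __declspec(dllexport) __declspec(nothrow)
int32_t MyExportedFunction(int32_t input, int32_t *output)
{
    try
    {
        // ... real work that may throw ...
        *output = input * 2;
        return 0;               // success
    }
    catch (const std::exception &)
    {
        return -1;              // known C++ exception, mapped to an error code
    }
    catch (...)
    {
        return -2;              // anything else; never rethrown to the caller
    }
}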
-
It's not undefined. These exceptions are really caused by hardware interrupts and translated by Windows into its own exception handling. An application can hook into that exception handling by calling Windows API functions. If it doesn't, you get the well-known "Your application has caused a General Protection Fault" or similar dialog, with the option to abort your application or kill it (but not to continue). Whether your C++ exceptions are caught by such an application hook depends entirely on whether your C runtime library actually goes to the extra effort of making its exceptions play nicely with the OS exception mechanism. And no, I wouldn't know if they do, or which might or might not do that.
-
Well, this is mostly guessing, but almost all functions documented in the extcode.h file existed long before LabVIEW was compiled as C++ code. Even then much of it remained C code that just got compiled with the C++ compiler, so it is highly unlikely that any of these low-level managers throw any explicit exceptions of their own. That still leaves, of course, the OS exceptions that are mostly generated directly from the CPU and VMM hardware as interrupts. Windows has its own exception mechanism that predates C++ exceptions by many years. Its implementation requires assembly, and the Windows API for it is not used explicitly by many applications because it is somewhat difficult to handle right. Supposedly your C runtime library with exception handling would intercept those exceptions and integrate them into its own exception handling; how well that really works I wouldn't know. Now, the exception handling in the C runtime is compiler specific (and patent encumbered), so each C runtime implements its own exception handling architecture that is anything but binary compatible. Therefore, if you mix and match different binary object files together, you are pretty lucky if your exceptions don't just crash when crossing those boundaries.

I'm not sure what LabVIEW does around the Call Library Node. Because of the binary incompatibilities between exception handling implementations, and the fact that a C-only interface can't even properly use C++ exceptions in a meaningful way, I'm pretty sure LabVIEW doesn't just add a standard try/catch around the Call Library Node call. That would go completely haywire in most cases. What LabVIEW can do, however, is hook into the Windows exception mechanism. This interface is standardized and therefore doesn't suffer from these compiler difficulties. How much of your C++ exceptions can be caught like this depends entirely on how your C++ runtime library interacts with the Windows exception interface. If it can translate its exceptions from and to this interface whenever control traverses from the Windows API to the C++ runtime, and back when leaving the code module (your DLL), then it will work. Otherwise you get all kinds of messed-up behaviour. Of course, a C++ exception library that couldn't translate those low-level OS exceptions into its own exceptions would be pretty useless, so that is likely covered. Where it gets shaky is with explicit C++ exceptions that are thrown in your code. How they translate back into the standard Windows exception mechanism I have no idea. If they do, it's a marvelous piece of code for sure, one that I would not want to touch for any money in the world. If they don't, well....!!! C++ exceptions are great to use, but become a complete fiasco if you need to write code that spans object modules created with different C compilers, or even just different versions. C++ code in general suffers greatly from this, as ABI specifications, including class memory layouts, are also compiler specific.
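For illustration, a minimal MSVC-specific sketch of the Windows (SEH) side of this; __try/__except works independently of the C++ exception model and can intercept a hardware fault, but without /EHa it will not see anything raised with a C++ throw:

#include <windows.h>

// SEH only: catches the hardware-level access violation in this frame.
int safe_deref(const int *p, int *out)
{
    __try
    {
        *out = *p;              // faults if p is an invalid pointer
        return 0;
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH)
    {
        return -1;              // the access violation was handled here
    }
}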
-
Who would throw them then? When LabVIEW calls your function, the actual thread is really blocked for your function and nothing else will execute in that thread until you return from your function. So I'm not sure what you mean by this. Exception handling is AFAIK thread specific, so other threads in LabVIEW throwing exceptions should not affect your code. Otherwise exception handling would be pretty meaningless in a multithreaded application.
-
Several remarks first:
1) You should put lv_prolog.h and lv_epilog.h includes around the error structure definition to make sure the element alignment is correct (a sketch follows below).
2) You don't show the definition of the WrpExcUser exception class, but if it derives from some other exception class it will catch those too.
3) Your attempt to generalize the code for catching the exception through a function pointer, so you can reuse it in multiple functions, is in principle not bad, but you lose the ability to call functions that take parameters. Not really very interesting for a bigger library. I suppose that is why you made it a template, so you can replace the function pointer with specific definitions for each function, but that tends to get heavy and pretty hard to maintain too.
I'm not sure what your question about default error checking is supposed to mean. As far as external code goes, you as the implementor define what the error checking is and how it should be performed. It's a pipe dream to have template error checking done the same way in all places; reality simply doesn't work that way. Sometimes an error is fatal, sometimes it is temporary, and sometimes it is even expected. Your code has to account for this on a case-by-case basis. As far as calling code from LabVIEW goes, unless you disable the error handling level in the Call Library Node configuration, LabVIEW will wrap the call in an exception handler of its own and return a corresponding error in the error cluster of the Call Library Node. The reported error is not very detailed, as LabVIEW has to use the most generic exception class there is in order to catch all possible exceptions, but it is at least something. So generally, if you don't want to do custom error handling in the external code, you could leave it all to LabVIEW.
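Regarding point 1), a hedged sketch of how the error cluster is typically declared in external code with the cintools alignment headers (type names as in extcode.h; the struct name is just for this example):

#include "extcode.h"

#include "lv_prolog.h"
typedef struct
{
    LVBoolean  status;     // TRUE means an error occurred
    int32      code;       // LabVIEW error code
    LStrHandle source;     // error source/description string
} ErrorCluster;
#include "lv_epilog.h"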
-
Well, generally, if your DLL uses global variables, one of the easier ways to guarantee that it is safe to call this DLL from LabVIEW more than once is to set all the Call Library Nodes calling any function that reads or writes this global to run in the UI thread. However, in this case, since the callback is also called from the internal thread, that is not enough to make it strictly safe. The callback, after all, only makes sense when it is called from another context than your LabVIEW diagram. Even in this trivial example, which isn't really meant to show a real use case but just how to use the PostLVUserEvent() function, the callback is called from a new thread inside the DLL and can therefore access the global variable at the same time as your LabVIEW diagram.

Now, these are all general rules, and the reality is a bit more complicated. In this case, without some alignment pragmas that would put the global variables on an unaligned address, each read of the two global variables inside the callback is really atomic on any modern system. Even if your LabVIEW code calls the initialize function at exactly the same time, the read in the callback will either see the old value or the new one, but never a mix of them. So with careful safeguarding of the order of execution, and by copying the global into a local variable inside the callback first, before checking it to be valid (non-null) and using it, it is maybe not truly thread safe but safe enough in real-world use (a sketch of that pattern follows below). The same goes for the b_ThreadState variable, which is actually used here as protection, and being a single byte is even fully thread safe for a single read. Still, calling ResetLabVIEWInterrupt and SetLabVIEWInterrupt in a non-sequential way (no strict data dependency) without setting the Call Library Nodes to run in the UI thread could cause nasty race conditions. So you could either document that these functions can never be called in parallel, to avoid undefined behaviour, or simply protect them by setting them to run in the UI thread. The second is definitely safer, as some potential LabVIEW users may not even understand what parallel execution means.

The original 8051 was special in that it had only 128 bytes of internal RAM and the lowest bank of it was reserved for the stack. The stack there also grows upwards, while most CPU architectures have a stack that grows downwards. Modern 8051 designs allow 64 KB of RAM or more, and the stack is simply in the lowest area of that RAM, not really in a different sort of memory than the rest of the heap. As to PUSH and POP, those are still the low-level assembly instructions used on most CPUs nowadays. Compiled C code still contains them to push the parameters onto the stack and pull (pop) them from it inside the function.
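A hedged sketch of that copy-first pattern in the callback (the names are illustrative, not the code from the original example, and the user event is assumed to carry an int32):

#include "extcode.h"

static LVUserEventRef gUserEvent = 0;   // 0 used here as "not yet created"

// Called from the DLL's internal thread, not from any LabVIEW diagram.
void MyCallback(int32 value)
{
    // Copy the global once; even if the diagram re-initializes it at the same
    // moment, we see either the old refnum or the new one, never a torn value.
    LVUserEventRef ev = gUserEvent;
    if (ev != 0)
        PostLVUserEvent(ev, &value);
}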