Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Sorry, I had been working on another post and for some reason the link to the clipboard library must have stayed in my clipboard (how is that for the clipboard causing a problem with a link to a post about the clipboard?) despite my different intentions. It should be fixed now.
  2. No! Because ActiveX Controls have traditionally not allowed overlapping at all. Maybe it would be theoretically possible nowadays, but most ActiveX controls simply assume that they are the one and only control drawing to the screen area they have been constrained to and won't even bother otherwise, simply because at least in older ActiveX standards there was no good way to do overlapping. Maybe later versions added support for that, but since ActiveX is already an obsolete technology now that MS has come out with the next Übercool technology called .Net, probably nobody bothered to look into that very much anyhow. The ActiveX container in LabVIEW certainly won't support overlapping controls, so even if ActiveX has provisions for that in its latest released version (which I actually still doubt) and you could find a control that supports it, it would make no difference when that control is used in LabVIEW. I don't know your VI sensor mapping example, but if overlapping controls are used there then they most likely make use of the Picture Control with the 3D OpenGL based drawing functionality. Look in your LabVIEW palette under Graphics & Sound->3D Picture Control. It's not high-performance 3D scene creation and rendering, and not at all a 3D wireframe graph, but it may be able to do what you want.
  3. Well, the OpenG pipe library has a System Exec equivalent that, besides the pipes for stdIO and optionally stdErr, also returns the resulting task ID. And then there is another function which can be used to kill that task. Works like a charm for me for starting up command line tools that have no built-in remote control to quit them. The OpenG Pipe library has not been released as a package but can be found here as a built OGP package.
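     Just to illustrate what that boils down to under the hood on Windows (this is not the OpenG Pipe library's actual code or API, only a rough sketch; the pipe setup for stdIO is omitted):

         /* The process handle returned by CreateProcess plays the role of the
            task ID; TerminateProcess is the forced kill for tools that offer
            no clean way to be told to quit. */
         #include <windows.h>

         HANDLE LaunchTool(const char *cmdLine)
         {
             STARTUPINFOA si = { sizeof(si) };
             PROCESS_INFORMATION pi = { 0 };
             char cmd[1024];

             lstrcpynA(cmd, cmdLine, sizeof(cmd));  /* CreateProcess may modify the buffer */
             if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE,
                                 CREATE_NO_WINDOW, NULL, NULL, &si, &pi))
                 return NULL;
             CloseHandle(pi.hThread);
             return pi.hProcess;                    /* keep this around as the "task ID" */
         }

         void KillTool(HANDLE hProcess)
         {
             TerminateProcess(hProcess, 0);         /* forcibly end the command line tool */
             CloseHandle(hProcess);
         }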
  4. I have to agree with Shaun. While the text may seem to indicate that the driver is MOSTLY thread safe, it isn't completely. The three mentioned calls need to be made from the same thread and the only way to easily guarantee that in LabVIEW is by calling them in the UI thread. And incidentally, I would say those are probably also the only functions where you could possibly gain some performance if you could call them multithreaded. So I would not bother to change the VI library at all.
  5. If the VI already crashes when it is opened then something has been corrupted. It could be one of the subVIs it uses or the actual VI itself. Corruptions, while not very common, do happen, and the only way to deal with them is by backing up your work regularly. If the VI crashes as soon as you start it, then it may be calling some invalid method, or calling into a driver which has gotten corrupted. In the driver case reinstalling the driver software might help.
  6. Mostly C and a little C++. Did some Python and Lua programming in the past. Having gotten an Android mobile device now I'm looking into some Java programming too :-)
  7. For many version-specific things LabVIEW uses a 4-byte integer which encodes the version in nibbles. And newer LabVIEW versions compress almost everything in the VI to save space and load time, but this also makes any internal information very hard to read.
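     For illustration, a small sketch of how such a nibble-encoded version word can be picked apart; the exact field layout used here (BCD major byte, minor and fix nibbles) is an assumption of mine and may not match LabVIEW's real layout:

         #include <stdio.h>
         #include <stdint.h>

         int main(void)
         {
             uint32_t ver = 0x08608002;   /* hypothetical example value */
             /* assumed layout: top byte = BCD major, next nibble = minor, next = fix */
             unsigned major = ((ver >> 28) & 0xF) * 10 + ((ver >> 24) & 0xF);
             unsigned minor = (ver >> 20) & 0xF;
             unsigned fix   = (ver >> 16) & 0xF;
             printf("%u.%u.%u (raw 0x%08X)\n", major, minor, fix, (unsigned)ver);
             return 0;
         }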
  8. Yep, those are the nodes (note: not VIs). I'm aware that they won't help with UI display but only with reading and writing UTF8 files or any other UTF8 data stream in or out of LabVIEW. Display is quite a different beast and I'm sure there are some people in the LabVIEW development department, biting their nails and ripping out their hair, trying to get that working fine.
  9. Not sure why you say that. Just because it is not all obfuscated or even binarised doesn't mean it is a joke. The OGP format was the first format devised, and Jim, myself and I think someone else whose name I can't come up with right now came up with it after I had looked at zlib and the accompanying ZIP library and decided to create a LabVIEW library to deal with ZIP formats. The idea for the spec file format I in fact derived from the old application builder configuration format, and it proved flexible, yet simple enough to deal with the task. Jim came up with the idea to put the spec file inside the archive under a specific name, similar to how some Linux package managers did it back then. The rest was mostly plumbing and making it work, and it is still in principle the same format as in the beginning. The VIPC is just another similar format to track down packages more easily. JKI goes to great lengths to obfuscate the software in VIPM, but they are to be applauded for not having gone down the path of obfuscating the actual file formats.
  10. While the nodes I spoke about probably make calls to the Windows API functions under Windows, they are native nodes (light yellow) and supposedly call the corresponding platform API on other platforms for converting Unicode (UTF8 I believe) to ANSI and vice versa (a rough sketch of such a conversion follows below). The only platforms where I'm pretty sure they either won't even load, or if they do will likely be NOPs, are some of the RT and embedded platforms. Possible fun can arise out of the situation that the Unicode tables used on Windows are not exactly the same as on other platforms, since Windows has slightly diverged from the current Unicode tables. This is mostly apparent in collation, which influences things like the sort order of characters etc., but might not be a problem in the pure conversion. This however makes one more difficulty with full LabVIEW support visible. It's not just about displaying and storing Unicode strings, UTF8 or otherwise, but also about many internal functions such as sort, search etc. which would have to get proper Unicode support too, and because of the differences in Unicode tables they would either end up having slightly different behavior on different platforms, or NI would need to incorporate a full blown Unicode library such as ICU into LabVIEW to make sure all LabVIEW versions behave the same, but that would make them behave differently from the native libraries on some systems.
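     For what it's worth, a conversion like those nodes presumably perform under Windows has to go through UTF-16, since Win32 has no direct UTF-8 to ANSI call. A rough sketch (not the actual node implementation, and error handling omitted):

         #include <windows.h>
         #include <stdlib.h>

         /* Convert a NUL-terminated UTF-8 string to the current ANSI codepage.
            The caller frees the returned buffer. */
         char *Utf8ToAnsi(const char *utf8)
         {
             int wlen = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);
             wchar_t *wide = (wchar_t *)malloc(wlen * sizeof(wchar_t));
             MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, wlen);

             int alen = WideCharToMultiByte(CP_ACP, 0, wide, -1, NULL, 0, NULL, NULL);
             char *ansi = (char *)malloc(alen);
             WideCharToMultiByte(CP_ACP, 0, wide, -1, ansi, alen, NULL, NULL);

             free(wide);
             return ansi;
         }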
  11. Yes, you can't avoid a buffer copy, but I wasn't sure if your buffer copy VIs did just a single buffer copy or if your first VI does some copying too. But the main reason to do it in this way is simply to keep as much of the low level stuff in the DLL as possible, especially since you have that DLL already anyhow. And to be honest, I believe your original solution could have a bug. You pass in a structure that is on the stack. The user event however is processed asynchronously, so at the time the event is read the stack location may long since have been invalidated and possibly overwritten with other stuff. Bad, bad! Especially since both pieces of information relate to a memory buffer. And declaring the variable static is not a solution either, since a new event could be generated before the previous one is processed. Note: it may work if PostLVUserEvent makes a copy of the data, leaving the ownership of the data object to its caller. It could do that, since the event refnum internally stores the typecode of the data object associated with it, but I'm not 100% sure it does so.
  12. If you changed the user event to be a string instead of the cluster and changed the code in the callback like this:

          DLL void iCubeCallback(char *pBuffer, int lBufferSize, LVUserEventRef *eventRef)
          {
              /* allocate a LabVIEW string handle; for a uB array NumericArrayResize
                 also accounts for the int32 length prefix in the handle */
              LStrHandle hBuf = NULL;
              MgErr err = NumericArrayResize(uB, 1, (UHandle *)&hBuf, lBufferSize);
              if (noErr == err)
              {
                  MoveBlock(pBuffer, LStrBuf(*hBuf), lBufferSize);
                  LStrLen(*hBuf) = lBufferSize;
                  /* PostLVUserEvent expects a pointer to the event data, here the string handle */
                  PostLVUserEvent(*eventRef, (void *)&hBuf);
              }
          }

      This way you receive the data string directly in the event structure and don't need to invoke buffer copy functions there. No, not really. Supposedly there would probably be some functions in the latest LabVIEW versions, but without at least some form of header file available it's simply too much trial and crash.
  13. No, you need LabVIEW 2010 for this! But I wouldn't expect too much of this. Creating a DLL in LabVIEW and using it from the C environment is certainly going to be an easier solution.
  14. While this would likely work, I consider it a rather complicated solution which makes the whole deployment of the library more complicated. Writing a wrapper interface for the DLL (to add the extra error cluster parameter to all functions) is indeed not a trivial exercise (mainly in terms of time, not so much in terms of complication). Especially with open source it also poses the question of whether you want to modify the original sources or add a separate wrapper. However, this is work that has to be done once at creation time and won't bother the end user with complicated setups and non-intuitive usage limitations. If it is only a solution for a build-once-and-forget application then it's feasible; otherwise I would definitely try to invest a little more time upfront in order to make it more usable in the long term.
  15. I did test it but didn't make extensive tests. It seemed to work for any combination of OpenG compress -> ALZIP / 7-ZIP uncompress and vice versa, as well as compress and uncompress in OpenG. This was enough for me to provide the parameter on the VI interface. I can't exclude the possibility that the generated password might not be compatible with certain other ZIP utilities, and I also stopped using ALZIP a year or so ago, since they were getting worse and worse with advertising and eventually tried to push onto users an upgrade to a once free version that was now only a trial version and had to be purchased to continue using it.
  16. Maybe a restart was necessary. The LabVIEW menu palettes are a bit tricky, and while VIPM should attempt to synchronize those palettes after an install, something may have prevented the synchronization from working. A restart of LabVIEW will solve that.
  17. This won't work! While the Ledato board runs Debian, which is a Linux variant, it also uses an ARM processor (which is the main reason it can be so cheap). LabVIEW for Linux is exclusively x86 (32-bit). So the only way to get LabVIEW working on there is to buy the LabVIEW Embedded for ARM Toolkit and create the corresponding interface to integrate the Ledato tool chain. An interesting project but not a trivial one, and certainly not cheap when you look at the price of the LabVIEW Embedded Toolkit.
  18. There used to be a library somewhere on the dark side that contained them. It was very much like my unicode.llb that I posted years ago, which called the Windows WideCharToMultiByte and friends APIs to do the conversion, but it also had extra VIs that were using those nodes. And for some reason there was no password, even though they usually protect such undocumented functions strictly. I'll try to see if I can find something either on the fora or somewhere on my HD. Otherwise, using scripting, possibly together with one of the secret INI keys, allows one to create LabVIEW nodes too, and these two nodes show up in the list of nodes as well.
  19. To my knowledge there is no documented C API to the LabVIEW queues. However, there is the function PostLVUserEvent() that basically sends a user event back to LabVIEW. This can then be caught with an event structure. Try searching for PostLVUserEvent both here and on the NI LabVIEW forum and you should get a whole bunch of discussions and even an example or two from which you can pick things up.
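     A minimal sketch of the C side (assuming the user event was created in LabVIEW for a simple int32 and its refnum is passed into the DLL; extcode.h comes from LabVIEW's cintools directory):

         #include "extcode.h"

         /* PostLVUserEvent copies the data it is handed, so a local variable
            is fine here; LabVIEW catches the event in an event structure. */
         MgErr FireValueEvent(LVUserEventRef *eventRef, int32 value)
         {
             return PostLVUserEvent(*eventRef, (void *)&value);
         }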
  20. It's likely to get enabled when Microsoft ports their MSI software to Linux. In other words, never!!!
  21. Likely because they make use of the undocumented UTF-16 nodes that have been in LabVIEW since about 8.6. And these nodes are likely undocumented because NI is still trying to figure out how to expose that functionality to the LabVIEW programmer without bothering him with the underlying Unicode difficulties, including but certainly not limited to UTF-16 on Windows vs. UTF-32 on anything else (except those platforms like embedded RT targets where UTF support usually is not even present, which is an extra stumbling block for making generic UTF LabVIEW nodes). Of course they could include the IBM ICU library or something along that line, but that is a noticeable extra size for an embedded system. It all depends what you consider "proper". Those nodes will likely make it into one of the next LabVIEW versions. However, supporting Unicode in every place including the user interface (note that LabVIEW already supports proper multibyte encoding there) will likely be an exercise with many pitfalls, resulting in an experience that will not work right in the first few versions, and it might even cause trouble in non-Unicode use cases (which is likely the main reason they haven't really pushed for it yet). Imagine your normal UIs suddenly starting to misbehave because the Unicode support messed something up; and yes, that is a likely scenario, since international character encoding with multibyte and Unicode is such a messy thing.
  22. Actually the proper fix would be to use a pointer-sized (signed or unsigned) integer instead. Then you can forget about the conditional compile node. This CLN datatype has been available since LabVIEW 8.5.
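     On the C side such a parameter is simply an integer wide enough to hold a pointer on either platform, e.g. (hypothetical prototypes, just for illustration):

         #include <stdint.h>

         /* The handle travels through LabVIEW as a pointer-sized integer, so
            the same VI works unchanged in 32-bit and 64-bit LabVIEW. */
         uintptr_t OpenDevice(const char *name);
         int32_t   CloseDevice(uintptr_t handle);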
  23. Actually it is a bit more complicated (or not) than that. On all 16-bit systems int used to be 16 bits and on 32-bit systems it is 32 bits. So far so good. For 64 bits things get a bit messy. int here is still always 32 bits (well, for the majority of systems; some more exotic systems actually use 64-bit ints) as detailed here (Specific C-language data models). The most interesting part is however with longs, where 64-bit Linux uses 64 bits, while Microsoft Windows chose to use 32-bit longs. Linux is more forgiving to code that casts pointers into longs, while Windows is more forgiving to code that assumes sizeof(long) == sizeof(int). Both assumptions of course have no place in modern software, but many programmers can sometimes be a bit lazy.
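     A quick way to see which data model a compiler targets (LP64 on 64-bit Linux, LLP64 on 64-bit Windows):

         #include <stdio.h>

         int main(void)
         {
             /* 64-bit Linux: int 4, long 8, pointer 8; 64-bit Windows: int 4, long 4, pointer 8 */
             printf("int:     %zu bytes\n", sizeof(int));
             printf("long:    %zu bytes\n", sizeof(long));
             printf("pointer: %zu bytes\n", sizeof(void *));
             return 0;
         }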
  24. Well, you can convert C algorithms quite easily into the formula node, as that supports a subset of C. For whole C programs there is simply no way to translate them automatically into a LabVIEW program in a meaningful way, and in fact even translation by humans is mostly an exercise in vain, as the runtime concepts are quite different. And if a human can't do it, how could you come up with an algorithm that does it automatically? I'm not saying that you can't rewrite a C program in LabVIEW, but that is not translation; it is simply reading specs (here, the C program) and writing a LabVIEW program from scratch. C is in no way as formal and strict as more modern design paradigms such as UML etc., and I haven't seen LabVIEW code generators that can translate such design documents readily into LabVIEW VIs. If it is indeed simply the code generation part of the Embedded Toolkit (and I'm almost 100% sure it is), then all I can say is: the code generation works, but it ain't pretty to look at. Personally I don't see much use in simply generating C code. The Embedded Toolkit makes some sense when used with a preconfigured tool chain for a certain target, but just generating C code from LabVIEW code is not much more than a wow effect. Converting a simple VI algorithm into C is quite a bit leaner and meaner when done by hand, and converting complex programs is likely an exercise in vain, as there are too many dependencies on the underlying runtime and OS environment for this to be done in a really generic way.
  25. That is logical. abort() is similar to exit(), which is simply a C runtime function to - yes indeed - abort the current process. exit() does try to do some extra cleanup before returning to the OS, but abort() simply yanks the floor out from under the feet of a process and hopes that the OS will be able to clean up the mess that remains. Your GetLastError function might have threading issues. Unless you configured your Call Library Node to execute the DLL function in the UI thread, LabVIEW will call that function from any of the threads in the current subsystem, and in modern LabVIEW versions that can be up to 8 threads or so. So there might be a problem with storing the error code and trying to retrieve it with another function, as that second function may be called from a different thread. If you store the error in thread local storage, as Windows does with its GetLastError(), you have only a very small chance of retrieving the same error value that was written by the previous function call, and unless you force data flow strictly, chances are completely off. Personally I would add an error handler that takes the current parameters and a LabVIEW error cluster as parameters, stuffs the first into the second and also returns an error code (the same one stuffed into the cluster) which you can return from the function call to LabVIEW. Every DLL call gets an extra parameter that receives the error cluster, and this parameter gets passed to the error handler function whenever it is called (a rough sketch of this follows below). Not knowing the library nor its circumstances I can't say, but if it is truly dual licensed you would be free to choose the license you like for your distribution. The dual licensing here was likely done so that (L)GPL projects could use it, since especially the GPL is considered by some to be almost incompatible with anything but itself. BSD would seem to me so liberal that it should be easily usable by a GPL program, but I believe some GPL critics think that that is not the case.
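     A rough sketch of that error handler idea (the struct layout and names are purely illustrative and assume the error cluster is passed by pointer with its handles adapted to type; MyFunc stands for whatever original library function gets wrapped):

         #include "extcode.h"                    /* LabVIEW cintools */
         #include <string.h>

         #define DLL __declspec(dllexport)       /* export macro, Windows-style */

         typedef struct {                        /* mirrors a LabVIEW error cluster */
             LVBoolean  status;
             int32      code;
             LStrHandle source;
         } ErrorCluster;

         extern int32 MyFunc(int32 someParam);   /* hypothetical original library call */

         static int32 FillErrorCluster(ErrorCluster *err, int32 code, const char *src)
         {
             int32 len = (int32)strlen(src);
             if (NumericArrayResize(uB, 1, (UHandle *)&err->source, len) == noErr)
             {
                 MoveBlock((const void *)src, LStrBuf(*err->source), len);
                 LStrLen(*err->source) = len;
             }
             err->code = code;
             err->status = (LVBoolean)(code != 0);
             return code;                        /* same code is also returned to LabVIEW */
         }

         /* every exported wrapper gets the extra error cluster parameter */
         DLL int32 MyFunc_LV(int32 someParam, ErrorCluster *err)
         {
             int32 result = MyFunc(someParam);
             if (result < 0)
                 return FillErrorCluster(err, result, "MyFunc");
             return result;
         }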