Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. And the OpenG ZLIB library supports extracting files directly into memory strings rather than to a disk file!
  2. Simple: Disable the "Allow user to Close" setting in the VI settings dialog. More Involved: Add the VI->Panel Close? filter event to your event structure and pass TRUE to discard, but terminate your event handling loop anyhow to return to the caller.
  3. A full tank of gas will not easily explode, although you sure can end up roasted in a big fire. It's the almost empty tank of gas that will explode in a very nasty way!
  4. You can't do that! The error cluster of the CLFN is for reporting runtime errors from LabVIEW in trying to load the DLL and execute the function, including possible exceptions when you have configured the CLFN to use high error reporting. If you want to pass error information from your function to the LabVIEW diagram, you have to do it through a function parameter or function return value. I have done both.

     Method 1: When an error code is returned by all functions, I have a common error handler VI that is placed after each CLFN and converts this error code into a LabVIEW error cluster.

     Method 2: Or you can pass the error cluster as an extra parameter:

     #pragma pack(1)
     typedef struct {
         LVBoolean status;
         int32 code;
         LStrHandle message;
     } ErrorCluster;
     #pragma pack()

     static MgErr FillErrorCluster(MgErr err, char *message, ErrorCluster *error)
     {
         if (err)
         {
             int32 len = StrLen(message);
             error->status = LVBooleanTrue;
             error->code = err;
             err = NumericArrayResize(uB, 1, (UHandle*)&(error->message), len);
             if (!err)
             {
                 MoveBlock(message, LStrBuf(*error->message), len);
                 LStrLen(*error->message) = len;
             }
         }
         return err;
     }

     MgErr MyFunction(......, ErrorCluster *error)
     {
         MgErr err = error->code;
         if (!error->status)
         {
             err = CallAPIFunction(.....);
             FillErrorCluster(err, "Error here!!!", error);
         }
         return err;
     }

     I then use this error cluster to feed through the VI, not the error cluster from the CLFN itself. The CLFN error is useful during development and debugging to see possible errors and understand why it wouldn't work, but for most use cases, once the CLFN call has been tested and debugged, the node should not return any runtime error anymore. The function call itself however might, otherwise the whole exercise of passing the error cluster as a parameter is quite senseless.
  5. The underlying getaddrinfo(), at least on Windows, will return all local network adapter addresses, and LabVIEW will pick the first one from that list to use. So it binds to the actual default IP address of your machine. When you disconnect the cable, this IP address (and adapter) becomes invalid and your connection is working through an unconnected socket, which of course gives errors. When the cable is not connected at the time the address is resolved and the socket is bound, the first adapter returned will be a different one; it could be the loopback interface or your WiFi interface. In that case, connecting and disconnecting the cable does not have any influence on this connection. Using 127.0.0.1 or localhost explicitly will bind to the loopback adapter, and that one is valid for as long as Winsock is working, i.e. until you shut down your computer.
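This binding behavior is easy to reproduce outside LabVIEW. Below is a minimal POSIX sockets sketch (a Windows build would additionally need WSAStartup and linking against Ws2_32) showing that resolving "127.0.0.1" explicitly always yields the loopback address as the first getaddrinfo() result, which is why a socket bound to it survives a cable unplug:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Resolve an address the way LabVIEW's TCP primitives do: take the FIRST
   result getaddrinfo() returns and bind to that. Passing "127.0.0.1"
   explicitly guarantees the loopback adapter, which stays valid no matter
   what happens to the physical network adapters. */
int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("127.0.0.1", NULL, &hints, &res) != 0)
        return 1;

    /* The first entry in the list is what gets bound. */
    struct sockaddr_in *addr = (struct sockaddr_in *)res->ai_addr;
    printf("first address: %s\n", inet_ntoa(addr->sin_addr));
    freeaddrinfo(res);
    return 0;
}
```

Resolving an empty or machine hostname instead returns whatever adapter happens to be first, which is the fragile case described above.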
  6. As always with new versions, during the NI Week conference in the first or second week of August. Expect to be able to download it during or after NI Week, and to see MSP shipments of disks a few weeks later. I didn't install 2010 and waited for SP1, but can't say that it seems slower or less stable than 2009 or 8.6, except that the installation took almost forever (like 12 hours and more for the Developer Suite installation, another evening/night for the device drivers, and then another evening/night for the SP1 update). I attribute it largely to the three-year-old Windows installation that holds, among other things, all versions of LabVIEW since 5.1.
  7. This is a hack, since different language versions of Windows will probably call this differently. You Americans need to realize that the world doesn't consist only of English-speaking people. And of all people I would have thought you, Michael, should know. A much better way would be something along these lines. Tested only on 32-bit Windows XP for the moment, but in theory it should work right on all versions of Windows XP and newer. LV2010 WINAPI Is 64 Bit OS.vi
  8. On my computer it took more like 8 hours for the LabVIEW 2010 installation (full Developer Suite installation with most toolkits included), another 6 hours for the device driver installation, then again 6 hours for the SP1 installation, and after that another 4 hours or so for the LabVIEW Embedded for ARM evaluation. This is on a "very old" dual-core 2.2 GHz notebook with Win XP SP3, but it has just about every LabVIEW version and accompanying tools installed since LabVIEW 6.0, plus quite a bit of other software, so it may be that MSI database overload is a large part of the problem. My new upcoming computer should be a bit faster, and I plan to use separate VM installations for the different older LabVIEW versions, and depending on the speed maybe for all of them. On the current machine with VMware, the performance is quite bad when running any LabVIEW installation inside a VM.
  9. Why bother now with it? When I started with it, it was for LabVIEW 5.1 or so. No option there. Then we wanted to keep it workable in 6.0 and later 6.1 at the earliest. No Conditional Disable structure, and even in the versions where it is available it has some issues, breaking a VI if something in the disabled cases can't be compiled. And the Conditional Disable structure wouldn't take care of everything anyhow. First, you don't want to write different code in the VI for each platform. It's much easier to keep that in one C file than in many VIs. And since there are usually always some things that are easier to translate into LabVIEW-friendly datatypes in C than by trying some pointer voodoo in the diagram, that C file is already there anyhow. Second, there are platform-specific distribution issues anyhow, such as the need to archive the Mac OS shared library resource, since you otherwise lose the resource fork when building the package on a non-Mac computer, and that makes the shared library useless (it loses most of the information about how to load the library). Also, why install shared libraries into a system where they are useless? Just install whatever is necessary and leave out the rest. I'm not so fond of Windows 7 because they hid lots of the more advanced settings quite effectively. It will take some time to find them.
  10. Even in LabVIEW 2010 with private properties enabled there appears to exist no such possibility.
  11. Well, the DLL has to be the correct one for the actual LabVIEW platform, of course. But since OpenG ZLIB is distributed as an OpenG package, the package installer can make sure that the correct DLL is installed depending on the current LabVIEW version and platform. But what I want to avoid is any platform-specific settings in the VI interfaces to the DLL. That would make distribution and maintenance of the library rather more complicated. I don't have a separate wrapper DLL but have combined all the code (zlib, minizip, and wrapper code) into one library. This library is compiled into whatever platform shared library format is required, including Win32 (Win64 hopefully soon), Mac OS X, Linux, and VxWorks 6.1 and 6.3. All of them are included in the ogp, with the Mac OS X shared library being zipped up first to avoid losing the resource fork of the files, and then OGPM or VIPM takes care to install the one that is required for the LabVIEW version the package is installed into (and unzips the library through a custom post-install step in the package for the Mac OS X platform). All the VIs and other help files are supposed to be platform-independent and to stay that way if at all possible. And the wrapper code is where I have spent some time to make that independence happen. And the delivery takes a little longer since I went for a Dell Latitude machine. Also there are company-internal delivery paths that add some time to this too.
  12. It's not "Other Thread 1" but really the "Other 1" execution system. There is a difference in that, since LabVIEW 7.0 or so, each execution system has by default more than one thread assigned to it. If your code is truly thread-safe and doesn't use TLS then this is no problem at all, but it means that even though all your VIs execute in the LabVIEW execution system "Other 1", they may not always run in the context of the same thread but in any one of the 4 or 8 threads assigned to this LabVIEW execution system.
  13. No no. I fixed the mismatch, of course. Without the fix I get error 1097 from the CLN, which is logical since there is an exception that gets caught by the CLN wrapper in LabVIEW. Without the fix it won't even work correctly. With the fix I get the weird look, although maybe it's a Win XP quirk or something.
  14. Something must be bad with this VI, as it crashes on my system. Win XP SP3 fully updated, LabVIEW 2010. The third parameter of SetWindowLongA is a LONG, and that is a 32-bit value, not a signed 64-bit value. And there is a strange line under the title bar where the background shines through.
  15. The zlib library is most likely not a problem. I have used the latest source code too. It's either an oversight in the Call Library Node configuration since I attempted to make the wrapper functions so that they will work in 32 bit and 64 bit without modifications to the VIs or something in the wrapper code that goes wrong. I'll take a look at it when I have installed the new machine.
  16. Well, I'm soon going to get a new machine, and it will most likely come with Windows 7 installed (not really happy about this, but I probably have to bite the bullet at some point). One advantage will be that it is going to be 64-bit, and therefore I can do some debugging of my own. My first dry exercise of just compiling a 64-bit DLL did crash on Jim's computer, so there must be something still wrong with the DLL.
  17. I have to echo that. Got it myself too some years ago and at a slightly higher price if I remember correctly, but it has been worth the money for sure. Comprehensive library of cryptography functions all implemented in pure G. And the implementation looks clean and works well as far as I could see. No rocket science as I had written my own MD5 and SHA256 routines at that time but writing even one of those functions yourself costs you way more in time and money than this whole package.
  18. Don't worry, your language is quite fine. DLL functions are only executed in the single-threaded UI execution system if you set them like that in the Call Library configuration. Otherwise they are called in whatever thread is currently available in the execution system the current VI is executing in. But unfortunately the "Call in single threaded UI context" setting is not so much about avoiding race conditions in the call sequence itself. You can already avoid those by using proper data flow, making sure functions execute sequentially instead of in parallel. The issue is much more complicated than that. A DLL could, for instance, use thread-local storage to store state between function calls. This is a nice way to avoid passing context values between function calls, and it works perfectly in C, since you typically call a DLL all from the same thread. In LabVIEW this fails miserably, since each execution system has a number of associated threads and LabVIEW will pick whichever one is free to call your function in. So not only can different functions in your DLL get called from different (random) threads, but so can the same function when called multiple times. So it is not as easy when you deal with DLL function calls. Your DLL may be non-thread-safe because it uses global resources such as variables or hardware, but if you know about that and take care to avoid race conditions through semaphores or simply proper dataflow, there is no real problem in executing the DLL functions from execution systems other than the LabVIEW UI one. Of course this is not an option for a library you are going to distribute, simply because it is almost impossible to describe the necessary limitations to all but the most advanced of your potential users. On the other hand, your DLL may be so-called thread-safe but use thread-local storage, and that makes it almost impossible to call your DLL functions safely from execution systems other than the UI one.
And there are various other issues that complicate the whole topic. For your case it means you REALLY need to understand the multithreading requirements of the DLL you use. The UI execution system is the safe one because it avoids all these potential snake pits, but at the cost of serializing any and all calls done in it, therefore quickly causing performance issues. LabVIEW's multithreaded calling of DLL functions, on the other hand, has quite a few potential gotchas but gives you a way of avoiding serialization of functions.
  19. Another case of someone not seeing the forest for all the trees. It has been part of the LabVIEW online documentation for quite some time already, based on that document too: Help->LabVIEW Help...->Fundamentals->How LabVIEW Stores Data. Opening the help and searching for "flatten" would have given you this in less time than it took to write your post.
  20. LabVIEW scripting was an ad-hoc feature until 2010, meaning it was developed and implemented as people inside NI came up with use cases for tools, either internal or for inclusion with LabVIEW. As such, its support of methods and object types is quite scattered. Adding method X to object Z didn't mean that this method was automatically added to all other objects. And adding a new object type such as - let's call it for now - the binary custom control didn't mean that all the scripting methods from similar objects were implemented, and depending on where it was placed in the object hierarchy it only inherited limited scripting support from more generic object types. With scripting now being an official part of LabVIEW, this will probably slowly improve, but the playing field created so far is way too large to cover in one or two LabVIEW upgrades alone. If it doesn't work in 8.5 then that is most likely just the way it is, and there is no way around that but to upgrade (or not use that feature). Once you start working with scripting more, you will quickly notice that you run into methods and properties that are there but either do nothing or return an error such as "Not implemented" until you upgrade to a newer version. And sometimes it simply crashes until you upgrade. Not much we can do. If LabVIEW were open source we could take those fixes and backport them to earlier versions, but I will probably be retired by the time LabVIEW goes open source, if ever.
  21. Have you read the actual document that describes the flattened format of LabVIEW data? For the fundamental datatypes like scalars and structs it can't get much more standard than the default C data format. The only LabVIEW specifics are the prepended string and array sizes, the structure alignment of 1 byte, and the default big-endian byte order. It only gets LabVIEW-specific when you talk about the aforementioned array sizes that get prepended, complex numbers and the extended-precision datatype, and other LabVIEW-specific datatypes such as timestamps, refnums, etc. As to open source, there exists an implementation, although not in C but in LabVIEW. Check out the OpenG lvdata Toolkit. Feel free to translate that to a C library or any other language of your choice.
  22. I'm sure you are not working with Windows anymore, since Windows changes its way of working and its data formats with every new version. Not sure you will find another solution without at least some of that problem, though. As to flattened data format changes, let's see: one incompatible change, at version 4.0, since LabVIEW was introduced as a multiplatform product in 1992 (version 4.0 was around 1995). Sounds at least to me like a VERY stable data format.
  23. Looks almost like an implicit (and definitely not legal) I8 cast is done somewhere. This is something NI really will want to investigate, since such errors are a terror for any LabVIEW user.
  24. Weird reasoning. The LabVIEW variant is not the same as an ActiveX variant. LabVIEW does wrap ActiveX variants in its own variant to make them compatible diagram-wise, but the actual internal implementation of LabVIEW variants is NOT an OLE/ActiveX variant. And the same applies to the flattened format, which is just as proprietary as the other flatten formats, although fairly well documented, except for the variant. NI usually does a good job of maintaining compatibility with documented behavior but reserves the right to change any undocumented detail at any time. In the case of the flatten format, they changed the typedef description internally with LabVIEW 8.0 and in the meantime even documented that new format, but maintained a compatibility option to return the old flattened typedef. The actual flattened data format stayed the same, except of course it was extended to support new datatypes (I64/U64, timestamps, etc.). The only real change in the flattened data itself was in LabVIEW 4, when they changed the Boolean to be an 8-bit value instead of a 16-bit value (and Boolean arrays changed to be arrays of 8-bit integers, whereas before they were packed). Other changes in the flatten data format were in various versions in how refnums got flattened, but NI does not document most refnum internals, and therefore their internal implementation is private and cannot be relied upon. But if you know where refnums are, you can usually skip them in the datastream without version dependency (almost). And claiming ActiveX variants are a standard is also a bit far-reaching. It's a Windows-only implementation, and many other platforms don't even have such a beast.
  25. I left it open on purpose. It can be quite a decision