Everything posted by Rolf Kalbermatter

  1. Shaun, in theory you are right. In practice, a LabVIEW DLL contains a C wrapper for each exported function that invokes the corresponding pre-compiled VI inside the DLL. As such there needs to be some runtime support to load and execute these VIs. This usually happens inside the corresponding LabVIEW runtime, which is launched from the wrapper. Some kind of Münchhausen trick, really. However, at least in earlier versions of LabVIEW, if the platform and LabVIEW version of the compiled DLL were the same as those of the calling process, the wrapper invoked the VIs inside the DLL directly in the context of the calling LabVIEW process.
  2. Seems it is again time to clean out the blog spam.
  3. There is no easy answer to this. As with most things the right answer is: it depends! If your LabVIEW DLL was created with a different LabVIEW version than the one you are running your lvlib in, you are safe. The DLL will be executed in the context of the runtime version of LabVIEW that corresponds with the LabVIEW version used to create the DLL. Your LabVIEW lib executes directly in the calling LabVIEW system, so they are as isolated from each other as you can get on the same machine instance. However, if you load the DLL into the same LabVIEW development system version as was used to create it, things get more interesting. In that case LabVIEW loads the VIs inside the DLL into the same LabVIEW system to gain some performance. Loading the DLL into a different LabVIEW runtime requires marshaling of all function parameters across process boundaries, since the runtime system is a different process than your LabVIEW system, which is quite costly. Short-circuiting this saves a lot of overhead. But if the VIs in the DLL are not in the same version as the current LabVIEW version, this cannot be done, as the DLL VIs are normally stored without diagram and can therefore not be recompiled for the current LabVIEW platform. So in this case things get a bit more complicated. I haven't tested so far whether VIs inside DLLs get loaded into a special application context in that case. That would be the best way to guarantee behavior as similar as possible to the DLL being loaded into a separate runtime, but it may also involve special difficulties that I'm not aware of.
  4. This does not sound like any LabPython-specific issue but a simple basic Python problem. Please refer to a Python discussion forum for such questions. They can be of a lot more assistance to you than I could. When creating LabPython about 15 years ago I knew Python much more from the embedding API than anything else, and was just proficient enough in Python itself to write some simple scripts to test LabPython. I haven't used Python in any form or flavor since.
  5. Are you sure NI-IMAQ contains the barcode functions? I thought NI-IMAQ only contains the functions that are directly necessary for getting image data into the computer. The actual processing of images is then done with the NI Vision Development Module. And to heng1991: this software may seem expensive, but once you have exercised your patience trying to get external libraries not to crash while creating a LabVIEW interface for them, you will very likely agree that this price has a reason. Especially since, unless you are very experienced with interfacing to external libraries, you are very likely to create a VI interface that may seem to work but will in fact corrupt data silently in real-world applications.
  6. DSCheckPtr() is generally a bad idea for several reasons. For one, it gives you a false sense of security, since there are situations where this check would have to conclude that the pointer is valid while it still is not valid in the context in which you make the check. Such a function can check a few basic attributes of a pointer, such as whether it is not NULL and is a real pointer allocated in the heap rather than just an address to some arbitrary memory location, but it cannot check whether this pointer was allocated by the original context in which you make the check, or whether it has since been freed and reallocated by someone else. And anything but the trivial NULL pointer check will cost significant performance, as the function has to walk the allocated heap pointers to find whether it exists in there at all. Windows also has such a function, which only works if the memory was allocated through the HeapAlloc() function, but its performance is notorious and its false security too. Use of this function is a clear indication that someone tried to patch up a badly designed library by adding some extra pseudo security. As to atomic operations in the exported C API of LabVIEW, I'm not really aware of any, but I haven't checked in 2012 or 2013 whether there are new exports available that might sound like an atomic cmpxchg(). Even if there were, I find releasing a library that would not support at least 3 versions of LabVIEW not really a good idea. On the other hand, with some preprocessor magic it would not be too difficult to create a source code file that resorts to compiler intrinsics where available (MSVC >= 2005 and GCC >= 4.1.4) and implements the according inline assembly instructions for the others (VxWorks 6.1 and 6.3, and MSVC 6 for Pharlap ETS). I could even provide my partly tested version of a header file for this. And if you want to be safe you should avoid using a U8 as lock.
SwapBlock(), not being atomic as far as I know, has no way to guarantee that another concurrent call to it on an address adjacent to the currently swapped byte would not destroy the just swapped byte, since the CPU generally works on 32-bit addresses. Also avoid the temptation to make any data structure you want to access in such a way packed in memory. Only aligned accesses to memory will generally be safe from being stomped on by another thread trying to access a memory address directly adjacent to this address. If you can use 32-bit locks and assure the 32-bit element is properly aligned in memory, SwapBlock() won't need to be atomic as long as you can guarantee that no concurrent read/modify/write (SwapBlock()) access to the same address will ever happen.
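The preprocessor approach mentioned above could start out something along these lines; a minimal sketch assuming GCC-style __sync intrinsics and MSVC's _InterlockedCompareExchange, with the inline-assembly branches for VxWorks and Pharlap ETS left out (the function name is mine, not a LabVIEW export):

```c
#include <stdint.h>

#if defined(_MSC_VER) && _MSC_VER >= 1400
#include <intrin.h>
#endif

/* Sketch of a portable 32-bit compare-and-swap wrapper. Returns the
   value observed at *addr; the swap took place if that equals oldVal.
   The inline assembly for VxWorks 6.x and MSVC 6/Pharlap is omitted. */
static int32_t CompareAndSwap32(volatile int32_t *addr, int32_t oldVal, int32_t newVal)
{
#if defined(_MSC_VER) && _MSC_VER >= 1400
    return _InterlockedCompareExchange((volatile long *)addr, newVal, oldVal);
#elif defined(__GNUC__)
    return __sync_val_compare_and_swap(addr, oldVal, newVal);
#else
#error "add inline assembly for this target (VxWorks, Pharlap ETS)"
#endif
}
```

Note that the 32-bit lock recommendation above applies here too: the location passed to this wrapper must be naturally aligned.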
  7. Well, the VxWorks-based controllers are a bit of a strange animal in the flock. VxWorks uses a lot of Unix- and POSIX-like functionality but also has quite a few deviations from it. I'm not really sure if the Windows-like file system is part of this at all, or if the drive letter nomenclature is in fact an addition by NI to make them behave more like the Pharlap controllers. Personally I find it strange that they use drive letters at all, as the Unix-style flat file hierarchy makes a lot more sense. But it is how it is, and I'm in fact surprised that the case sensitivity does not apply to the whole filename. But maybe that is a VxWorks kernel configuration item too, that NI disabled for the sake of easier integration with existing Pharlap ETS tools for their Pharlap-based controllers. VxWorks was only used because Pharlap did not support PPC compilation, and at that time x86-based CPUs for embedded applications were rather non-existent, whereas PPC was more or less dominating the entire high-end embedded market from printers to routers and more. The use of PPCs for Mac computers was a nice marketing fact but really didn't amount to any big numbers in comparison to the embedded applications of that CPU.
  8. While I'm in the club of trying to avoid crashing whenever possible, I find a catch-all exception handler that simply catches and throws away exceptions an even less acceptable solution. An exception handler is OK for normal errors where the exception cause itself gives you enough information to do something about it, such as retrying a failed operation. But it is ABSOLUTELY and DEFINITELY unacceptable for exceptions like invalid pointer accesses. If they happen I want to know about them as soon as possible and have as little as possible executed afterwards. As such I find the option in the CLN to just continue after such exceptions highly dangerous. There are many people out there who believe that writing less than perfect external code is just fine: just let LabVIEW catch the exception and happily go on. An invalid pointer access, or any other error caused by it later on (writing beyond a buffer often doesn't cause an immediate error, since the memory location is already used by something else in the app and as such is completely valid as far as the CPU and OS are concerned), needs to stop the program immediately, finito! There is no excuse for trying to continue anyway. Blame whoever wrote that crashing code, but you do not want LabVIEW to eat your hard drive or who knows what else! If you talk about bits in any form of integer, then write access to them is not atomic on any CPU architecture I know of. Even bytes and shorts are highly suspicious, even on the x86 architecture, since the memory transfer traditionally always happened in 32-bit quantities (and nowadays even in 64- or 128-bit quantities). Some reading material I consulted suggests that write access to anything but aligned integers (and on 64-bit architectures, aligned 64-bit integers) is not guaranteed to be atomic on just about any CPU architecture out there.
I can't be sure, and am in fact too lazy to make a long research project out of this, so in all my C code I employ cmpxchg() operations when writing bits and bytes to structure elements that are not integer-aligned, or not guaranteed not to share bytes inside the aligned integer address with other variables (unless of course I can prove positively that nobody else will ever try to write to the same integer in any form, which in C means that the routine writing to that address also has to be at least protected or single-threaded).
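To illustrate what such a byte write via cmpxchg() amounts to: the whole aligned 32-bit word containing the byte goes through a read-modify-write loop, so bytes that other threads write into the same word survive. A sketch assuming GCC __sync intrinsics and a little-endian target; the function name is mine:

```c
#include <stdint.h>

/* Store one byte by atomically exchanging the aligned 32-bit word that
   contains it, so concurrent writes to neighboring bytes are preserved.
   Sketch only: assumes GCC __sync intrinsics and little-endian layout. */
static void AtomicStoreByte(volatile uint8_t *p, uint8_t value)
{
    volatile uint32_t *word = (volatile uint32_t *)((uintptr_t)p & ~(uintptr_t)3);
    int shift = (int)(((uintptr_t)p & 3) * 8);   /* little-endian byte offset */
    uint32_t mask = (uint32_t)0xFF << shift;
    uint32_t oldWord, newWord;
    do {
        oldWord = *word;
        newWord = (oldWord & ~mask) | ((uint32_t)value << shift);
    } while (__sync_val_compare_and_swap(word, oldWord, newWord) != oldWord);
}
```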
  9. Sigh! I mentioned it is not correct for overlapping buffers. And the Microsoft C Runtime source code is copyright protected so you should not post it anywhere!
  10. That is true, but what do you want to say with that? This is in general what MoveBlock() and memmove() are about. Nothing magical at all!

void MoveBlock(const void *src, void *dst, int32 len)
{
    int32 i;
    if (((uintptr_t)src & 0x3) || ((uintptr_t)dst & 0x3) || (len & 0x3)) {
        const char *s = (const char *)src;
        char *d = (char *)dst;
        for (i = 0; i < len; i++)
            *d++ = *s++;
    } else {
        const int32 *s = (const int32 *)src;
        int32 *d = (int32 *)dst;
        for (i = 0; i < len / 4; i++)
            *d++ = *s++;
    }
}

This is not the real MoveBlock() implementation. This implementation would cause "undefined" behavior if the two memory blocks overlap, while the MoveBlock() documentation explicitly states that overlapping memory blocks are allowed. Basically, the real implementation would have to compare the pointers and, depending on the result, start copying either from the beginning or from the end. That does not change anything about the fact that a pointer is just a meaningless collection of numbers if it points to something that is not allocated in memory anymore. Once your CLN returns in your previous example, there is absolutely nothing that would prevent LabVIEW from deallocating the variant, except lazy deallocation, because it determines that the same variant can be reused the next time the VI is executed. But there is no guarantee that LabVIEW will do lazy deallocation, and in real-world scenarios it is very likely that LabVIEW will deallocate that variant sooner or later to reuse the memory for something else.
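For comparison, the overlap-safe behavior described above (compare the pointers, then copy forwards or backwards) could be sketched like this; a hypothetical illustration of the technique, not NI's actual code:

```c
#include <stddef.h>

/* Overlap-safe byte copy, the behavior the MoveBlock() documentation
   promises: when the destination starts inside the source range, copy
   backwards so source bytes are not overwritten before they are read. */
static void MoveBlockSafe(const void *src, void *dst, size_t len)
{
    const char *s = (const char *)src;
    char *d = (char *)dst;
    if (d > s && d < s + len) {
        while (len--)
            d[len] = s[len];      /* copy from the end */
    } else {
        size_t i;
        for (i = 0; i < len; i++)
            d[i] = s[i];          /* copy from the start */
    }
}
```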
  11. That is not going to help much. That pointer is invalid the moment the Call Library Node returns, since the variant got deallocated (or at least marked for deallocation at whatever time LabVIEW feels like). And the third parameter is always an int32, but you need to pass 4 for 32-bit LabVIEW and 8 for 64-bit LabVIEW to it.
  12. In addition to what Shaun said, there are several potential problems in the current OpenG ZIP code with respect to localized character sets. If you use filenames with characters outside the 7-bit ASCII code table, the result will be very platform dependent. Currently the OpenG ZIP library simply takes the names as handled by LabVIEW, which is whatever MBCS the platform uses at that moment. This has many implications. The ZIP standard only supports local encoding or UTF-8, and a flag in the archive entry says which it is. This is currently not handled at all in OpenG ZIP. Even if it was, there are rather nasty issues that are not trivial to work out. For one, if you run the library on a platform that uses UTF-8 encoding by default (modern Linux and MacOSX versions), the pathnames in an archive created on that computer will in fact be UTF-8 (since LabVIEW is using the platform MBCS encoding), but the flag saying so is not set, so it will go wrong when you move that archive to a different platform. On the other hand, on Windows LabVIEW is using the CP_ANSI codepage for all its string encoding, since that is what Windows GUI apps are supposed to use (unless you make it a full Unicode application, which is a beast of burden on its own even for normal GUI apps and an almost impossible thing to move to in a programming environment like LabVIEW if you do not want to throw out compatibility with already created LabVIEW VIs). CP_ANSI is an alias for the codepage set in your control panels depending on your country settings. pkzip (and all other command line ZIP utilities) traditionally use the CP_OEM codepage. This is an alias for another codepage depending on your country settings. It contains mostly the same language-specific characters in the upper half of the codepage as CP_ANSI does, but in a considerably different order.
It traditionally seems to come from the IBM DOS times, and for some reason MS decided for once to go for an official standard for Windows rather than the standard set by IBM. So an archive created on Windows with OpenG ZIP will currently use the CP_ANSI codepage for the language-specific characters and therefore come up with very strange filenames when you look at it in a standard ZIP utility. The solution as I have been working on it in the past months is something along these lines. On all platforms, when adding a file to the archive:
- Detect if a path name uses characters outside the 7-bit ASCII table. If not, just store it as is with the UTF-8 flag cleared.
- If it contains characters outside the 7-bit ASCII range, do the following:
On systems other than Windows and MacOSX:
- Detect if we are on a UTF-8 system; if not, convert the path to UTF-8. In all cases set the UTF-8 flag in the archive entry and store it.
On Windows and MacOSX:
- Detect if we are on UTF-8 (likely not); if so, just set the UTF-8 flag and store the file.
- Otherwise convert from CP_ANSI to CP_OEM and, in case of successful conversion, store the file with this name without the UTF-8 flag.
- In case the conversion fails for some reason, store as UTF-8 anyhow.
When reading, there is not very much we can do other than observing the UTF-8 flag in the archive entry. On non-Windows systems, if the flag differs from the current platform setting, we have a real problem: codepage translation under Unix is basically impossible without pulling in external libraries like ICU. Although their existence is fairly standard nowadays, there exist a lot of differences between Linux distributions, so making OpenG ZIP depend on them is going to be a big problem. On VxWorks it is not even an option without porting such a library too. On Windows we can use MultiByteToWideChar() and vice versa to do the right thing. On MacOSX we have a similar API that "tries" to do mostly the same as the Windows functions, but I'm 100% positive that there will be differences for certain character sets.
There still is a big problem, since the ZIP standard in fact only provides a flag saying whether the names are in UTF-8 or not. If they are not, there is no information anywhere as to what actual codepage they are in. Remember, CP_OEM is simply an alias that maps to a codepage which depends on your language settings. It is a very different codepage for Western European or Eastern European country settings, and even more different for Asian country settings.
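The first step of the scheme above, deciding whether a name can be stored as-is without any codepage handling, is straightforward to sketch; the helper name is hypothetical, not part of the OpenG ZIP API:

```c
#include <stddef.h>

/* Return 1 when every byte of the name is inside the 7-bit ASCII
   range, i.e. the name can be stored as-is with the UTF-8 flag
   cleared; return 0 when codepage conversion or UTF-8 is needed. */
static int IsPure7BitAscii(const char *name)
{
    const unsigned char *p = (const unsigned char *)name;
    for (; *p; p++) {
        if (*p > 0x7F)
            return 0;
    }
    return 1;
}
```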
  13. Well, dynamic registration, unless you forbid a once-registered reader to unregister, makes everything quite a bit more complex. You can then get holes in the index array that would block the writer at some point, or you have to add an intermediate refnum/index translator that translates the static refnum index a reader gets when registering into a correct index into the potentially changing index array. I'm not sure this is worth the hassle, as it may well destroy any time benefits you have achieved with the other ingenious code.
  14. Well, if you say its function is not interesting, I'm more than happy to believe you. But!!! You didn't comment on whether the LvVariant is byte packed, in which case access to the pointers in there would incur quite a performance penalty on at least all non-x86 platforms, or whether the structure uses natural alignment, in which case your calculation formula for the size would in fact be misleading. Note: it seems the structure uses, at least on Windows x86, the standard LabVIEW byte alignment, that is, byte packed. All other platforms, including Windows x64, will likely have natural/default alignment. But your documentation is definitely not completely correct. The LvVariant looks more like:

#pragma pack(1)
typedef struct {
    void *something;   // maybe there is still a lpvtbl somehow
    char flag;         // bool
    void *mData;
    void *mAttr;
    int32 refCount;
    int32 transaction;
    void *mTD;
} LvVariant;
#pragma pack()

This is for Windows x86. For the others I assume the pragma pack() would have to be left out. Also, I checked in LabVIEW 8.6 and 2011 and they both seem to have the same layout, so I think there is some good hope that the layout stays consistent across versions, which still makes this a tricky business to rely upon.
  15. That may be an interesting approach, but there is no guarantee that the generated C code is binary compatible with what LabVIEW uses itself internally. They are entirely different environments, and the CPP-generated code is in fact free to use different and simpler datatypes for managing LabVIEW data than what LabVIEW itself uses. The CPP-generated code only has to provide enough functionality for proper runtime execution, while the LabVIEW environment has additional requirements for editing operations.
  16. That's very nice, but don't hold your breath for it! I'm currently overloaded with more important things.
  17. It probably is, or one of the other methods that start with LvVariant. But without some header file that documents at least the parameter list, this is a painful process to figure out.
  18. Interesting info. The pointer size was already mentioned by Shaun, but I have another question or two. Why the initial 1 byte in the class size? Wouldn't that cause either extra alignment in the structure or otherwise unaligned pointer access, which as far as I know is rather inefficient on non-x86 systems? Also, would the data and type descriptor by any chance be the ILVDataInterface and ILVDatatypeInterface objects documented in the LabVIEW cintools directory? And does the absence of these header files in pre-LV2009 mean that the actual handling of data was done differently before, or was it already implemented in earlier LabVIEW versions but just not documented in any way? I suppose he means that the handles have to be swapped between your copy of the data and an empty variant with the same type descriptor. Otherwise LabVIEW still sees the data on the return of the CLN, and since that wire is not going anywhere it concludes that the variant refcount can be decremented, resulting in a deallocation of the data if the refcount reaches zero, so that your copy of the data pointer is then invalid. I'm still not sure this buys us anything in terms of your MoveBlock() (or SwapBlock()) attempt, since the variant obviously needs to be constructed somewhere. I suppose that if you store the variant pointer anywhere you would at least have to increment its refcount too, and if you store the internal data pointer, you would have to increment its own refcount. That would probably mean calling the AddRef() method on the ILVDataInterface pointer inside the variant, if it is an ILVDataInterface pointer and not just the native data only. And SwapBlock() has the same parameter types as MoveBlock(). And it is not officially documented. What I know is from accidentally stumbling across some assembly statements during debugging.
  19. Well, as far as the LabVIEW code is concerned it is just an analog read, possibly with the use of a FIFO, together with a digital read for the encoder. So you will have to find which cRIO module can do the task, based on your requirements for the voltage or current range of your sensor and the sample speed you need to acquire your data with. Then, based on the module type, look for the LabVIEW example VIs that come with the FPGA toolkit. Study them, try to make small changes to them to understand how they work, and go from there.
  20. Well, the array has to be locked somehow, yes. Even in C you would have to do something like this, unless you can limit the possible accesses significantly in terms of who can write or read a single element. If that can be done, then you could get away with referenced access to the array elements, with maybe a cas() mechanism to make really sure. But that can also have significant performance losses. The x86/64 cmpxchg is fairly optimized and ideal for this, but the PPC has a potential for performance loss, as the syncing is done by storing the address of the protected memory location into a special register. If someone else wants to protect another address, his attempt will overwrite that register; you lose the reservation, will see this after the compare operation, and have to start all over again by reserving the address. The potential to lose this reservation is fairly small, as there are only about two assembly instructions between setting the reservation and checking that it is still valid, but it does exist nevertheless. The advantage of the PPC implementation is that the reservation does not lock out bus traffic at all, unless someone tries to access the reserved address, while the x86 implementation locks out any memory access for the duration of the cmpxchg operation.
  21. Ahh, I think you got that wrong. Ever used the inplace structure node? There you have an inplace replace array element without having to read and write the entire array. PPC doesn't have cas directly, but it has synced loads and stores, and with a little assembly a cas() can be fairly easily implemented, as I have found out in the meantime. As to SwapBlock(), I can't remember the details but believe it is more or less an inplace swapping of the memory buffer contents, but without any locking of any sort. As such it does more than MoveBlock(), which only copies from one buffer to the other, but not fundamentally more. That API comes from the LabVIEW stone age, when concurrent access was not an issue since all the multithreading in LabVIEW was handled through its own cooperative multithreading execution system, so there was in fact no chance that two competing LabVIEW modules could work on the same memory location without the LabVIEW programmer knowing it, if he cared. With preemptive multitasking you can never have this guarantee, as your SwapBlock() call could be preempted anywhere in its execution. One thing about SwapBlock() could be interesting: it operates on the buffers as 4-byte integers if both buffers are 4-byte aligned and the length to operate on is a multiple of 4 bytes.
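Based on that description, SwapBlock()'s behavior could be sketched roughly as follows; this is speculation from observed behavior, not the real, undocumented implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of what SwapBlock() appears to do: exchange the contents of
   two buffers in place, working in 32-bit units when both pointers and
   the length allow it, otherwise byte by byte. No locking of any kind. */
static void SwapBlockSketch(void *a, void *b, size_t len)
{
    if ((((uintptr_t)a | (uintptr_t)b | (uintptr_t)len) & 3) == 0) {
        uint32_t *pa = (uint32_t *)a, *pb = (uint32_t *)b, t;
        for (len /= 4; len--; pa++, pb++) {
            t = *pa; *pa = *pb; *pb = t;
        }
    } else {
        uint8_t *pa = (uint8_t *)a, *pb = (uint8_t *)b, t;
        while (len--) {
            t = *pa; *pa = *pb; *pb = t;
            pa++; pb++;
        }
    }
}
```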
  22. Well, let's forget about the polymorphic data storage aspect for a moment then. That is IMHO a completely separate issue. What you therefore want is a managing infrastructure for the indices and all, for both reader and writer, but in a way that they are not globally locked, only locally protected. What I didn't like in your example (the first I think; I didn't look at the others if there are any) was (besides the use of globals of course) that the implementation of the reader index does absolutely not scale. There is no way to add an additional reader without complete modification of just about everything in there. So I think the first approach would be to have the reader index as an array, and hence why I came up with the reader registration. Now, you are right that globals have some sort of protection, but only for immediate access; you cannot have a protected read/modify/store without additional external synchronization means. The question is, do we need a protected read/modify/store at all? I think not, as there is only really one writer for every variable, and the variables, being integers, have atomic read access guaranteed on all modern platforms. So what about making the reader index an array? I think that should work. And if you separate the index management from the actual buffer data storage somehow, I think the delays caused by a non-reentrant call to the index manager entity (be it an FGV or LVOOP private data) should not cost too much performance. If a protected read/modify/store were required, maybe the inplace structure node might help. It seems to do some sort of locking that makes sure the data cannot be in an intermediate state. If that fails, the only solution I would see in order to avoid explicit semaphores or mutexes would be the use of some external code to access some cmpexch()-like function.
Incidentally, I have been struggling with this type of thing just now for some external code (LuaVIEW, for those wanting to know) where I needed to be able to update internal flags without running into a potential race if some other code tries to update another flag in the same data structure. The solution turned out to be cmpexch() or a similar function, which atomically compares the state of a value with an expected value and only updates the value with the new value if that compare is positive. So to set a bit in a word I could then do something like:

long atomic_fetch_and_or(long *value, long mask)
{
    long old;
    do {
        old = *value;
    } while (!cmpexch(value, old, old | mask));
    return old;
}

In C this is fairly simple, since cmpexch() (under various names) is a standard OS function nowadays (or usually a compiler intrinsic), but there are exceptions in LabVIEW land, such as the older Pharlap-based RT targets and, it seems, the VxWorks-based ones. At least I couldn't find a reliable cmpexch() or similar function so far in the VxWorks headers, and Pharlap ETS before LabVIEW 8.5 or so did not have a kernel32.InterlockedCompareExchange. These are so-called lock-free mechanisms, although personally I find that a little misleading, since there is actually still locking inside cmpexch(): it normally implements a complete bus lock for the duration of the assembly sequence, to prevent state corruption even when other CPU cores might want to access the same address at that moment. There are variations possible on the type of lock held, such as only for write operations or only on read, but they make the whole story even more complex and are rather non-portable between different architectures because of differences in their semantics, so I don't think it makes much sense to bother about them for more general purpose use.
  23. Personally I think DVRs are probably the way to go here. They have reference semantics, so you won't have to copy all the data each time, even though I'm pretty sure the DVR overhead will be a net loss for scalar buffer elements but a net win for more complex datatypes. The only external code solution I could think of would be the use of the ILVDataInterface, but I'm now pretty sure that is exactly what the DVR actually is internally, and I do not see any benefit in moving that into external code, as in both cases you would have to use polymorphic VIs to support multiple datatypes. About the writer needing to know the index of the readers: this would seem most easily solved by having all readers subscribe themselves to the buffer and getting back a refnum (index) into an array of indices where the buffer stores the reader index. Each time a reader wants to retrieve its data, it hands over its refnum, and the buffer retrieves the data for the reader and updates the according index. Personally I would use a functional global for the buffer management including the reader indices, but doing it with LVOOP would allow easy instantiation, so you can have multiple circular buffers in an application without having to resort to making the FGV itself indexable too.
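The subscription idea above can be sketched in C terms (all names hypothetical, and without the serialization that the non-reentrant FGV or LVOOP wrapper would provide around these calls):

```c
#include <stddef.h>

#define MAX_READERS 8

/* Sketch of a circular buffer's bookkeeping: one write index, plus one
   read index per registered reader. */
typedef struct {
    size_t writeIndex;
    size_t readIndex[MAX_READERS];
    int    registered[MAX_READERS];
} CircBufState;

/* A reader subscribes once and gets back its slot number (the "refnum")
   that it hands over on every subsequent read. Returns -1 when full. */
static int RegisterReader(CircBufState *s)
{
    int i;
    for (i = 0; i < MAX_READERS; i++) {
        if (!s->registered[i]) {
            s->registered[i] = 1;
            s->readIndex[i] = s->writeIndex;  /* start at the current position */
            return i;
        }
    }
    return -1;
}
```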
  24. Since the original post dates from 2009 and the poster has since removed his project from his MySpace.
  25. And also only Windows 32-bit, AFAIK. A more and more important distinction. I thought LabVIEW for Windows 64-bit uses 8-byte alignment, which is the default for Visual C. So there might be a documentation CAR in order. One thing to mention though: I believe Flatten and Unflatten go to great lengths to make sure that the resulting flattened byte stream is compatible across platforms. This includes things like using big endian as the default byte order, using 128 bits for extended precision numbers on all platforms (even though most LabVIEW platforms nowadays use the 80-bit extended floating point format of the x86 CPU architecture internally, and a few simply use the double format to make things trivial to port LabVIEW to them) 1), and using a byte alignment of 1 for all elements. And since the binary File Read and Write supposedly use Flatten and Unflatten internally too, they should also be safe to use for all applications and should be fully platform independent. If they are not, then that would really be a bug! The byte alignment as discussed here only comes into play when you pass LabVIEW native data to external code like shared libraries and CINs. Here the data is aligned with the LabVIEW default alignment for that platform. 1) In fact, the only platform ever making use of true 128-bit extended floating point numbers was the Sparc platform. But that was not based on a CPU extended floating point format; it was a Sun software library implementing extended floating point arithmetic. It was as such quite slow, and it died completely when the Solaris version of LabVIEW was discontinued. Nowadays the extended floating point format is really an 80-bit format on most platforms (and on VxWorks platforms it is internally really just a double format).
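The difference between LabVIEW's byte alignment of 1 and a compiler's natural alignment can be demonstrated with two otherwise identical structures; a small sketch (the field names are made up, and the natural size depends on the compiler and platform):

```c
#include <stdint.h>

/* Same fields twice: once byte-packed, as LabVIEW lays out data on
   32-bit Windows, and once with the compiler's natural alignment. */
#pragma pack(push, 1)
typedef struct {
    int8_t flag;
    double value;    /* starts at offset 1 when packed */
} PackedRec;
#pragma pack(pop)

typedef struct {
    int8_t flag;
    double value;    /* padded out to the double's natural alignment */
} NaturalRec;
```

sizeof(PackedRec) is always 9, while sizeof(NaturalRec) grows with the padding the compiler inserts, which is exactly why data passed to external code must be read with the alignment LabVIEW used to write it.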