Everything posted by Rolf Kalbermatter
-
In addition to what Shaun said, there are several potential problems in the current OpenG ZIP code with respect to localized character sets. If you use filenames containing characters outside the 7-bit ASCII table, the result will be very platform dependent. Currently the OpenG ZIP library simply takes the names as handed over by LabVIEW, which is whatever MBCS the platform uses at that moment. This has many implications. The ZIP standard only supports local encoding or UTF-8, and a flag in the archive entry says which it is. This is currently not handled at all in OpenG ZIP. Even if it were, there are rather nasty issues that are not trivial to work out. For one, if you run the library on a platform that uses UTF-8 encoding by default (modern Linux and MacOSX versions), the pathnames in an archive created on that computer will in fact be UTF-8 (since LabVIEW uses the platform MBCS encoding), but the flag saying so is not set, so things will go wrong when you move that archive to a different platform. On Windows, on the other hand, LabVIEW uses the CP_ANSI codepage for all its string encoding, since that is what Windows GUI apps are supposed to use (unless you make it a full Unicode application, which is a beast of burden on its own even for normal GUI apps, and an almost impossible thing to move to in a programming environment like LabVIEW if you do not want to throw out compatibility with already created LabVIEW VIs). CP_ANSI is an alias for the codepage set in your control panels depending on your country settings. pkzip (and all other command line ZIP utilities) traditionally uses the CP_OEM codepage, which is an alias for another codepage depending on your country settings. It contains mostly the same language-specific characters in the upper half of the codepage as CP_ANSI does, but in a considerably different order. It traditionally stems from the IBM DOS times, and for once MS decided to go for an official standard for Windows rather than the de-facto standard set by IBM. So an archive created on Windows with OpenG ZIP will currently use the CP_ANSI codepage for the language-specific characters and therefore come up with very strange filenames when you look at it in a standard ZIP utility.

The solution I have been working on in the past months is something along these lines when adding a file to the archive:

On all platforms:
- Detect if a path name uses characters outside the 7-bit ASCII table. If not, just store it as-is with the UTF-8 flag cleared.
- If it contains characters outside the 7-bit ASCII range, do the following:

On systems other than Windows and MacOSX:
- Detect if we are on a UTF-8 system; if not, convert the path to UTF-8. In either case, set the UTF-8 flag in the archive entry and store the entry.

On Windows and MacOSX:
- Detect if we are on UTF-8 (likely not); if so, just set the UTF-8 flag and store the file.
- Otherwise convert from CP_ANSI to CP_OEM and, in case of successful conversion, store the file with this name without the UTF-8 flag.
- If the conversion fails for some reason, store as UTF-8 anyhow.

When reading, there is not very much we can do other than observing the UTF-8 flag in the archive entry. On non-Windows systems, if the flag differs from the current platform setting, we have a real problem: codepage translation under Unix is basically impossible without pulling in external libraries like ICU. Although their existence is fairly standard nowadays, there are a lot of differences between Linux distributions, so making OpenG ZIP depend on them is going to be a big problem.
On VxWorks it is not even an option without porting such a library too. On Windows we can use MultiByteToWideChar() and WideCharToMultiByte() to do the right thing. On MacOSX there is a similar API that "tries" to do mostly the same as the Windows functions, but I'm 100% positive that there will be differences for certain character sets. There still is a big problem, since the ZIP standard really only provides a flag saying whether the names are in UTF-8 or not. If they are not, there is no information anywhere as to what actual codepage they are in. Remember, CP_OEM is simply an alias that maps to a codepage depending on your language settings. It is a very different codepage for Western European versus Eastern European country settings, and more different still for Asian country settings.
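To make the Windows branch of the scheme above concrete, here is a minimal sketch of the CP_ANSI to CP_OEM conversion using only documented Win32 APIs (the function name and the fallback policy are mine, not taken from the OpenG ZIP sources; error handling is reduced to the bare minimum):

    #include <windows.h>

    /* Convert an ANSI-codepage filename to the OEM codepage for storage in
       a ZIP entry. Returns TRUE on success; on failure the caller should
       fall back to storing the name as UTF-8 with the UTF-8 flag set. */
    BOOL AnsiToOemFileName(const char *ansi, char *oem, int oemLen)
    {
        WCHAR wide[MAX_PATH];
        BOOL usedDefault = FALSE;
        /* MB_ERR_INVALID_CHARS makes the conversion fail instead of
           silently substituting characters. */
        int n = MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS,
                                    ansi, -1, wide, MAX_PATH);
        if (n == 0)
            return FALSE;
        n = WideCharToMultiByte(CP_OEMCP, 0, wide, -1, oem, oemLen,
                                NULL, &usedDefault);
        /* usedDefault signals a character with no OEM equivalent. */
        return (n != 0) && !usedDefault;
    }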
-
Well, dynamic registration, unless you forbid a once-registered reader to unregister, makes everything quite a bit more complex. You can then get holes in the index array that would block the writer at some point, or you have to add an intermediate refnum/index translator that maps the static refnum a reader gets when registering onto the correct position in the potentially changing index array. I'm not sure this is worth the hassle, as it may well destroy any time benefits you have achieved with the other ingenious code.
-
Well, if you say its function is not interesting, I'm more than happy to believe you. But!!! You didn't comment on whether the LvVariant is byte packed, in which case access to the pointers in there would incur quite a performance penalty on at least all non-x86 platforms, or whether the structure uses natural alignment, in which case your size calculation formula would in fact be misleading.

Note: It seems the structure uses, at least on Windows x86, the standard LabVIEW byte alignment, that is, byte packed. All other platforms, including Windows x64, will likely have natural/default alignment. But your documentation is definitely not completely correct. The LvVariant looks more like:

    #pragma pack(1)
    typedef struct {
        void *something;   // maybe there is still a lpvtbl somehow
        char flag;         // bool
        void *mData;
        void *mAttr;
        int32 refCount;
        int32 transaction;
        void *mTD;
    } LvVariant;
    #pragma pack()

This is for Windows x86. For the others I assume the pragma pack() would have to be left out. Also, I checked in LabVIEW 8.6 and 2011 and they both seem to have the same layout, so I think there is some good hope that the layout stays consistent across versions, which still makes this a tricky business to rely upon.
-
That may be an interesting approach, but there is no guarantee that the generated C code is binary compatible with what LabVIEW uses internally. They are entirely different environments, and the CPP-generated code is in fact free to use different and simpler datatypes for managing LabVIEW data than what LabVIEW itself uses. The CPP-generated code only has to provide enough functionality for proper runtime execution, while the LabVIEW environment has additional requirements for editing operations.
-
That's very nice, but don't hold your breath for it! I'm currently overloaded with more important things.
-
It probably is, or one of the other methods that start with LvVariant... But without some header file that documents at least the parameter lists, this is a painful process to figure out.
-
Interesting info. The pointer size was already mentioned by Shaun, but I have another question or two. Why the initial 1 byte in the class size? Wouldn't that either cause extra alignment padding in the structure or otherwise unaligned pointer access, which as far as I know is rather inefficient on non-x86 systems? Also, would the data and type descriptor by any chance be the ILVDataInterface and ILVDatatypeInterface objects documented in the LabVIEW cintools directory? And does the absence of these header files in pre-LV2009 mean that the actual handling of data was done differently before, or was it already implemented in earlier LabVIEW versions but just not documented in any way?

I suppose he means that the handles have to be swapped between your copy of the data and an empty variant with the same type descriptor. Otherwise LabVIEW still sees the data on the return of the CLN, and since that wire is not going anywhere, it concludes that the variant refcount can be decremented, resulting in a deallocation of the data if the refcount reaches zero, so that your copy of the data pointer is then invalid. I'm still not sure this buys us anything in terms of your MoveBlock() (or SwapBlock()) attempt, since the variant obviously needs to be constructed somewhere. I suppose that if you store the variant pointer anywhere, you would at least have to increment its refcount too, and if you store the internal data pointer, you would have to increment its refcount as well. That would probably mean calling the AddRef() method on the ILVDataInterface pointer inside the variant, if it is an ILVDataInterface pointer and not just the native data only. And SwapBlock() has the same parameter types as MoveBlock(). It is not officially documented; what I know is from accidentally stumbling across some assembly statements during debugging.
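For reference, MoveBlock() is declared in extcode.h; if SwapBlock() indeed shares its parameter list as observed above, its prototype would look roughly like this (the SwapBlock line is an inference, not a documented declaration):

    void MoveBlock(const void *ps, void *pd, size_t size);  /* documented in extcode.h */
    void SwapBlock(void *ps, void *pd, size_t size);        /* undocumented, inferred  */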
-
Engine In-cylinder Pressure w/ cRio
Rolf Kalbermatter replied to mcniko311's topic in LabVIEW General
Well, as far as the LabVIEW code is concerned, it is just an analog read, possibly with the use of a FIFO, together with a digital read for the encoder. So you will have to find out which cRIO module can do the task, based on your requirements for the voltage or current range of your sensor and the sample speed you need. Then, based on the module type, you look for the LabVIEW example VIs that come with the FPGA toolkit. Study them, try to make small changes to them to understand how they work, and go from there.
-
Well, the array has to be locked somehow, yes. Even in C you would have to do something like this, unless you can limit the possible accesses significantly in terms of who can write or read a single element. If that can be done, you could get away with referenced access to the array elements, with maybe a cas() mechanism to make really sure. But that can also carry significant performance losses. The x86/64 cmpxchg is fairly optimized and ideal for this, but the PPC has a potential for performance loss, as the syncing is done by storing the address of the protected memory location into a special register. If someone else wants to protect another address, his attempt will overwrite that register; you lose the reservation, see this after the compare operation, and have to start all over again by reserving the address. The potential to lose this reservation is fairly small, as there are only about two assembly instructions between setting the reservation and checking that it is still valid, but it does exist nevertheless. The advantage of the PPC implementation is that the reservation does not lock out bus traffic at all unless someone tries to access the reserved address, while the x86 implementation locks out any memory access for the duration of the cmpxchg operation.
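As an illustration of the reservation scheme described above, here is a minimal cas() sketch for 32-bit PPC using load-reserved/store-conditional (GCC inline assembly; a hedged sketch, not code from any shipping library):

    /* Atomically: if (*p == oldval) { *p = newval; return 1; } else return 0; */
    static int cas32(volatile int *p, int oldval, int newval)
    {
        int prev;
        __asm__ __volatile__(
            "1: lwarx   %0,0,%1  \n"   /* load value and set reservation */
            "   cmpw    %0,%2    \n"   /* compare with expected value    */
            "   bne-    2f       \n"   /* mismatch: fail                 */
            "   stwcx.  %3,0,%1  \n"   /* store only if reservation held */
            "   bne-    1b       \n"   /* reservation lost: retry        */
            "2:                  \n"
            : "=&r"(prev)
            : "r"(p), "r"(oldval), "r"(newval)
            : "cc", "memory");
        return prev == oldval;
    }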
-
Ahh, I think you got that wrong. Ever used the inplace structure node? There you have an in-place replace array element without having to read and write the entire array. PPC doesn't have cas directly, but it has synced loads and stores, and with a little assembly a cas() can be fairly easily implemented, as I have found out in the meantime. As to SwapBlock(), I can't remember the details but believe it is more or less an in-place swapping of the memory buffer contents, without any locking of any sort. As such it does more than MoveBlock(), which only copies from one buffer to the other, but not fundamentally more. That API comes from the LabVIEW stone age, when concurrent access was not an issue since all the multithreading in LabVIEW was handled through its own cooperative multithreading execution system, so there was in fact no chance that two competing LabVIEW modules could work on the same memory location without the LabVIEW programmer knowing it, if he cared. With preemptive multitasking you never have this guarantee, as your SwapBlock() call could be preempted anywhere in its execution. One thing about SwapBlock() could be interesting: it operates on the buffers as 4-byte integers if both buffers are 4-byte aligned and the length is a multiple of 4 bytes.
-
Well, let's forget about the polymorphic data storage aspect for a moment then. That is IMHO a completely separate issue. What you therefore want is a managing infrastructure for the indices, for both reader and writer, in a way that they are not globally locked but only locally protected. What I didn't like in your example (the first, I think; I didn't look at the others if there are any) was (besides the use of globals, of course) that the implementation of the reader index does absolutely not scale. There is no way to add an additional reader without completely modifying just about everything in there. So I think the first approach would be to have the reader index as an array, and hence why I came up with the reader registration. Now you are right that globals have some sort of protection, but only for immediate access; you cannot have a protected read/modify/store without additional external synchronization means. The question is, do we need a protected read/modify/store at all? I think not, as there is only really one writer for every variable, and the variables, being integers, have atomic read access guaranteed on all modern platforms.

So what about making the reader index an array? I think that should work. And if you separate the index management from the actual buffer data storage somehow, I think the delays caused by a non-reentrant call to the index manager entity (be it an FGV or LVOOP private data) should not cost too much performance. If a protected read/modify/store were required, maybe the inplace structure node might help. It seems to do some sort of locking that makes sure the data cannot be in an intermediate state. If that fails, the only solution I would see, in order to avoid explicit semaphores or mutexes, would be the use of some external code to access a cmpexch()-like function.

Incidentally, I have been struggling with this type of thing just now for some external code (LuaVIEW, for those wanting to know) where I needed to be able to update internal flags without running into a potential race if some other code tries to update another flag in the same data structure. The solution turned out to be cmpexch() or a similar function, which atomically compares a value with an expected value and only updates it with the new value if that compare is positive. So to set a bit in a word I can do something like:

    long atomic_fetch_and_or(long *value, long mask)
    {
        long old;
        do {
            old = *value;
        } while (!cmpexch(value, old, old | mask));
        return old;
    }

In C this is fairly simple, since cmpexch() (under various names) is a standard OS function nowadays (or usually a compiler intrinsic), but there are exceptions in LabVIEW land, such as the older Pharlap-based RT targets and, it seems, the VxWorks-based ones too. At least I couldn't find a reliable cmpexch() or similar function so far in the VxWorks headers, and Pharlap ETS before LabVIEW 8.5 or so did not have kernel32.InterlockedCompareExchange.

These are so-called lock-free mechanisms, although personally I find that term a little misleading, since there is actually still locking inside cmpexch(): it normally implements a complete bus lock for the duration of the assembly sequence, to prevent state corruption even when other CPU cores want to access the same address at that moment. There are variations possible on the type of lock held, such as only for write operations or only on read, but they make the whole story even more complex and are rather unportable between architectures because of differences in their semantics, so I don't think it makes much sense to bother with them for more general-purpose use.
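For the platforms that do provide it, the cmpexch() used above maps directly onto an OS call or compiler intrinsic; a minimal wrapper might look like this (the name cmpexch is ours, the underlying APIs are the real documented ones):

    #ifdef _WIN32
    #include <windows.h>
    /* Returns nonzero if *value was oldval and has been replaced by newval. */
    static int cmpexch(long volatile *value, long oldval, long newval)
    {
        return InterlockedCompareExchange(value, newval, oldval) == oldval;
    }
    #else
    /* GCC builtin, available on most modern Unix toolchains. */
    static int cmpexch(long volatile *value, long oldval, long newval)
    {
        return __sync_bool_compare_and_swap(value, oldval, newval);
    }
    #endif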
-
Personally I think DVRs are probably the way to go here. They have reference semantics, so you won't have to copy all the data each time, even though I'm pretty sure the DVR overhead will be a net loss for scalar buffer elements, while a win for more complex datatypes. The only external-code solution I could think of would be the use of the ILVDataInterface, but I'm now pretty sure that is exactly what the DVR actually is internally about, and I do not see any benefit in moving that into external code, as in both cases you would have to use polymorphic VIs to support multiple datatypes.

About the writer needing to know the index of the readers: this would seem most easily solved by having all readers subscribe themselves to the buffer and getting back a refnum (index) into an array of indices where the buffer stores the reader index. Each time a reader wants to retrieve its data, it hands over its refnum and the buffer retrieves the data for that reader and updates the corresponding index. Personally I would use a functional global for the buffer management including the reader indices, but doing it with LVOOP would allow easy instantiation, so you could have multiple circular buffers in an application without having to make the FGV itself indexable too.
-
Engine In-cylinder Pressure w/ cRio
Rolf Kalbermatter replied to mcniko311's topic in LabVIEW General
Since the original post dates from 2009 and the poster has since removed his project from his MySpace.
-
And also only Windows 32-bit, AFAIK; a more and more important distinction. I thought LabVIEW for Windows 64-bit uses 8-byte alignment, which is the default for Visual C. So there might be a documentation CAR in order.

One thing to mention though: I believe Flatten and Unflatten go to great lengths to make sure that the resulting flattened byte stream is compatible across platforms. This includes things like using big-endian as the default byte order, using 128 bits for extended precision numbers on all platforms (even though most LabVIEW platforms nowadays use the 80-bit extended floating point format of the x86 CPU architecture internally, and a few simply use the double format to make LabVIEW trivial to port) 1), and using a byte alignment of 1 for all elements. And since the binary File Read and Write supposedly use Flatten and Unflatten internally too, they should also be safe to use for all applications and fully platform independent. If they are not, that would really be a bug!

The byte alignment as discussed here only comes into play when you pass LabVIEW native data to external code like shared libraries and CINs. Here the data is aligned with the LabVIEW default alignment for that platform.

1) In fact the only platform ever making use of true 128-bit extended floating point numbers was the Sparc platform. But that was not based on a CPU extended floating point format but on a Sun software library implementing extended floating point arithmetic. It was as such quite slow, and it died completely when the Solaris version of LabVIEW was discontinued. Nowadays the extended floating point format is really an 80-bit format on most platforms (and on VxWorks platforms it is internally really just the double format).
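To illustrate the difference between the flattened byte stream and the in-memory layout discussed above, here is a small sketch (the struct names are mine; the sizes assume a typical 64-bit compiler):

    #include <stdint.h>

    /* Layout as in a flattened stream: byte alignment of 1. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t flag;
        double  value;
    } FlatRecord;              /* sizeof(FlatRecord) == 9 */
    #pragma pack(pop)

    /* Same fields with the compiler's natural alignment, as passed
       to external code on most 64-bit platforms. */
    typedef struct {
        uint8_t flag;          /* followed by 7 padding bytes */
        double  value;
    } NativeRecord;            /* sizeof(NativeRecord) == 16 */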
-
Well I"m not sure about multithreading issues with respect to .Net but .Net is more flexible for sure than ActiveX, which was build on OLE with all its pre-Win32 legacy issues. So it does seem reasonable that the .Net component is by default instantiated and run as a separate thread and therefore satisfies the Apartment threading requirements of the ActiveX component, without being locked into the LabVIEW UI thread. How much the .Net component would have to do, to actually make sure it is working like that, I would not know. It is entirely possible that the .Net component has to manage the entire threading explicitly for the ActiveX component.Since the ActiveX component doesn't really count as fully managed service in terms of .Net there will also have to be some explicit managed-unmanaged translation code in the .Net component. Fortunately if you use tlbimp/aximp to create the ActiveX wrapper you should be safe from having to bother about these issues.
-
I'm almost 100% sure it is not documented. And from the looks of it, it won't really help you here. The function linked to by the xnode is the only useful exported function in there. If the xnode doesn't know how to deal with variants in a way that makes the xnode interface work with that library function, the library function itself most likely doesn't know either. But can you explain what you are trying to do? Do you really need the runtime type flexibility of a Variant, or do you just want adaptable code that gets the right datatype at edit time? If the latter, some polymorphic VIs, and possibly the ILVDataInterface in combination with Adapt To Type as the Call Library Node parameter type, might be enough. Just be aware that ILVDataInterface is only available since LabVIEW 2009.
-
Well, that made me think! It might not be the LvVariant that is refcounted but the data inside. Take a look at ILVDataInterface.h. I think all the diagram data is probably wrapped in that and refcounted, and the LvVariant is then a thin wrapper around it, to manage polymorphic data. The ILVData interfaces in there inherit from IUnknown, most likely the same as the COM IUnknown interface, even though NI seems to have gone down the path of creating an nidl tool that replaces the midl compiler from MS in order to make it work on non-MS platforms. Quite an undertaking, but I would certainly have started with the source code from Wine, which made a widl tool! While Wine is LGPL, as long as it is only for in-house use (and I doubt NI ever planned to release that toolchain) this would basically still be fine.
-
We need a new password cracker :-(
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
showing off, eh?
-
Well, refcounting is for ease of resource management. You simply follow some well-defined rules about when to call AddRef() and Release(), and the last Release() automatically deallocates the object. It controls the lifetime of the object, not of the data in the object. Flattening a Variant basically AddRef()s the object, retrieves the data and stores it in a flattened stream, and then calls Release() on the object. If nobody else has AddRef()ed the object, it will now be deallocated. Unflattening does a new() for a Variant object, which creates the object with one refcount, then AddRef()s it, unflattens and stores the unflattened data in the object, and then calls Release() again. This works perfectly for COM, so why not for LabVIEW Variants?

The master pointer is an old concept taken from the Mac OS 7 memory manager. Basically, the memory manager maintained an array of master pointers from which handles got allocated. The entry in the master pointer array was the first pointer in the handle, which pointed to the actual memory area. This allowed the memory manager to maintain a preallocated pool of master pointers and swap allocated memory areas between an "allocated" master pointer pool and a "previously allocated" pool when a handle was freed, in order to potentially save some memory allocations. But it also meant that the pool could be exhausted, and then you had to call a special function, MoreMasters() or similar, to extend the pool. The current handle implementation in LabVIEW still inherits some characteristics from this, but as far as I know a handle is no longer allocated from a master pointer array; instead it is just a pointer to a pointer to a memory area. The memory area does actually have a hidden header in front of the handle that stores information about the size of the handle, some flags, and, at least in some builds of LabVIEW, a back pointer to the pointer that points to the memory area. But most of this information is no longer actively used (except potentially in special internal debug builds, to more easily track down memory corruptions).
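The flatten lifetime rule described above, expressed as a sketch against the documented COM IUnknown interface (C++; the LabVIEW-internal equivalents are undocumented, so this only illustrates the discipline, not LabVIEW's actual code):

    /* Flatten: pin the object while reading its data, then release it. */
    void FlattenSketch(IUnknown *obj)
    {
        obj->AddRef();            /* object cannot go away while we read */
        /* ... retrieve data and write it to the flattened stream ... */
        obj->Release();           /* if this was the last reference, the
                                     object is deallocated here */
    }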
-
I don't think so. Handles do not really maintain a refcount in themselves. And I doubt an LvVariant is a real handle, although it seems to be a pointer to a pointer (or more precisely, a pointer to a C++ class). I would suspect the LvVariant implements some COM-like refcounting, but I would not know how to get the refcount properly increased and decreased.
-
That is most likely not a real fix but just an avoidance of a symptom. It looks like LabVIEW actually maintains some form of reference count on Variants. Copying the pointer alone is likely not enough to keep the variant alive; somehow the reference count has to be incremented too (and properly decremented afterwards again, to avoid lingering Variants).
-
And why do you think a variant is 8 bytes long? I don't know how long it is, but it is either a pointer or a more complex structure whose size you cannot easily determine. In the pointer case, which I tend to believe it is, it would be either 4 or 8 bytes long depending on whether you run this on 32-bit or 64-bit. The pointer theory is further supported by the lonely definition of the LvVariant typedef in the extcode.h file. It is certainly an object, so it will hold a virtual dispatch table (a pointer, yes) and the actual data itself in whatever format LabVIEW likes. Most likely they chose an approach similar to the Windows VARIANT layout, with a variable telling the type of the data and the actual data either as a pointer for non-scalar data or directly embedded for scalars. If you run your VI in LabVIEW 32-bit, the second MoveBlock will overwrite memory beyond the variant pointer and therefore destroy something. Please note also that LabVIEW in fact knows two variants. One is the native LvVariant and the other is the Windows VARIANT. They look the same on the diagram, and LabVIEW will happily coerce one to the other, but in memory they are different. And while you can configure a CLN parameter to be a Windows VARIANT, this is obviously only supported on Windows. I still wish they would document the LvVariant API that is exported by LabVIEW.exe.
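For comparison, the documented Windows VARIANT follows exactly that tag-plus-union scheme; heavily simplified from its declaration in oaidl.h, it looks like:

    typedef struct tagVARIANT {
        VARTYPE vt;                 /* type tag, e.g. VT_I4, VT_BSTR */
        WORD    wReserved1, wReserved2, wReserved3;
        union {
            LONG   lVal;            /* scalars embedded directly */
            double dblVal;
            BSTR   bstrVal;         /* non-scalar data as a pointer */
            /* ... many more members ... */
        };
    } VARIANT;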
-
And that is not necessarily LabVIEW's fault. Most ActiveX controls/servers are not designed to be multithreading safe, declaring themselves to be apartment threaded, meaning any specific caller always has to call them from the same thread. LabVIEW, being itself highly dynamically threaded, can only guarantee this by executing them from the only special thread it knows about, and that is the UI thread. Theoretically LabVIEW could of course extend its already very complex threading model by providing a dynamic number of fixed threads (nice contradiction in here), one for each apartment-threaded ActiveX control the user wants to run, but that would make the whole threading in LabVIEW even more complex and more likely to fail under heavy load, so it would buy little in the end. The best solution would be for the ActiveX component developer to provide a free-threading-capable component and register it as such in Windows. Then LabVIEW could call it from any of its threads and would not have to push it into the UI thread. But as already pointed out, free-threading-capable ActiveX components are a very rare species, since they are not exactly trivial to develop, and apartment threading works well enough in the majority of use cases.
-
calling Labview DLL from VB.NET
Rolf Kalbermatter replied to Alex723's topic in Calling External Code
Attachment to post on the NI site, as the attachment upload seems borked there. VB VISA.zip
-
labpython Problem with labpython and numpy
Rolf Kalbermatter replied to Gombo's topic in OpenG General Discussions
The NIPrivatePtr() macro definitely is a LabVIEW 2012 invention. It didn't exist before. From LabVIEW 8.5 until 2011 it was called LV_PRIVATE_POINTER(), and before that PrivateP(). So as you can see, the bad guys here are the NI developers, devilishly changing macro definitions over the course of decades. The change from PrivateP() to LV_PRIVATE_POINTER() is most likely due to supporting a development environment (like maybe a new MacOSX SDK) that introduced exactly this macro for its own use, so NI had to change its own macro to prevent a name collision. Forcing parties like Apple or Microsoft to change their macro definitions because of having longer rights to the name is not an option. The last change looks a bit bogus. Most likely the macro in 8.5 was added by a Microsoft-inspired, Hungarian-notation-loving guy or gal in the LabVIEW team, while someone in 2012 decided that this notation is evil and changed it to the more Apple-like upper/lowercase notation that is used throughout the LabVIEW source in most places (since LabVIEW was originally written on the Mac and only later ported to other platforms). Changing LabPython such that it compiles with all the different LabVIEW cintools header versions is a nice exercise in preprocessor macro magic, and one I honestly do not feel any inclination to do at this moment. If (and the emphasis is on IF) I ever need to recompile LabPython for any reason, I would still need the old definitions, since I generally use cintools headers that are based mostly on the LabVIEW 7.1 definitions, with a few minor edits to support 64-bit compilation.
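The "preprocessor macro magic" alluded to above would boil down to a small compatibility shim of this kind (a sketch; LP_PRIVATE_PTR is a hypothetical local name, and it assumes each cintools generation defines exactly one of the three macros with a shared signature):

    /* Map the three macro generations onto one local name. */
    #if defined(NIPrivatePtr)              /* LabVIEW 2012 and later */
      #define LP_PRIVATE_PTR(t) NIPrivatePtr(t)
    #elif defined(LV_PRIVATE_POINTER)      /* LabVIEW 8.5 - 2011 */
      #define LP_PRIVATE_PTR(t) LV_PRIVATE_POINTER(t)
    #elif defined(PrivateP)                /* older cintools headers */
      #define LP_PRIVATE_PTR(t) PrivateP(t)
    #else
      #error "No known private-pointer macro found in these cintools headers"
    #endif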