Everything posted by Rolf Kalbermatter
-
That change was made to the VI in Subversion on April 10, 2011. I'm not sure when the latest release of the ZLIB library was created by JGCode; it might have been just before that. On April 11, 2011 an additional change was made to also support Pharlap and VxWorks targets, and on July 17, 2012 another change was made to support Windows 64-bit (which is still irrelevant, as the DLL has not yet been released for 64-bit). I have a lot of code changes on my local system, mostly to support 64-bit but also some fixes for the string encoding problem, but it is all in an unfinished state and I hesitate to commit it to the repository, as anyone trying to build a new distribution library from it would currently end up with a somewhat broken library. I'm also not exactly sure about the current procedure for creating a new library release, nor about the icon palette changes mentioned in the last release made by JGCode; I didn't see any commits of those changes in the repository. Otherwise I might have created a new release myself with the current code.
-
While this is simple, it is a brute force approach. Users likely will not appreciate that their user interface suddenly uses the "wrong" decimal point, since this setting changes the decimal sign for everything in the LabVIEW app. The better approach is to think about localization: make the code use explicit formats where necessary and leave the default where things are presented to users. For the legacy string functions you have the mentioned boolean input; for Scan From String and Format Into String you have the %.; or %,; prefix to the format string, which tells the function to use an explicit decimal sign. Basically, anything that goes to the UI would use the system default (and no prefix in the format string), while anything that communicates with a device or similar would likely be done with the %.; prefix. This way the user gets to see decimal numbers in whatever way they are used to, and the communication with GPIB, TCP/IP and whatever other devices will work irrespective of the local country settings.
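As a rough illustration of the same split in C terms (this analogy is my own and not from the original post; LabVIEW's %.; prefix plays the role that forcing the "C" locale plays below):

```c
/* Minimal sketch: locale-dependent formatting for the UI, fixed formatting for devices. */
#include <locale.h>
#include <stdio.h>

int main(void)
{
    double value = 3.14;

    /* UI output: adopt the user's regional settings, so the decimal sign may be ','. */
    setlocale(LC_NUMERIC, "");
    printf("Shown to the user: %.2f\n", value);

    /* Device/protocol output: force the "C" locale so the decimal sign is always '.',
       comparable to prefixing a LabVIEW format string with %.; */
    setlocale(LC_NUMERIC, "C");
    printf("Sent to the instrument: %.2f\n", value);
    return 0;
}
```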
-
Well, imagCloseToolWindow() definitely is an IMAQ (NI Vision) function and as such could never be located in avcodec.dll. It seems like the linker is messing up the import tables somehow when told to optimize the import and/or export tables. It could be because of some wizardry in the NI Vision DLL, but it certainly looks like a bug in the link-stage optimizer of the compiler. Is this Visual C or LabWindows CVI?
-
Why would you call the HTTP Get.vi by reference instead of simply putting it inside your Active Object? You don't happen to use the same VI ref in each Active Object, do you? That would basically serialize the calls, since a VI ref is exactly one instance of the VI with one data space. Simply dropping the VI into your Active Object VI will properly allocate a new instance of the reentrant VI every time you drop it on a diagram somewhere. Since the VI is set to be reentrant, I would strongly assume that whatever Call Library Node is inside is set to execute in any thread too. There might be some arbitration underneath in the DLL that could serialize the calls somewhat, but I would first try to get rid of any VI Server invocation before speculating too deeply about non-reentrancy of otherwise properly configured reentrant VIs. Or go with websockets like Shaun suggested.
-
header and dll don't reconize fonction
Rolf Kalbermatter replied to noir_desir's topic in Calling External Code
You should only use _WIN64 if you intend to use the resulting VI library in LabVIEW for Windows 64-bit. And for the datatypes that are size_t (the HUDAQHANDLE), you should afterwards go through the entire VI library and manually change all Call Library Node definitions of them to pointer-sized integer. Since the developer of the DLL decided to change the calling convention of the functions between the 32-bit and 64-bit DLL, you have to be careful to import the header file for each platform separately (with and without the _WIN64 declaration), keep those VI libraries separate, and use whichever one matches your LabVIEW version. It does not matter what bitness your OS has, only the bitness of the LabVIEW installation you use. So if you use LabVIEW 32-bit, even on Windows 64-bit, you have to leave the _WIN64 keyword out of the definitions. Going strictly with size_t=unsigned integer will cause the handle to be defined as a 32-bit integer and therefore get truncated to 32 bits in LabVIEW for 64-bit systems (and obviously corrupt the handle that way). Setting it strictly to a 64-bit integer, however, will pass two 32-bit values on the stack and therefore misalign the remaining parameters on the stack. The two functions you name are NOT declared in the header file and therefore cannot be imported by the Import Library Wizard. As for the errors, the parser apparently gets a hiccup on the nested enum declarations. You will have to create that as a ring control manually.
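To show why the handle needs a pointer-sized integer, here is an illustrative sketch of the header pattern being described (the names and calling conventions shown are my assumptions for illustration, not the actual HUDAQ declarations):

```c
/* Illustrative only -- not a verbatim copy of the actual header */
#ifdef _WIN64
  typedef unsigned __int64 HUDAQHANDLE;   /* size_t is 8 bytes: pointer-sized integer in the CLN */
  #define HUDAQAPI                        /* hypothetical: the 64-bit build drops __stdcall */
#else
  typedef unsigned int HUDAQHANDLE;       /* size_t is 4 bytes */
  #define HUDAQAPI __stdcall              /* hypothetical: the 32-bit build uses stdcall */
#endif

/* Hypothetical function: configuring the handle as a plain 32-bit integer truncates it
   in 64-bit LabVIEW, while configuring it as a fixed 64-bit integer misaligns the
   remaining parameters on the stack in 32-bit LabVIEW. */
HUDAQHANDLE HUDAQAPI ExampleOpenDevice(const char *deviceName);
```
-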
LabVIEW's HTTPS support most likely uses OpenSSL for the SSL implementation. OpenSSL comes with its own list of root CAs and AFAIK does not try to access any platform-specific CA stores. As such, the only options for self-signed server certificates are either to skip the verification of the server certificate or to try to import the self-signed certificate into the session. I think the SSL.vi or Config SSL.vi should allow you to do that.
-
header and dll don't reconize fonction
Rolf Kalbermatter replied to noir_desir's topic in Calling External Code
You don't need any of those defines except _WIN32 (and possibly _WIN64), and case is important. But you also need to define size_t. This is not a standard C datatype but one typically defined in one of the standard headers (stddef.h) of the compiler. The exact definition is in fact compiler dependent, but for Visual C, size_t=unsigned __int64 for 64-bit and size_t=unsigned int for 32-bit should do it. I'm not sure though whether the import library wizard would recognize the Visual C specific type "__int64", nor whether the more official "long long" would be recognized, since I don't normally use the import library wizard at all. -
LabVIEW's "hidden" decoration styles
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Scripting wasn't taboo, just hidden. While one could argue that adding an undocumented INI key to a configuration file might already be a violation according to the DMCA, breaking a password protection that is not super trivial to guess definitely is, and could be taken to court. Heck, as I understand the DMCA, doing anything the software vendor has simply said "Don't do it!" about is already a violation. And NI didn't make the DMCA, so it is indeed not them who make the law. Nevertheless the law is there for them to use; only, in the case of the VI password website it does not fall under the DMCA, although Germany also has some laws that go in that direction. I have a feeling that the only reason NI hasn't really done too much about this so far is that they didn't think forcing a lawsuit after the fact would do much good, both in terms of damage control and in terms of being the big bad guy going after a poor user. I'm sure they would have much less hesitation to go after a corporation trying to use this in any way, and most likely have done so in the past already. -
Losing Track of a Paused Subvi !
Rolf Kalbermatter replied to curiouspuya's topic in LabVIEW General
I'm not entirely sure either, but I think your VI more or less captures the issue, although there might need to be some more work. Basically you can close the front panel of a subVI while it is in paused mode, and that can happen easily when you have a number of panels open. I myself also regularly tend to look for windows that I had just worked on and more or less inadvertently closed. If the VI was paused it will stay paused, usually preventing at least parts of the rest of the program from continuing. So I then have to go and dig for the VI that is paused to make it resume. This is no different from multithreaded programming in other languages. If a thread is blocked in single-step mode, the rest of the program can usually still continue (not all debuggers support that though, as they lack a good way to get an overview of all current threads), and that can have various effects, such as stalling other threads that wait for events and data from the blocked thread, or just pausing whatever the thread in question should do. Without a central overview of all threads and their paused or active state you end up in a complete nightmare. The only thing about your VI that I'm not sure about is whether setting the Frontmost property alone is always enough. I could imagine situations where it may be necessary to do additional work, like first opening the front panel or unlocking something else.
-
Actually that is not entirely true. If you have an iterable object in Java or .Net you can write somewhat more terse code. Java uses for (<type> obj : objList) and .Net has a special keyword: foreach (<type> obj in objList).
-
Not GDI+ but GDI. And it's not 10 years later but 20 years later. But you have to consider the implications. Anything 2D based in LabVIEW is done using an internal 2D API that maps to whatever drawing primitive API is available on the platform. For MacOS that was Quickdraw (and probably still is, as rewriting the code to interface to Quartz (CoreGraphics) is anything but a trivial job), for Windows that was and is GDI, and for Unix systems that is X Windows. All of them are the basic drawing APIs that most applications still use nowadays for UI drawing, unless they only use standard system widgets and never draw anything themselves.

OpenGL is almost only used in applications with 3D output rendering, and GDI+ is used by most applications only indirectly, since the .Net UI framework builds on it. There are very few applications that actually make direct use of any of the features GDI+ offers. And DirectX is an API that is almost exclusively used by game engines, if they don't want multiplatform support; otherwise they would likely go to OpenGL instead. For most anything in most applications, the basic GDI, Quickdraw, Quartz and X Windows systems are actually more than fast enough, and LabVIEW is no real exception there. The only areas that would possibly benefit somewhat from a faster API are graphs (although I would not expect a huge speed improvement there) and the 2D Picture Control.

But you always have to weigh the effort against the benefit, as well as the possible fallout from incompatibilities. Rewriting the 2D graphic primitive system to take advantage of newer technologies just for the Picture Control would be way too costly; changing it for everything 2D related would not bring much benefit speed wise, but would be a resource-intensive project and would likely cause all kinds of problems, in terms of many introduced bugs but also more subtle issues such as small, and sometimes quite visible, differences in the visual appearance of the entire UI rendering. In short, a lot of work for little visible benefit, with a likely chance of getting criticism for pixel-to-pixel inaccuracies in comparison to the previous implementation. Rewriting a perfectly working subsystem in an existing app is always a very risky business, and LabVIEW is no exception there. The Picture Control, which is the only component in LabVIEW that would probably gain much from this, is not a core component, as it is only used by a few.

Now there might be a rewrite at some point, especially when retina-type displays become the new standard and it gets more and more important to have subpixel rendering inside the app. But LabVIEW isn't a prime candidate for that, and the LabVIEW developers won't take on such a project just for fun and to be able to brag about it! Also, retina-type support under Windows isn't really ready yet and is probably quite some time away on Unix.
-
Strictly speaking, GPL indeed could be a problem, as many think that the linking clause in the GPL license in fact applies to dynamic linking as well, which the ODBC manager has to do no matter what if it wants to use any ODBC driver. This is exactly the reason the LGPL was invented: it maintains all the protection of the GPL on the library code but allows linking the library with other non-(L)GPL code without breaking the license. Again, some feel that even the LGPL only really allows dynamic linking and that static linking is not really proper; I personally tend to agree with this, mostly to be on the safe side. However, considering that myODBC is a shared library and the non-GPL ODBC manager in Windows will have to load it and link to it dynamically in ALL cases, the question of course arises what use the myODBC driver would have on Windows if the GPL license did not allow it to be loaded and dynamically linked to by non-GPL software. So either the GPL license has to be interpreted, at least in this case, as intending to allow dynamic linking, or the choice of the GPL license instead of the LGPL by the myODBC developers is simply stupid. Unfortunately I can't find any specifics about the license of the Connector/ODBC component, just that MySQL itself is GPL, which indeed would make one assume that Connector/ODBC falls under the same license. In any case the LabVIEW Database Toolkit has no direct relation to the myODBC driver, as at least the Windows ODBC manager and the Windows ODBC-to-ADO bridge sit in between. So if loading and using the driver in any Windows ODBC based application is considered alright, then using it with the Database Toolkit has to be alright too.
-
Actually most vector based formats are built somewhat like the LabVIEW Picture Control stream. I'm not sure about SVG, but WMF/EMF and the Macintosh PICT format are like that. Most likely the Picture Control's performance problems come partly from building the picture stream with all those Append String nodes, which performs rather badly, and partly from the rendering, because it is built on LabVIEW-internal routines that then map more or (more likely) less directly to the platform API. There is also the fact that under Windows it likely maps to good old GDI instructions rather than a better-performing interface like GDI+ or OpenGL. But GDI+ and OpenGL were not an option in LabVIEW 2.5 when this was invented, and porting it later to use these new APIs would likely cause many compatibility issues that could break existing applications. The 3D Picture Control (the built-in one, not the ActiveX control) should be much better in terms of performance, although it is not ideal for 2D vector based images but specifically meant for 3D visualization.
-
Actually many Toolkits work just fine under the Mac. It's mostly the installers that support only Windows. Installing the Toolkit on a Windows system and copying it over to a Mac works for many of them, as long as they are all VI based and don't contain external code in the form of DLLs.
-
The C runtime is mostly backwards compatible. There have been hiccups with both the MS VC C runtime and the GCC clib in the past. MS "solved" the problem by forcing the MS C runtime to be an SxS dependency, which will load whatever version of the C runtime library was used to compile the executable module (DLL or EXE), and in that way created a huge new problem. If a module was compiled with a different VC version, it will load a different C runtime version into the process, and havoc begins if you start passing any C runtime objects between those modules. This includes heap pointers, which cannot be freed in a different C runtime library than the one they were created in, and shows that even when using just the C runtime, you have to make sure to allocate and destroy memory objects always from the same C runtime scope. But more fundamental things like file descriptors are a problem too: basically anything that has a file descriptor in the function interface will completely fail if made to operate on objects that were created in a different C runtime library. Exception handling is another area that changes significantly with every Visual C version and can have nasty effects if it gets mixed. This is all rather painful and also seems mostly unnecessary when you consider that almost everything in the MS C runtime library ultimately maps to WinAPI calls at some point. For themselves, MS avoided this problem by making all of their Windows tools link to the original msvcrt.dll, but they declared it private after Visual C 6. The only way to make use of msvcrt.dll instead of msvcrtxx.dll is to use either Visual C 6 or the older WinDDK compiler toolchains.
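The usual way to stay out of trouble is to make sure whatever module allocates a C runtime object is also the one that frees it. A minimal sketch of that pattern (my own illustration, not code from this thread):

```c
/* Keep allocation and deallocation inside the same module, i.e. the same CRT heap. */
#include <stdlib.h>

__declspec(dllexport) char *lib_create_buffer(size_t size)
{
    return (char *)malloc(size);   /* allocated on this DLL's CRT heap */
}

__declspec(dllexport) void lib_free_buffer(char *buf)
{
    free(buf);                     /* freed by the same CRT that allocated it */
}
```

A caller then never calls its own free() on a pointer obtained from lib_create_buffer(), but always hands it back to lib_free_buffer().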
-
Thanks for clarifying, Greg. I was pretty sure this was the case, but started to wonder after Shaun's reply. Other than that I do fully agree with Shaun. DLLs are not evil, but they are more complicated in terms of maintenance and distribution, since you need one DLL/shared library for every potential target OS, and if the DLL is a LabVIEW DLL it gets even a little more involved. For that reason distributing LabVIEW-created DLLs to LabVIEW users is mostly a pain in the ass and will likely annoy the LabVIEW user too, as they can't look into the driver and debug it if the need should arise. Distributing a LabVIEW DLL driver for non-LabVIEW users is a possibility, although the fact that one needs to have the correct LabVIEW runtime installed is probably going to cause some sputtering from some C/C++/whatever users. Hmm, could you clarify a bit what you were trying to do there? It's not easy to guess what you are trying to demonstrate from this message box, and that makes it very hard to come up with ideas to explain the behavior you see. To me this looks like a LabVIEW 2012 generated shared library that you are trying to load on a computer that does not have the 2012 runtime engine installed.
-
I also read some criticism of OO. Not that it says functional programming is better than OO programming. Neither is better than the other in general AFAIC, but many OO fanatics tend to pull the OO sword for everything, even if a simple functional approach would be much easier and quicker. OO has its merits, but making functional programming more or less impossible, as Java actually does, is simply taking the idea over the top. And those are not the only issues I have with Java, but I haven't thrown it away yet.
-
There is no free ride! A DLL/shared library is always platform specific, and that means CPU architecture, OS and bitness. All three have to match for the shared library to even be loadable. That is why distributing a LabVIEW-written driver as a shared library is probably one of the worse ideas one can have. You get the same effect as when distributing VIs without diagrams, because that is basically what is inside the shared library. And no, unfortunately you can't leave the diagrams intact inside the DLL and hope that it will still work when loaded into a different version of LabVIEW, or when the bitness or OS doesn't match. The DLL still executes in the context of the runtime engine, which has no compiler or even the possibility to load the diagram into memory.

The most user friendly approach is to distribute the instrument driver as LabVIEW source (I personally consider instrument drivers distributed as DLL/shared library at most a compromise, and loathe it) and create a shared library from it for the non-LabVIEW users, worrying about OS, bitness and such as requests come in. There won't be any way around creating special versions of your test program that access the DLL instead of the native driver for testing the shared library version. The upside of this is that debugging any driver related issues during testing is MUCH easier when you leave everything as diagrams, and only check after the final build that it also works as a DLL. Fortunately the only one that cannot be created by LabVIEW is the VxWorks shared library!

But I really echo Shaun's comments. If you have any chance to avoid the shared library for your LabVIEW users, you save yourself a lot of pain and sweat and make your LabVIEW users much happier too. Building multiple shared libraries after every modification of your LabVIEW code is no fun at all. And LabVIEW only creates shared libraries for the platform it is running on, so you need as many (virtual) OS/LabVIEW installations as platforms you want to support, and each and every one of them has to be tested after each build as well.
-
In the scenario of this thread it's not an option. But for a C DLL to be called by a LabVIEW program, it is useful for anyone who gets to use my DLL in LabVIEW, ideally with an accompanying LabVIEW VI library! Also, the pointer variant, while indeed a possible option, is in LabVIEW in fact seldom significantly faster and quite often slower. If there is any chance for the caller to know the size of the buffer beforehand (maybe by calling a second API, or because what data needs to be returned is defined anyhow: data acquisition for instance), the use of caller-allocated buffers passed as C pointers into the function is at least as fast or faster, since the DLL can copy the data directly into the buffer. With the DLL-allocated buffer you end up in most cases with a double data copy: once in the DLL when it allocates the buffer and copies its data into it, and once with the MoveBlock() in the caller. So claiming that it is always faster is not correct. At least inside LabVIEW it is usually about the same speed, only with the data copy happening in one case inside the DLL and in the other in the caller. Only when the DLL can determine the buffer size solely during the actual retrieval itself can it be an advantage to use DLL-allocated buffers, as that avoids the problem of potentially having to allocate a hugely over-sized buffer.

If the potential user of the DLL is a C program then this is different. In that case returning DLL-allocated buffers is indeed usually faster, as you do not need the extra MoveBlock()/memcpy() call afterwards. But it is in any case a disadvantage that the API gets complicated to a level that stretches the knowledge limits of many potential DLL users, and not just LabVIEW Call Library Node users, as it is non-standard and also invites bugs with respect to resource management, because it is unclear who eventually needs to deallocate the buffers. The returned pointer could also be a statically allocated buffer inside the DLL (often the case for strings) that would be fatal to try to free(). And another issue is that your library absolutely needs to provide a matching dispose() method, as the free() function the caller might be linking to might operate on a different heap than the malloc() function the DLL used. The only real speed improvement is when the data producing entity can directly create the managed buffers the final caller will eventually use. But C pointers don't count as such in LabVIEW, since you have to do the MoveBlock() trick eventually.

One more comment with respect to the calling convention. If you ever intend to create the shared library for non-Windows platforms too, the C calling convention is really the only option. If you start out with stdcall now and eventually decide to create a Linux or MacOS version of your shared library, you would have to either distribute different VI libraries for Windows and non-Windows platforms, or bite the bullet and change the entire VI library to the C calling convention for all users, likely introducing lots of crash problems for users who find it normal to grab a DLL copy from somewhere and copy it into the system or application directory to "fix" all kinds of real and imagined problems. At least there are tons of questionable links about DLL downloads to fix and improve the system in the top google hits whenever I google for a particular DLL name. That, and so-called system performance scanners that offer to scan my system for DLL problems!
I've never tried them, but I would suspect 99% of them do nothing really useful, either containing viruses and trojans or trying to scare the user into downloading the "improved" program that can also fix the many "found" issues, of course for an obolus in the form of hard currency.
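Coming back to the caller-allocated buffer pattern discussed above, here is a minimal sketch of the size-query variant (hypothetical function name and data, my own illustration):

```c
/* Caller-allocated buffer: pass NULL to ask for the required size, then call again. */
#include <stdint.h>
#include <string.h>

int32_t GetDeviceData(uint8_t *buf, int32_t *size)
{
    static const uint8_t data[] = {1, 2, 3, 4, 5};   /* stand-in for acquired data */

    if (buf == NULL) {                 /* size query */
        *size = (int32_t)sizeof(data);
        return 0;
    }
    int32_t n = (*size < (int32_t)sizeof(data)) ? *size : (int32_t)sizeof(data);
    memcpy(buf, data, (size_t)n);      /* DLL copies directly into the caller's buffer */
    *size = n;
    return 0;
}
```

In LabVIEW the first call returns the required size, the caller initializes an array of that size, and the second call fills it; no MoveBlock() and no extra data copy is needed.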
-
Actually, using fully managed mode is even faster, as it will often avoid the additional memory copy involved with the MoveBlock() call. But at least in C and C++ it's only an option if you control both the caller and the callee, or, in the case of a LabVIEW caller, know exactly how to use the LabVIEW memory manager functions in the shared library.
-
Unless your string can have embedded NULL bytes that should not terminate it, there should be no need to pass string parameters as byte arrays. In fact, when you configure a CLN parameter to be a C string pointer, LabVIEW will on return explicitly scan the string for a NULL byte (unless you configured it to be constant) and terminate it there. This is usually highly desirable for true strings. If the buffer is binary data that can contain 0 bytes, however, you should indeed pass it as a byte array pointer to avoid having LabVIEW scan it for a NULL character on return.
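A minimal sketch of the two cases (my own example, not code from the thread):

```c
/* Text vs. binary buffer parameters as discussed above. */
#include <string.h>

/* A NULL-terminated text string: configure the CLN parameter as a C string pointer. */
void GetLabel(char *label, int maxLen)
{
    strncpy(label, "OK", (size_t)maxLen);
}

/* Binary data that may legitimately contain 0 bytes: configure the parameter as an
   array data pointer so LabVIEW does not truncate it at the first 0 byte on return. */
void GetRawPacket(unsigned char *buf, int len)
{
    for (int i = 0; i < len; i++)
        buf[i] = (unsigned char)(i % 3);   /* every third byte is 0 */
}
```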
-
No, you can't avoid having the strings and arrays preallocated by the caller if you want to make everything work the way you imagine. There is no way the LabVIEW DLL can allocate C string or array pointers and return them to the caller without limiting the caller to only use very specific deallocator functions provided by your library too, or to never link with a different C runtime library than the one you used (and that doesn't just mean a specific type of compiler, but even a specific C runtime version, down to the last version digit when using side-by-side (SxS) libraries, which all Visual C versions since 2005 do). This is the old problem of managed versus unmanaged code. C is normally completely unmanaged! There exists no universally accepted convention for C that would allow allocating memory in one place and deallocating it in another without knowing exactly how it was allocated. This requires full control of the place where it gets allocated as well as where it gets deallocated, and if that is not both in the caller (which is seldom the case and usually also perverts the idea of libraries almost completely), you are stuck.

The only way to avoid having the caller preallocate the arrays (and strings) is to have a very strict contract in both the caller and the callee about how memory is allocated and deallocated (basically, this is one main aspect of what managed code means). This happens for instance in DLLs that are specifically written to handle LabVIEW native datatypes, so LabVIEW as a caller does not have to preallocate buffers to unknown sizes and the DLL can allocate and/or resize them as needed and pass them back to LabVIEW. In this case the contract is that any variable sized buffer is allocated and deallocated exclusively by LabVIEW memory manager functions. This works as long as you make sure there is only one LabVIEW kernel mapped into the process that does this. I'm not entirely sure how they solved that, but there must be a lot of trickery when loading a LabVIEW DLL created in one version of LabVIEW into a different version of LabVIEW to make sure buffers are allocated by the same memory manager when using native datatypes.

But forcing your C users to use LabVIEW manager functions so you can pass LabVIEW native datatypes as parameters is not an option either, since there is no officially sanctioned way to call into the runtime system used by the LabVIEW DLL from a non-LabVIEW process. Also, your C programmers would likely spew poison and worse if you told them they have to call such and such functions in exactly such a way to prepare and later deallocate the needed buffers, using some (to them) obscure memory manager API. This is not even so much bad intention by NI and the LabVIEW developers, but simply how programming works. The only universally safe way of calling functions with buffers is to both allocate and deallocate them in the caller. Anything else requires a very strict regime about the memory manager calls to use, which can work if it is designed into the programming framework from scratch (C# for instance), but C and C++ existed long before any programming framework cared about such things. Many programmers have attempted to add something like that to C and C++ later, but each came up with a different interface, and each of them will always remain an isolated solution not accepted by the huge majority of other C and C++ users.
Basically if you want to go the path you described you will have to bite the sour apple and use C pointers for arrays and strings, and require the caller to preallocate those buffers properly.
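For completeness, a minimal sketch of what such a "LabVIEW native datatype" function typically looks like, using the standard LabVIEW manager functions from extcode.h (the example itself is mine, not code from this thread; the Call Library Node parameter would be configured as a LabVIEW string handle):

```c
#include <string.h>
#include "extcode.h"   /* ships with LabVIEW in the cintools directory */

/* Let LabVIEW manage the buffer: resize the string handle and copy the data into it. */
MgErr FillLVString(LStrHandle strH)
{
    const char *msg = "hello from the DLL";
    int32 len = (int32)strlen(msg);

    /* NumericArrayResize() grows or shrinks the LabVIEW-managed handle as needed. */
    MgErr err = NumericArrayResize(uB, 1, (UHandle *)&strH, len);
    if (err == mgNoErr) {
        MoveBlock(msg, LStrBuf(*strH), len);   /* copy the bytes into the handle */
        LStrLen(*strH) = len;                  /* and set the string length */
    }
    return err;
}
```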
-
You would like to receive absolution to use XNodes, despite all the well known comments out there. The only people who could really give it most likely won't, as they are not allowed to, and the rest of us can't, other than the few who have tried it. Working on them seems a rather crash-intensive affair; using them seems a bit safer, but the mileage may vary greatly depending on LabVIEW version, OS and whatever else, including the position of the moon. What I can safely say is that there is absolutely no guarantee about how XNodes will behave in future versions of LabVIEW. They may be improved, left to code rot that will cause more crashes in newer versions, or eventually discontinued entirely and removed from future LabVIEW releases. As such I would consider it a totally irresponsible decision to use them for anything but private experiments.
-
So I've been fighting a bit with this over the weekend and came across a multitude of issues. The first one is that most ZIP utilities, at least on Windows, seem to use the OEM codepage to store text in the ZIP archive, whereas LabVIEW, as a true GUI application, of course uses the default (ANSI) codepage. Both are set depending on the language setting in the International Settings control panel, but they are usually totally different codepages, with similar character glyphs at entirely different code positions. In addition, ZIP files have a feature to store the file name as a UTF-8 string in the archive directory. So far so good. Implementing the correct translation from the LabVIEW ANSI codepage to the OEM codepage and back is fairly trivial on Windows, a bit more complicated and only possible with limited accuracy on MacOSX, since the Mac traditionally uses somewhat different character translation tables than Windows. On Linux it is a complete impossibility without linking to external libraries like iconv, which might or might not be available on a particular Linux distribution! So I'm a bit in limbo here about how to go about this, because adding an entire codepage translator to LVZIP for non-Windows targets seems like rather bad overkill.

While investigating this I also found another issue, entirely independent of LVZIP. Suppose you have a file on your disk with a filename that contains characters not present in the current ANSI codepage of your Windows system! There seems to be absolutely no way to access this file from within LabVIEW, since the LabVIEW path internally uses multibyte characters based on the current ANSI codepage, and if a filename contains characters not present in that codepage, the LabVIEW path cannot represent the filename at all. In case you wonder why such filenames could even exist: unless you use an old FAT file format on your Windows system, the filenames are really stored as UTF-16 in the filesystem, and Windows Explorer is fully Unicode compliant, so those files can happily exist on the disk and get displayed by Explorer, but not be accessed by LabVIEW. And in case you wonder whether this is an issue on non-Windows systems: on Linux definitely not nowadays, since all modern Linux systems use UTF-8 as encoding and LabVIEW seems to use whatever the default multibyte encoding on the OS is, which would be UTF-8 in those cases. For MacOSX I'm not entirely sure, since there are about umpteen different possible APIs to access the filesystem, depending on whether you go Carbon, Cocoa, POSIX or any mix of them, and each of them has its own particular limits and specialties.

I really wish they had made the Path format use UTF-16 internally on Windows long ago and avoided such problems altogether, possibly translating the path to a multibyte encoding when a path needs to be flattened, in order to keep the flattened format consistent. At least all existing filepaths on the disk would then be valid within an application. As it is now, the flattened path isn't really standardized in any way anyhow, as it is flattened to whatever local multibyte setting the OS is configured for; on Windows that's one of the local codepages, while on Linux and possibly Mac that's UTF-8 nowadays. So passing a Path through VI Server between different LabVIEW installations will already run into problems between different platforms and even between Windows versions using different country locales.

Making it all consistently UTF-8 in the flattened format would not really make this worse but rather improve the situation, with one single drawback: flattened paths stored on Windows systems in older versions of LabVIEW would not automatically be compatible with LabVIEW versions using UTF-8 for flattened paths.

Basically I would like to know two things here:
1) What is the feeling about support for translation of the filename strings on non-Windows systems? Is that important and how much effort is it worth? Consider that support for such translation on embedded targets like VxWorks would only be possible with the addition of a codepage translator to LVZIP.
2) Has anyone run into trying to access filenames containing characters that the current Windows multibyte table does not support, and if so, what solution did you choose?
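As an aside, the Windows side of the ANSI-to-OEM translation mentioned above really is only a few lines (a sketch of my own using the Win32 conversion functions; LVZIP's actual implementation may differ, and CharToOemA() would even do it in a single call):

```c
#include <windows.h>

/* Convert an ANSI (CP_ACP) string to the OEM codepage that many ZIP tools expect. */
int AnsiToOem(const char *ansi, char *oem, int oemSize)
{
    wchar_t wide[MAX_PATH];

    int n = MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, MAX_PATH);
    if (n == 0)
        return 0;   /* conversion failed or the name was too long */
    return WideCharToMultiByte(CP_OEMCP, 0, wide, -1, oem, oemSize, NULL, NULL);
}
```

The hard part, as described above, is doing the same on targets that have no such system service.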
-
Primitives are not stored as entities on disk, but are created directly by code residing in LabVIEW.exe. The LabVIEW menu palettes then hold links to those primitives. Creating primitives not made available through the menu palettes is a function of the Create Object primitive that is part of the scripting palette extension. This node has a ring input that lists all the primitives and controls the LabVIEW executable contains internally.