Everything posted by Rolf Kalbermatter
-
Beckhoff TwinCAT vs LabVIEW + compactRIO compare and contrast
Rolf Kalbermatter replied to MarkCG's topic in LabVIEW General
We did indeed have some FPGA operation in there, including some quadrature encoding and debounce circuitry. The main reason it was done with the NI chassis, however, was that they were already there from an earlier setup and it was considered cheaper to reuse them rather than buy Beckhoff IO terminals. -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
That's a terrible hack. LabVIEW controls are not pointers. They are complex objects whose data elements can be dynamically allocated, resized and deallocated. It may work for a while, but it is utterly susceptible to LabVIEW versions, platforms, and whatnot! The DCO/DDO concept is quite different from the actual memory layout. It's related, but there is no hard rule that says the DCO or DDO layout in memory can't change, and it has changed in the past. CIN support is utterly legacy. There are no tools to create CINs for any of the new platforms released since around LabVIEW 8.2. That includes all the 64-bit platforms as well as the NI Linux RT platforms (both ARM and x64). One very bad aspect of CINs is that the code resource is platform specific and needs to be inside the VI, and there is only space for one code resource per CIN routine per VI. So if you want to support multiple platforms, you have to create a copy of the VI for each platform, put the corresponding code resource in there, and then somehow place the correct VI on the system when installing your library for a particular platform. An utter maintenance nightmare! -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
That makes no sense. But I'm not going to tell you you can't do that. 😀 The control value is completely separate from the data value in the wire. Assuming that they are the same is utterly useless. There might be circumstances where they appear to be the same, but treating them like that would simply cause potential trouble. You can NOT control LabVIEW's memory management at such a level. LabVIEW reserves the right to reschedule code execution and memory reallocations depending on seemingly minimal changes, and this also can and will change between versions. The only thing you can say is that in a particular version, without any change to the diagram, you SHOULD get the same result; anything else is simply not safe to assume. Trying to use this for a (possibly perceived) performance gain is utterly asking for a lot of pain in the long run. So don't do it! -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
My point is that, for retrieving the pointer that a handle is, the two SPrintf() calls are utterly Rube Goldberg. They simply return the pointer that a handle is. If you passed the array as Array Handle, Pass Handle by Value to the first MoveBlock() function, you would achieve the same! Yes, I mean the data parameter. In your diagram you pass the U64 value of the control/indicator to it and declare the parameter as a pointer-sized integer (passed by reference?). But it should be the data type of the control, passed by reference (so U64 passed by reference). For more complex values like clusters and arrays it is probably best to configure this parameter simply as Adapt to Type (pass handles by reference); see the sketch below. CINs always passed data by reference. There was no way to configure this differently. There was also no way to tell LabVIEW whether a parameter was const, other than by whether the output (right) terminal was connected. For the Call Library Node you have a Constant checkbox in the parameter declaration. This is a hint to LabVIEW that the DLL function will NOT modify (stomp on) the data, and LabVIEW is free to schedule the code so that this CLN executes before other functions that may want to modify the data in place. If only one sink of a wire is marked as a stomper (wanting to modify the data), LabVIEW can avoid copying the data passed to the different nodes by making sure to schedule the non-stomping nodes first (even in parallel if possible) before finally executing the one stomper node. Even if there are multiple stomper sinks on a single wire, it will schedule them such that all non-stomping nodes are executed first, if the dataflow allows, and then create n - 1 data copies to pass to the different stomper nodes (with n being the number of nodes that indicate they MIGHT stomp on the data). Yes, this might be overzealous if a node decides not to stomp on the data anyway, but LabVIEW always tries to work by the principle of better safe than sorry in this respect (and has failed sometimes in some cases in the past, but those are bugs the LabVIEW team wants to squash ASAP when it becomes aware of them).
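As a rough illustration only (these are hypothetical function names, NOT the actual prototypes of ReadDCOTransferData/WriteDCOTransferData), this is roughly what the two Call Library Node configurations described above map to on the C side, using the conventions from LabVIEW's extcode.h:

#include "extcode.h"   /* LabVIEW's external code header, ships with LabVIEW (cintools) */

/* Hypothetical examples only, not the real DCO transfer functions. */

/* U64 control: parameter configured as Numeric, Unsigned 64-bit, Pass Pointer to Value */
MgErr ExampleReadScalar(uInt64 *data)
{
    *data = 42;            /* the DLL writes back through the reference */
    return mgNoErr;
}

/* Array control: "Adapt to Type, Handles by Reference" passes a pointer to the
   LabVIEW handle, so the DLL may legitimately resize or replace the handle itself. */
typedef struct {
    int32  dimSize;
    uInt64 elt[1];
} U64Arr, **U64ArrHdl;

MgErr ExampleReadArray(U64ArrHdl *data);   /* see the resize sketch further below */
-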
Beckhoff TwinCAT vs LabVIEW + compactRIO compare and contrast
Rolf Kalbermatter replied to MarkCG's topic in LabVIEW General
I have only done minimal coding in TwinCAT. I've been involved on the side with such a project, where we had to provide support to get a few NI EtherCAT chassis to work in a system with Beckhoff controllers, Bronkhorst flow controllers and several other devices. While the Bronkhorst, Festo and NI IO chassis were on EtherCAT, there were other devices using Modbus and Ethernet communication, and getting those working in TwinCAT turned out to be more complex than initially anticipated. I was happy to sit on the side and watch them eventually get things solved rather than getting my own hands dirty with TwinCAT programming. 😀 -
Beckhoff TwinCAT vs LabVIEW + compactRIO compare and contrast
Rolf Kalbermatter replied to MarkCG's topic in LabVIEW General
If your external hardware is EtherCAT based, then Beckhoff will be quite a bit easier to use. If it is heterogeneous, then IMHO LabVIEW tends to work better, but that is probably also down to my significantly greater experience with LabVIEW and all kinds of weird external hardware compared with Beckhoff TwinCAT. -
Direct read/write of Transfer Buffer possible?..
Rolf Kalbermatter replied to dadreamer's topic in LabVIEW General
That's all nice and pretty until you place an Array Resize node on the array wire to increase the array size. Et voilà, the internal pointer will most likely (though not necessarily always) change, as LabVIEW will have to allocate a new memory area, copy the old content into it and deallocate the original memory. So while this is no real news to people familiar with the LabVIEW memory manager C function interface, it is at best a brittle approach to rely on. When you write a C DLL that receives handles from LabVIEW, the handle is only guaranteed to exist for the duration of the function call; after you return control to the LabVIEW diagram, LabVIEW reserves the right to resize, delete, or reuse that array handle for other things as it sees fit. Your array_test.vi is a very Rube Goldberg solution for something that can be solved with a simple Resize Array node. What you basically do is format the handle value (which is a pointer to the actual memory pointer) into text, then convert that text back into a pointer (handle), then resize it with NumericArrayResize() and finally copy the data into that resized handle. It's equivalent to doing a Resize Array on the original array and then copying into that resized array, although in LabVIEW you wouldn't resize the handle for this but simply create a new one with that data, most easily by branching the wire; if you really want to, you could also use an autoindexing loop to make it a little more Rube Goldberg. I also kind of question the datatype of the third parameter of your ReadDCOTransferData and WriteDCOTransferData. As you have set it, you treat it as a pointer (most likely passed by reference), but it should probably be the datatype of the control, passed by reference (in this specific case the easiest would be to simply configure it as Adapt to Type).
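For reference, here is a minimal sketch (assuming the usual extcode.h conventions; the function name and the uInt64 element type are just examples) of the legitimate way a DLL resizes a handle it received by reference, instead of reconstructing the pointer through text formatting:

#include "extcode.h"

/* Hypothetical 1D array of uInt64 as LabVIEW lays it out in memory */
typedef struct {
    int32  dimSize;
    uInt64 elt[1];
} U64Arr, **U64ArrHdl;

/* Resize the handle LabVIEW passed by reference and copy new data into it.
   The handle is only valid for the duration of this call; never cache the pointer. */
MgErr ExampleFillArray(U64ArrHdl *arr, const uInt64 *src, int32 n)
{
    /* uQ is the extcode.h type code for unsigned 64-bit integers */
    MgErr err = NumericArrayResize(uQ, 1, (UHandle *)arr, n);
    if (err)
        return err;
    MoveBlock(src, (**arr)->elt, n * sizeof(uInt64));
    (**arr)->dimSize = n;
    return mgNoErr;
}
-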
I wouldn't expect real problems nor any specific advantages to doing so for these functions. The Call Library Nodes are already configured to run in any thread, which allows the VI to execute the C function in whatever thread the VI is currently executing. As such it should not give you any serious performance improvements unless you intend to call these functions from many different locations in parallel, and even then only if you do this for pretty large string buffers. For short string buffers the actual execution time of these functions should be so short that the likelihood of one call having to wait for another is pretty small. The obvious disadvantage is that you will have to redo this for every new version of the library. 😀
-
Most likely, when you pass in -1, it does a stat() (or possibly the equivalent of the internal FSGetSize() function) to determine the size of the "file" and reads that many bytes. Since it is a VFS, it can't return a valid size for the "file" (most likely it fills in 0), and LabVIEW concludes that it is an empty file and returns that.
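A quick POSIX illustration of that behaviour (using /proc/cpuinfo as an arbitrary example of a virtual file): stat() reports a size of 0, while reading in chunks until EOF returns the actual content, so a "read as many bytes as the reported size" strategy comes back empty.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("/proc/cpuinfo", &st) == 0)
        printf("st_size = %lld\n", (long long)st.st_size);   /* typically 0 on a VFS */

    /* Reading in chunks until EOF is the robust way to get the real content */
    FILE *f = fopen("/proc/cpuinfo", "rb");
    if (f) {
        char buf[4096];
        size_t n, total = 0;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            total += n;
        printf("actually read %zu bytes\n", total);
        fclose(f);
    }
    return 0;
}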
-
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
Recently? OpenSSL does that all the time. There have been incompatible API changes all along: some between 0.9 and 1.0, a few more serious ones between 1.0 and 1.1, including renaming the shared library itself to "mitigate" the version incompatibility problem. And expect some more when they go to 3.0. Wait, 3.0? What happened to 2.x? 😀 And when you look at their examples, they are riddled with "#if OpenSSLVer <= 0968 call this API #else call this other API #endif". That's definitely not going to work well unless you can always control exactly which version of OpenSSL is installed on a computer for your specific app, and then you are stuck doing all the maintenance yourself when a new version is released. I sort of managed to make the code of my library autodetect changes between 0.9 and 1.0 and adapt dynamically, but I definitely gave up when they started 1.1. That, together with the fact that IPv6 and TLS support beyond what the LabVIEW HTTP VIs offer wasn't a priority anywhere, didn't help. Now with 1.0.2 definitely gone into obsolete mode, my library wouldn't even compile properly for the time being. 😀 PS: I just recently read somewhere that the IPv4 address range has now been officially depleted, so no new IPv4 address ranges can be given out anymore. This still isn't the end of the internet, as many internet providers use dynamic IP address assignment, and nobody should even be thinking about connecting their coffee machine or fridge directly to the internet 😂, but it definitely shows that IPv6 support should be something internet providers care about. But while we here in the Netherlands have one of the highest internet connectivity rates in the world, the majority of internet providers still don't provide "working" IPv6 connectivity themselves. You have to use tunnels to test that!
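For anyone who hasn't seen it, the version-guard pattern mentioned above typically looks something like this (a minimal sketch of an init helper; the exact init flags you need depend on your application):

#include <openssl/opensslv.h>
#include <openssl/ssl.h>

static void ssl_init_once(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10100000L
    /* 0.9.x / 1.0.x: explicit global initialization */
    SSL_library_init();
    SSL_load_error_strings();
    OpenSSL_add_all_algorithms();
#else
    /* 1.1.0 and later: the old calls are gone (or reduced to no-op macros);
       one call does the setup and the library cleans up after itself */
    OPENSSL_init_ssl(OPENSSL_INIT_LOAD_SSL_STRINGS |
                     OPENSSL_INIT_LOAD_CRYPTO_STRINGS, NULL);
#endif
}

And that is just initialization; 1.1 also made most structs opaque, so any code that poked into them directly needs to be rewritten around accessor functions.
-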
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
You are aware that there is a LabVIEW Idea Exchange entry about SSL/TLS support in LabVIEW that has been in development for about 6 months? Most likely not something that will appear in LabVIEW 2020, though. I was considering reviving my library, but when I saw that I abandoned the idea. -
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
It depends on how you implement them, but supporting asynchronous behaviour is a little nasty: you basically need to create a temporary structure that gets initialized on the first call, and then call repeatedly into the shared library function, which calls poll() or select() to check the status of the socket and update the structure, until the operation completes, either because an error occurred, the timeout elapsed, or the requested data has been sent/received. Without that, your call will be synchronous, consuming the calling LabVIEW thread and limiting the potential parallelization that the native node supports out of the box. Synchronous operation is not a problem as long as you only process one or two connections at a time, but it will certainly cause problems for server applications or clients that need to process many separate connections quasi-parallel. Why do I know that? I did the asynchronous implementation in this library:
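As a rough illustration of that pattern (not code from that library; the names and the state struct are made up), each call does a zero-timeout poll() and returns immediately, so the LabVIEW thread is never parked inside the DLL:

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>

/* Hypothetical per-operation state, created on the first call and passed
   back in by the caller on every subsequent call until completion. */
typedef struct {
    int    fd;       /* non-blocking socket */
    size_t sent;     /* bytes sent so far */
    size_t total;    /* total bytes to send */
} AsyncSend;

/* Returns 1 when finished, 0 when the caller should call again later, -1 on error. */
int async_send_step(AsyncSend *st, const char *buf)
{
    struct pollfd pfd = { st->fd, POLLOUT, 0 };
    int r = poll(&pfd, 1, 0);                 /* status check only, never blocks */
    if (r < 0)
        return -1;
    if (r == 0 || !(pfd.revents & POLLOUT))
        return 0;                             /* not writable yet, try again later */

    ssize_t n = send(st->fd, buf + st->sent, st->total - st->sent, 0);
    if (n < 0)
        return (errno == EAGAIN || errno == EWOULDBLOCK) ? 0 : -1;
    st->sent += (size_t)n;
    return st->sent == st->total ? 1 : 0;
}

The LabVIEW wrapper then calls this in a loop with a small wait (or from a state machine), which keeps the thread available for other connections in between.
-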
TCP_NODELAY.llb library
Rolf Kalbermatter replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
That's not a publicly exported API, unfortunately. It is an internal function called by the NCConnect(), NCCreateListener() and NCWaitOnListener() functions when a socket needs to be wrapped into a network refnum. And no, the corresponding APIs with these names that are exported are just stubs that return an error. The real implementation is in non-exported functions too, as someone back in the LabVIEW 4.0 or 5.0 days decided that the Network Manager API should not be exported from the LabVIEW kernel for some reason. Most probably a leftover from the LabVIEW for Mac days, when the TCP/IP library was an external library implemented with CINs. Rather than just removing the functions from the export table, they renamed them internally; all network functionality uses those new names, and empty stubs returning a "Manager Call not Supported" status were exported instead. -
Sounds like a homework assignment. And not a very complicated one if you have done some basic LabVIEW programming course. You will want to look into loops with shift registers for this.
-
The solution to this is usually to cache the path in a shift register (or feedback node) once it has been evaluated, and to skip re-evaluating it as long as the cached path is valid.
-
[CR] MPSSE.dll LABview driver
Rolf Kalbermatter replied to Benoit's topic in Code Repository (Uncertified)
This is a bit late for you, but for others, here is some info. While that DLL does have a ReadWrite function for SPI operation, it has no such function for the I2C part. One thing you could try is to change the threading of the Call Library Nodes for the Read and Write functions to run in any thread. This should save some time in invoking the functions, as no thread switch has to be performed and the execution of the DLL function does not have to arbitrate for the CPU with other UI-related things in LabVIEW. While the documentation does not state that it is safe to call these functions from different threads, it can sort of be inferred from the fact that it does state for GetNumChannels() and GetChannelInfo() that it is not safe, while this note is missing for the Read and Write functions. Of course, it is very likely not a good idea to let LabVIEW run the Read and Write in parallel on the same communication handle, as it is not only unclear which one would execute first, but running them simultaneously might also mess up the management information stored inside the handle itself. So make sure there is proper data dependency between the Read and Write functions (usually through the error wire) to avoid that problem. It's questionable whether you will get a guaranteed 10 kS data transfer with this, but you should be considerably faster than with the current VI setup. Personally, if I had to use this in a real project, I would most likely implement the I2C and SPI operation entirely in LabVIEW, directly on top of the FTD2xx library. -
[CR] MPSSE.dll LABview driver
Rolf Kalbermatter replied to Benoit's topic in Code Repository (Uncertified)
RomainP already mentioned that change in his post here: "conversion from and to string for 'Data' deleted => be able to also receive bytes with a value of 0". Another useful change not mentioned elsewhere would be to change all the Call Library Nodes to use a pointer-sized unsigned integer for the handle parameter, and to change the handle controls on the VI front panels to 64-bit unsigned integers, in order to make the library more future-proof so it also works with the 64-bit version of the DLL.
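The reason pointer-sized matters, in a short hedged sketch (the typedef below is a hypothetical reproduction; check ftd2xx.h for the real declaration):

#include <stdio.h>

/* FTDI's headers declare the channel/device handle as a pointer type,
   e.g. typedef PVOID FT_HANDLE; reproduced here only for illustration. */
typedef void *FT_HANDLE;

int main(void)
{
    /* 4 in a 32-bit build, 8 in a 64-bit build: exactly what a Call Library Node
       parameter configured as "Unsigned Pointer-sized Integer" accounts for, and
       why a U64 front panel control can safely carry the value in either case. */
    printf("sizeof(FT_HANDLE) = %zu\n", sizeof(FT_HANDLE));
    return 0;
}
-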
TCP Wait On Listener - will not timeout.
Rolf Kalbermatter replied to Iron_Scorpion's topic in LabVIEW General
It probably is seeing the arriving connection, but with your "resolve Remote address (T)" input left unwired, it will start a DNS query to resolve the associated IP address to a domain name, and if your DNS setup is not up to snuff, that can hang until it eventually times out. That timeout for the DNS resolution is unfortunately not controllable by the caller; it is inherent in the socket library. Do you need a domain name, or is the IP address enough? Try setting this input to false and see then. Wait on Listener is NOT waiting for any data to arrive on the incoming connection. It accept()s the connection, which sends the ack, and then, because you told it to resolve the remote address, it starts a DNS resolution inquiry for the IP address, and that is where it hangs for you. It can of course only return after the DNS resolution has returned control to the routine, successful or not. The data your remote side sends has to be handled by a TCP Read that gets the refnum returned from Wait on Listener.
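For the curious, this is roughly what happens under the hood in BSD socket terms (a hedged sketch, not LabVIEW's actual code): with resolution enabled, the runtime effectively does a getnameinfo() without NI_NUMERICHOST, and that reverse DNS lookup blocks with its own timeout that the caller cannot shorten.

#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>

/* After accept(), look up the peer either numerically (fast, no DNS)
   or via reverse DNS (can block for many seconds if DNS is broken). */
void show_peer(int connfd, int resolve)
{
    struct sockaddr_storage peer;
    socklen_t len = sizeof peer;
    char host[NI_MAXHOST];

    if (getpeername(connfd, (struct sockaddr *)&peer, &len) != 0)
        return;

    int flags = resolve ? 0 : NI_NUMERICHOST;   /* NI_NUMERICHOST skips DNS entirely */
    if (getnameinfo((struct sockaddr *)&peer, len,
                    host, sizeof host, NULL, 0, flags) == 0)
        printf("peer: %s\n", host);
}
-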
Thank you for this report. It is not intentional at all, but rather a potential problem, as the two versions are not meant to be used alongside each other; possible cross-linking could cause many nasty problems. I'm still busy working on 4.3, which will be a full release and has many completely incompatible changes in the underlying levels, so it will be important to make sure it replaces any other version in an existing install (not machine-wide, but inside a specific LabVIEW installation). So if you can let me know about the actual changes you made, I will try to incorporate them. I wasn't aware that the Package Name/Display Name was considered an identifying package feature; I assumed that only the Package Name/Name was used (which would make more sense).
-
Neat trick, but that's for .Net assemblies only and won't work for standard DLLs, will it? I use Dependency Checker for that. It's an old tool that hasn't been updated to deal with some newer Windows features very well and gets somewhat confused about DLLs of a different bitness than itself, but it is usually enough to see the non-standard system DLLs and their dependencies. Anything like kernel32.dll etc. you shouldn't have to worry about anyhow.
-
I think it is a little far-fetched to demand full VIPM feature completeness from any new package management system. But NIPM was more an NI attempt to do something along the lines of MSI than of VIPM, despite their naming choice and marketing hype. As such it didn't and still doesn't really support most things that VIPM (and OGPM before it) were specifically built for. It was just NI's way to ditch the MSI installer technology, which got very unwieldy and can't really handle NI software package installs well (I find it intolerable that a software installation of the LabVIEW suite with moderate driver support takes anything from 4 to 6 hours on a fairly modern computer; that's some 6 GB of software to install and should never take that long!). Also, MSI can't ever be used to support real-time installations, as it is totally built on COM/ActiveX technology, which is a total no-go for anything but Windows platforms. Unfortunately, NIPM had and still has many problems even in the area it was actually built for. It often doesn't work well when you try to install a new LabVIEW version alongside a previous one; that was something that worked pretty well with the old MSI installers at least. And yes, despite its name it doesn't really have any VIPM or GPM capabilities specifically targeted at LabVIEW library distributions, and the configuration part for building such packages is very lacking. I suppose they were eventually planning on making NIPM the underlying part of installing software through SystemLink, without any user-facing interface at all, but had to do something in the short term. As it is, it doesn't look to have enough features for anything but NI distributing their own software installers.
-
ITreeView activex library
Rolf Kalbermatter replied to Bjarne Joergensen's topic in Calling External Code
If it is in SysWOW64 then it is a 32-bit library. System32 contains 64-bit binaries on 64-bit Windows systems. On 32-bit systems System32 contains 32-bit binaries and SysWOW64 doesn't exist! -
Huuu? If that were the case you wouldn't need a string control or a text table, and just about any other control except booleans and numerics has some form of text somewhere by default, or properties to change the caption of controls, axis labels, etc. etc. In that case adding Unicode support would indeed have been a lot easier. But the reality is very different! Also, your conversion tool might have been pretty cool, but converting between a few codepages wouldn't have made a difference. If you have text that comes from one codepage, you are bound to have characters that you simply can't convert into other codepages, so what do you do about that? LabVIEW, for instance, links dynamically to files based on file names. How do you deal with that? The OS does one type of conversion depending on the underlying filesystems involved and a few other parameters, and there is only no loss if both filesystems fully support Unicode and any transfer method between the two is fully Unicode transparent. That certainly wasn't always true even a few years back. Then the Unicode (if it was even fully preserved) is translated back to whatever codepage the user has set his system to, for applications like LabVIEW that use the ANSI API. Windows helpfully translates every character it can't represent in that codepage into a question mark, except that that character is officially not allowed in path names. LabVIEW stores filenames in its VIs, and if LabVIEW used a home-cooked conversion, it would be bound to produce conversions different from what Windows or your Linux system might come up with. Even the Windows Unicode translation tables contained, and still contain, deviations from the official Unicode standard. They are not fully transparent when compared to implementations like ICU or libiconv, and they probably never will be, because Microsoft is bound to legacy compatibility just as much, and changing things now would burn some big customers. And that is just the problem of filenames. There are many, many more such areas for which there are no really clean solutions. In many cases no solution is better than a half-baked one that might make you feel safe only to let you fall badly on your nose. The only fairly safe solution is to go completely Unicode. Any other solution either falls immediately flat on its nose (e.g. codepage translation) or has been superseded by Unicode and is not maintained anymore. That's the reality. And just for fun, even Unicode can be tricky when it comes to collation, for instance. Comparing strings just on code points is a sure way to fail, as there are so-called non-forwarding (combining) code points that, combined with other code points, form a character, except that Unicode also contains single precomposed code points for many of these characters. Looking at the binary representation, the strings surely look different, but logically they are not! I'm not even sure Windows uses any collation when trying to locate filenames. If it doesn't, it might be unable to find a file based on a path name even though the name stored on disk, and visible for instance in Explorer, looks textually exactly the same as the name you passed to the file API. But without proper collation they simply are not the same, and you would get a File Not Found error! WTF, the file is right there in Explorer! As for solving encoding when interfacing to external interfaces (instrument control, network, file IO, etc. etc.), there are indeed solutions for that, namely specifying an encoding at those interfaces.
But I haven't seen one that really convinced me it is easy to use. Java (and even .Net, which was initially just a not-from-Sun version of Java from Microsoft) for instance uses a string to indicate the encoding to use, but that string has traditionally not been very well defined; there are various variants that mean basically the same but look very different, and the actual support that comes standard with Java is pretty limited, since it has to work on many different platforms that might have very little to no native support for this. .Net has since gained a lot more support, but that hasn't made it simpler to use. And yes, the fact that LabVIEW is multiplatform didn't make this whole business any easier to deal with. While you could sell to customers that ActiveX and .Net were simply technically impossible on platforms other than Windows, that wouldn't have fared well with things like Unicode support and many other things. Yet the underlying interfaces are very different on the different platforms, and in some cases even conceptually different.
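A tiny C example of the collation point above: the same visible character "é" can be stored either as the single precomposed code point U+00E9 or as "e" plus the combining acute accent U+0301, and a plain byte (or code point) comparison says the two strings differ even though they are logically the same text.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *precomposed = "\xC3\xA9";     /* U+00E9, "é" as one code point (UTF-8) */
    const char *decomposed  = "e\xCC\x81";    /* 'e' + U+0301 combining acute accent   */

    /* Both render as "é", yet a byte-wise comparison reports them as different.
       Only a normalizing/collation-aware comparison (e.g. via ICU) treats them
       as equal. */
    printf("strcmp: %d\n", strcmp(precomposed, decomposed));   /* non-zero */
    return 0;
}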
-
That's not as easy, even if you leave aside platforms other than Windows. In the old days Windows did not come with support preinstalled for all possible codepages, and I'm not sure it does even nowadays. Without the necessary translation tables, it doesn't help to know which codepage text is stored in; translation into something else is not guaranteed to work. Also, the codepage support as implemented in Windows does not allow you to display text in a different codepage than the currently active one, and even if you could switch the current codepage on the fly, all text previously drawn on screen in another codepage would suddenly look pretty crazy. While Microsoft initially had Unicode support only on the Windows NT platform (which wasn't initially supported by LabVIEW at all), they only added a Unicode shim to the Windows 9x versions (which were 32-bit like Windows NT, but with a somewhat Windows 3.1 compatible 16/32-bit kernel) around 2000, through a special library called Unicows (probably for "Unicode for Windows Subsystem") that you could install. Before that, Unicode was not even available on Windows 95, 98 and ME, which were the majority of platforms LabVIEW was used on after 3.1 was kind of dying. LabVIEW on Windows NT was hardly used, despite LabVIEW technically being the same binary as for the Windows 9x versions. But the hardware drivers needed were completely different, and most manufacturers other than NI were very slow to start supporting their hardware on Windows NT. Windows 2000 was the first NT version that saw a little LabVIEW use, and Windows XP was the version where most users definitively abandoned Windows 9x and ME for measurement and industrial applications. Your suggestion would only have worked if LabVIEW for Windows used the UTF-16 API internally everywhere, which is the only Windows API that allows displaying any text on screen independent of codepage support, and this was exactly one of the difficult parts to change in LabVIEW. LabVIEW is not a simple Notepad editor where you can switch the compile define UNICODE and suddenly everything is using the Unicode APIs. There are deeply ingrained assumptions that entered the code base in the initial porting effort, which used 32-bit DOS-extended Watcom C to target the 16-bit Windows 3.1 system that only had codepage support and no Unicode API whatsoever; neither had the parallel Unix port for SunOS, which was technically Unix SVR4 but with many special Sun modifications, adaptations and special borks built in. That port still eventually allowed releasing a Linux version of LabVIEW without having to write an entirely new platform layer, but even Linux didn't have working Unicode support initially. It took many years before that was sort of standard available in Linux distributions, and many more years before it was stable enough that Linux distributions started to use UTF-8 as the standard encoding rather than the C runtime locales so nicely abbreviated with EN-en and similar, which had no direct mapping to codepages at all. But Unix, while not having any substantial Unicode support for a long time, eventually went a completely different path to support Unicode than what Microsoft had done. And the Mac port only got useful Unicode support after Apple eventually switched to their BSD-based Mac OS X.
And neither of them really knew anything about codepages at all, so a VI written on Windows and stored with the actual codepage inside would have been just as unintelligible for those non-Windows LabVIEW versions as it is now. Also, in true Unix (Linux) fashion, they couldn't of course agree on one implementation for a conversion API between different encodings; there were multiple competing ones, such as ICU and several others. Eventually libc also implemented some limited conversion facility, although it does not allow you to convert between arbitrary encodings, only between wide char (usually 32-bit Unicode) and the currently active C locale. Sure, you can change the current C locale in your code, but that is process-global, so it also affects how libc treats text in other parts of your program, which can be a pretty bad thing in multithreading environments. Basically, your proposed codepage storing wouldn't work at all for non-Windows platforms, and even under Windows it has, and certainly had in the past, very limited merit. Your reasoning is just as limited as NI's original choice was when they had to come up with a way to implement LabVIEW with what was available then. Nowadays the choice is obvious, and UTF-8 is THE standard to transfer text across platforms and across the whole world, but UTF-8 only became a viable and widely used feature (and because it was used, also a tested, tried and many times patched one that works as the standard intended) in the last 10 to 15 years. At that time NI was starting to work on a rewrite of LabVIEW, which eventually turned into LabVIEW NXG.
-
No, classic LabVIEW doesn't, and it never will. It assumes a string to be in whatever encoding the current user session has. For most LabVIEW installations out there, that is codepage 1252 (over 90% of LabVIEW installations run on Windows, and most of those on Western Windows installations). When classic LabVIEW was developed (around the end of the 1980s), codepages were the best thing out there that could be used for different installations, and Unicode didn't even exist. The first Unicode proposal is from 1988 and proposed a 16-bit Unicode alphabet. Microsoft was in fact an early adopter and implemented it for its Windows NT system as a 16-bit encoding based on this standard. Only in 1996 was Unicode 2.0 released, which extended the Unicode character space to 21 bits. LabVIEW does support so-called multibyte character encodings as used by many Asian codepages, and on systems like Linux, where nowadays UTF-8 (in principle also simply a multibyte encoding) is the standard user encoding, it supports that too, as this is transparent in the underlying C runtime. Windows doesn't let you set your ANSI codepage to UTF-8, however; otherwise LabVIEW would use that too (although I would expect there could be artefacts somewhere from assumptions LabVIEW makes when calling certain Windows APIs that might not match how Microsoft implemented the UTF-8 emulation for its ANSI codepage). By the time the Unicode standard was mature and the various implementations on the different platforms were more or less working, LabVIEW's 8-bit character encoding based on the system encoding was so deeply ingrained that full support for Unicode had turned into a major project of its own. There were several internal projects to work towards that, which eventually turned into a normally hidden Unicode feature that can be turned on through an INI token. The big problem with that was that the necessary changes touched just about every piece of code in LabVIEW somehow, and hence this Unicode feature does not always produce consistent results for every code path. Also, there are many unsolved issues where the internal LabVIEW strings need to connect to external interfaces. Most instruments, for instance, won't understand UTF-8 in any way, although that problem is one of the smaller ones, as the used character set is usually strictly limited to 7-bit ASCII, and there the UTF-8 standard is basically byte-for-byte compatible. So you can dig up the INI key and turn Unicode on in LabVIEW. It will give extra properties for all control elements to set them to use Unicode text interpretation for almost all text (sub)elements, but the support doesn't, for instance, extend to paths and many other internal facilities unless the underlying encoding is already set to UTF-8. Also, strings in VIs, while stored as UTF-8, are not flagged as such, since non-Unicode-enabled LabVIEW versions couldn't read them; this creates the same problem you have with VIs stored on a non-Western codepage system and then read on a system with a different encoding. If Unicode support is an important feature for you, you will want to start using LabVIEW NXG. And exactly because of the existence of LabVIEW NXG, there will be no effort put into classic LabVIEW to improve its Unicode support. To make it really work you would have to substantially rewrite large parts of the LabVIEW code base, and that is exactly what one of the tasks for LabVIEW NXG was about.