Leaderboard

Popular Content

Showing content with the highest reputation on 12/05/2019 in all areas

That's not as easy, even if you leave aside platforms other than Windows. In the old days Windows did not come with support preinstalled for all possible codepages, and I'm not sure it does even nowadays. Without the necessary translation tables it doesn't help to know which codepage text is stored in, so translation into something else is not guaranteed to work. Also, the codepage support as implemented in Windows does not allow you to display text in a different codepage than the one currently active, and even if you could switch the current codepage on the fly, all text previously printed on screen in another codepage would suddenly look pretty crazy.

Microsoft initially had Unicode support only on the Windows NT platform (which wasn't initially supported by LabVIEW at all). Around 2000 they added a Unicode shim to the Windows 9x versions (which were 32-bit like Windows NT, but with a somewhat Windows 3.1 compatible 16/32-bit kernel) through a special library called Unicows (probably for Unicode for Windows Subsystem) that you could install. Before that, Unicode was not even available on Windows 95, 98 and ME, which were the majority of platforms LabVIEW was used on once 3.1 was kind of dying. LabVIEW on Windows NT was hardly used, despite LabVIEW being technically the same binary as for the Windows 9x versions; the hardware drivers needed were completely different, and most manufacturers other than NI were very slow to start supporting their hardware on Windows NT. Windows 2000 was the first NT version that saw a little LabVIEW use, and Windows XP was the version where most users definitely abandoned Windows 9x and ME for measurement and industrial applications.

Displaying text independent of the active codepage would only have worked if LabVIEW for Windows used the UTF-16 API internally everywhere, since that is the only Windows API that allows displaying any text on screen independent of codepage support, and this was exactly one of the difficult parts to get changed in LabVIEW. LabVIEW is not a simple Notepad-style editor where you can switch on the UNICODE compile define and suddenly everything uses the Unicode APIs. There are deeply ingrained assumptions that entered the code base in the initial porting effort, which used 32-bit DOS-extended Watcom C to target the 16-bit Windows 3.1 system. That system only had codepage support and no Unicode API whatsoever, and neither did the parallel Unix port for SunOS, which was technically Unix SVR4 but with many special Sun modifications, adaptations and quirks built in. That port still eventually allowed releasing a Linux version of LabVIEW without having to write an entirely new platform layer, but even Linux didn't have working Unicode support initially. It took many years before that was more or less standard in Linux distributions, and many more years before it was stable enough that distributions started to use UTF-8 as the standard encoding rather than the C runtime locales, so nicely abbreviated with names like en_EN, which had no direct mapping to codepages at all. But Unix, while not having any substantial Unicode support for a long time, eventually went a completely different path to support Unicode than what Microsoft had done. And the Mac port only gained useful Unicode support after Apple eventually switched to their BSD-based Mac OS X.
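To illustrate the kind of translation step involved, here is a minimal Win32 C sketch (not LabVIEW source): converting text stored in a known codepage to the UTF-16 that the Unicode display APIs expect. The codepage 1252 default and the helper name are assumptions for the example; the call fails when the system's translation tables for that codepage are missing, which is exactly the limitation described above.

#include <windows.h>
#include <stdlib.h>

/* Convert 'len' bytes of codepage-encoded text to a newly allocated,
   NUL-terminated UTF-16 string. Returns NULL on failure; caller frees. */
wchar_t *codepage_to_utf16(UINT codepage, const char *text, int len)
{
    /* First call asks how many UTF-16 code units are needed. */
    int wlen = MultiByteToWideChar(codepage, 0, text, len, NULL, 0);
    if (wlen == 0)
        return NULL;  /* unknown codepage or invalid input bytes */

    wchar_t *wide = malloc((wlen + 1) * sizeof(wchar_t));
    if (!wide)
        return NULL;

    /* Second call performs the actual translation. */
    MultiByteToWideChar(codepage, 0, text, len, wide, wlen);
    wide[wlen] = L'\0';
    return wide;
}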
And neither the Unix nor the Mac port really knew anything about codepages at all, so a VI written on Windows and stored with the actual codepage inside would have been just as unintelligible for those non-Windows LabVIEW versions as it is now. Also, in true Unix (Linux) way, they of course couldn't agree on one implementation of a conversion API between different encodings; instead there were multiple competing ones, such as ICU and several others. Eventually libc also implemented some limited conversion facility, although it does not allow you to convert between arbitrary encodings, only between widechar (usually 32-bit Unicode) and the currently active C locale. Sure, you can change the current C locale in your code, but that setting is process-global, so it also affects how libc treats text in other parts of your program, which can be a pretty bad thing in multithreaded environments.

Basically, your proposed codepage storing wouldn't work at all for non-Windows platforms, and even under Windows it only has, and certainly had in the past, very limited merit. Your reasoning is just as limited as NI's original choice was when they had to come up with a way to implement LabVIEW with what was available then. Nowadays the choice is obvious, and UTF-8 is THE standard for transferring text across platforms and around the whole world, but UTF-8 only became a viable and widely used feature (and, because it was used, also a tested, tried and many times patched one that works as the standard intended) in the last 10 to 15 years. At that time NI was starting to work on a rewrite of LabVIEW, which eventually turned into LabVIEW NXG.
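For the Unix side, a minimal C sketch using POSIX iconv(3), one of the several competing conversion facilities alluded to above. The encoding names "CP1252" and "UTF-8" and the helper name are assumptions for the example, and whether a given pair is supported depends on the installed iconv implementation. Note the contrast with setlocale(): the conversion descriptor here is private to the caller, so it does not disturb other threads.

#include <iconv.h>
#include <string.h>

/* Convert a NUL-terminated codepage-1252 string to UTF-8 in 'out'.
   Returns 0 on success, -1 on failure. */
int cp1252_to_utf8(const char *in, char *out, size_t outsize)
{
    iconv_t cd = iconv_open("UTF-8", "CP1252");   /* to, from */
    if (cd == (iconv_t)-1)
        return -1;  /* this encoding pair is not supported here */

    char *inp = (char *)in;
    char *outp = out;
    size_t inleft = strlen(in);
    size_t outleft = outsize - 1;

    /* iconv() advances the pointers and decrements the byte counts. */
    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);
    if (rc == (size_t)-1)
        return -1;  /* invalid sequence or output buffer too small */

    *outp = '\0';
    return 0;
}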
    1 point
There has been a lot of discussion, which is great, but I feel the need to reiterate GCentral's vision and mission.

GCentral's Vision: GCentral envisions a LabVIEW community making the best version of itself by improving its capability through collaboration. GCentral is a non-profit organization:
- for programmers who need to find, share or collaborate on G reusable code or software engineering tools
- that provides a platform for G code packages and collaboration resources
- that is independent and driven by community experts

GCentral's Mission: Enable LabVIEW programmers to collaborate by
- removing barriers to finding / using code designed for reuse (packaged code)
- removing barriers to contributing code designed for reuse (packaged code)
- removing barriers to co-developing code
- using code with confidence

GCentral is package technology agnostic / SCC agnostic:
- GCentral does not endorse or encourage the use of one package manager over another, nor will we. Each community member can package their code according to their preference.
- GCentral does not endorse or encourage the use of one source code control provider (local or cloud based) over another, nor will we. Each community member can use the SCC they prefer.

GCentral will ease the pain we all feel when attempting to find and use packages by
- indexing the currently available public repositories (Tools Network, GPM, JKI Tools, NI Packages)
- indexing a new, un-gated, cloud-based storage location that can house any package type
- displaying the index results in a web page / APIs, etc. (see https://www.gcentral.org/ for the prototype)

GCentral will ease the pain we all feel when attempting to contribute packages by
- creating a new, un-gated, cloud-based storage location that can house any package type (not source)
- MAYBE creating software to transport built packages from the build machine to the new cloud storage location

GCentral will ease the pain we feel when attempting to co-develop code by
- creating template projects for each of the major online SCCs (GitHub, etc.), coming pre-configured to build the package type of your choice and upload to the GCentral package server

GCentral will inspire confidence by
- making any submitted package always available. Once submitted, a package cannot be deleted except by a GCentral administrator. As a result, you can depend on a package without fear of it ever going missing.
- providing product pages per package, designed to educate on the package and its author.

The above is a summary of the CLA Summit presentation I gave (https://sites.google.com/gcentral.org/website/about-gcentral). The advent of the GitHub Package Registry is very interesting. I've reached out to GitHub to get clarity on how extensible their framework is. At time 29:44 in the presentation Michael linked above, the presenter says "We have a great extension framework for adding support for new registries, which will be opening up in the future". That MAY mean we can provide plugins for their registry to recognize NIPKGs, VIPs and GPKGs. And that may completely solve the "find/use" pain point I mention above... so long as the community is OK putting their packages in GitHub AND sacrificing confidence that the package will always be available to use or link against.

In conclusion, GCentral's aim is to impose the least amount of infrastructure on a community member while enabling us to find/use, contribute and co-develop packages designed for reuse. GCentral will use already existing technologies to accomplish its goal and create new technologies where needed.
    1 point
No, Classic LabVIEW doesn't, and it never will. It assumes a string to be in whatever encoding the current user session has. For most LabVIEW installations out there that's codepage 1252 (over 90% of LabVIEW installations run on Windows, and most of those on Western Windows installations). When classic LabVIEW was developed (around the end of the 1980s), codepages were the best thing out there that could be used for different installations, and Unicode didn't even exist. The first Unicode proposal is from 1988 and proposed a 16-bit Unicode alphabet. Microsoft was in fact an early adopter and implemented it for its Windows NT system as a 16-bit encoding based on this standard. Only in 1996 was Unicode 2.0 released, which extended the Unicode character space to 21 bits.

LabVIEW does support so-called multibyte character encodings as used for many Asian codepages, and on systems like Linux, where nowadays UTF-8 (in principle also simply a multibyte encoding) is the standard user encoding, it supports that too, as this is transparent in the underlying C runtime. Windows doesn't let you set your ANSI codepage to UTF-8, however, otherwise LabVIEW would use that too (although I would expect that there could be some artefacts somewhere from assumptions LabVIEW makes when calling certain Windows APIs that might not match how Microsoft would have implemented the UTF-8 emulation for its ANSI codepage).

By the time the Unicode standard was mature and the various implementations on the different platforms were more or less working, LabVIEW's 8-bit character encoding based on the standard encoding was so deeply ingrained that full support for Unicode had turned into a major project of its own. There were several internal projects to work towards that, which eventually turned into a normally hidden Unicode feature that can be turned on through an INI token. The big problem was that the necessary changes touched just about every piece of code in LabVIEW somehow, and hence this Unicode feature does not always produce consistent results for every code path. Also, there are many unsolved issues where the internal LabVIEW strings need to connect to external interfaces. Most instruments, for instance, won't understand UTF-8 in any way, although that problem is one of the smaller ones, as the character set used is usually strictly limited to 7-bit ASCII, and there the UTF-8 standard is basically byte-for-byte compatible.

So you can dig up the INI key and turn Unicode on in LabVIEW. It will add extra properties for all control elements to make them use Unicode text interpretation for almost all text (sub)elements, but the support doesn't, for instance, extend to paths and many other internal facilities unless the underlying encoding is already set to UTF-8. Also, strings in VIs, while stored as UTF-8, are not flagged as such, since non-Unicode-enabled LabVIEW versions couldn't read them; that creates the same problem you have with VIs stored on a non-Western codepage system and then read on a system with a different encoding. If Unicode support is an important feature for you, you will want to start using LabVIEW NXG. And exactly because of the existence of LabVIEW NXG, there will be no effort put into Classic LabVIEW to improve its Unicode support. To make it really work you would have to substantially rewrite large parts of the LabVIEW code base, and that is exactly what one of the tasks for LabVIEW NXG was about.
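A minimal C sketch of why that byte-for-byte compatibility holds in practice: a buffer containing only 7-bit ASCII is already valid UTF-8, so instrument traffic limited to ASCII passes through either interpretation unchanged. The function name is mine, purely for illustration.

#include <stdbool.h>
#include <stddef.h>

/* True if every byte is 7-bit ASCII, in which case the buffer is
   identical whether read as ASCII, codepage 1252, or UTF-8. */
bool is_7bit_ascii(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] > 0x7F)   /* a high bit set means a multibyte UTF-8 sequence */
            return false;
    return true;
}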
    1 point