Wow. Lots to respond to here.
You can also pass in floating-point numbers, but be careful: LabVIEW uses the IEEE 754 round-half-to-even rule when rounding values that end in .5, so 102.5 rounds to 102, 70.5 rounds to 70, and 71.5 rounds to 72.
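For what it's worth, Python's built-in round() follows the same round-half-to-even rule ("banker's rounding"), so you can sanity-check the behavior outside LabVIEW:

```python
# Python's round() also uses IEEE 754 round-half-to-even ("banker's rounding"),
# so it mirrors what LabVIEW does with values ending in .5.
print(round(102.5))  # 102 -- ties go to the nearest even integer
print(round(70.5))   # 70
print(round(71.5))   # 72
```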
Also, the behavior changed in LabVIEW 2010. To Upper Case, To Lower Case, Octal Digit?, Decimal Digit?, Hexadecimal Digit?, White Space?, and Printable? all allow you to wire in a numeric. According to their documentation, those nodes evaluate the numeric as the ASCII character corresponding to the numeric value and return FALSE in all other cases.
In LV 2009, they return false for any value over 256.
In LV 2010 and later they return true for some of those values -- the result corresponds to the proper return value for the lowest 8 bits. E.g. LV 2009's White Space? primitive returns true for Int32(10) -- the line feed character -- and false for Int32(266). In LV 2010 it returns true for both, since 266's lowest 8 bits are 10.
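Here's a rough Python model of the two behaviors as described above -- the function names and the exact cutoff in the 2009 case are just my illustration, not anything from LabVIEW itself:

```python
import string

def white_space_lv2009(value: int) -> bool:
    # Rough model: values outside the single-byte character range just return false.
    if not 0 <= value <= 255:
        return False
    return chr(value) in string.whitespace

def white_space_lv2010(value: int) -> bool:
    # Rough model: only the lowest 8 bits are evaluated, so 266 (0x10A) behaves
    # like 10 (line feed) and is reported as white space.
    return chr(value & 0xFF) in string.whitespace

print(white_space_lv2009(10), white_space_lv2009(266))  # True False
print(white_space_lv2010(10), white_space_lv2010(266))  # True True
```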
I'm usually the loudest complainer about people treating strings as byte arrays. In most programming languages, strings are not byte arrays. In some languages, for legacy reasons, you can treat them as byte arrays. In LabVIEW, they are byte arrays. That leads to some interesting behavior with Multibyte Character Sets. I don't have any of the Asian localized LabVIEWs installed, but I think strings in those languages will report a string length that is twice the number of characters that are actually there. When you have the unicode INI token set, this is definitely the case. Arguably that's wrong, but there's no way to go back and change it now.
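If you want to see the byte-count-vs-character-count distinction in a text language, Python 3 keeps the two concepts separate (str vs. bytes), which makes the doubling easy to demonstrate:

```python
# Byte length vs. character count. A byte-array string like LabVIEW's reports
# the byte count, which for a two-byte encoding is twice the character count.
text = "日本語"                       # 3 characters
print(len(text))                      # 3  (characters)
print(len(text.encode("utf-16-le")))  # 6  (bytes -- twice the character count)
print(len(text.encode("utf-8")))      # 9  (bytes in a different encoding)
```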
I couldn't agree more with your first three paragraphs, but I've got to disagree about the difficulties.
Which platforms are supported is entirely a matter of the language and the compiler for that language. Any time -- absolutely any time -- strings move from one memory space to another (shared memory, pipes, network, files on disk), the encoding of the text needs to be taken into account. If you know the encoding the other application is expecting, you can usually convert the text to that encoding. There are common methods for handling text that doesn't convert (the source has a character that the destination doesn't have), but they're not perfect. The good thing is that such cases would be extremely rare, since just about everything NI produces right now "talks" ASCII or Windows-1252, so it's easy to keep existing things working.
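A quick Python sketch of what "convert to the encoding the other side expects" looks like, including the common-but-imperfect fallbacks for characters that don't map (the sample string and encodings here are just examples):

```python
# Converting text to the encoding the destination expects, plus the usual
# (imperfect) fallbacks for characters that encoding can't represent.
text = "温度: 25°C"

# Characters with no Windows-1252 equivalent become '?' -- lossy but simple.
print(text.encode("cp1252", errors="replace"))
# b'??: 25\xb0C'

# Or keep them as numeric character references -- readable, but no longer plain text.
print(text.encode("cp1252", errors="xmlcharrefreplace"))
# b'&#28201;&#24230;: 25\xb0C'
```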
What you're talking about is how the characters are represented internally in common languages. In C and C++ on Windows, wide strings are UTF-16 (as are C# strings everywhere); in C and C++ on Unix systems, they're typically UTF-32. As long as you're in the same memory space, you don't need to worry. There is no nightmare. Think of your strings as sequences of characters and it's a lot easier. As soon as you cross a memory boundary, you need to be concerned. If you know what encoding the destination is expecting, you encode your text into that. If/when LabVIEW gets Unicode support, that will be essential. Existing APIs will be modified to do as much of that for you as possible. Where it's not clear, you'll have the option, and it'll default to Windows-1252 to mimic the old behavior. Any modern protocol either has its text encoding defined or allows using different encodings.
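To make the "encode at the memory boundary" idea concrete, here's a hypothetical helper in Python -- send_text and its cp1252 default are my invention to illustrate the pattern, not an NI or LabVIEW API:

```python
import socket

# Hypothetical helper: keep strings as sequences of characters internally, and
# only pick an encoding when the text crosses a memory boundary. Defaulting to
# Windows-1252 mimics the legacy behavior described above.
def send_text(sock: socket.socket, text: str, encoding: str = "cp1252") -> None:
    sock.sendall(text.encode(encoding, errors="replace"))

# Usage: the caller decides the destination's encoding at the boundary.
# send_text(sock, "Résumé")                    # legacy peer, Windows-1252
# send_text(sock, "Résumé", encoding="utf-8")  # modern peer, UTF-8
```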