
Get ASCII characters as strings?


APi


I'm trying to display some data with their corresponding units in a string indicator. The units may contain special characters such as omega for ohm or mu for micro, mixed with regular Latin characters, so I cannot simply change the indicator font to Symbol. I'm aware that it should be possible to get ASCII characters as strings using the Type Cast VI, but apparently the results are somewhat platform-dependent. When I try this, I get an ANSI character instead of an ASCII character! This would be fine, but the ANSI character set does not contain some of the symbols I need (e.g. the omega). Furthermore, if the characters displayed somehow depend on OS settings, I cannot use this method because the results may vary on different systems. My questions are:

Is there any way to change the character set that the Type Cast VI uses? Ideally this should be independent of OS settings.

Does anyone know techniques for displaying special characters and regular characters in a string?

I would appreciate your help!


Is there any way to change the character set that the Type Cast VI uses? Ideally this should be independent of OS settings.

You're misunderstanding what Type Cast does. Type Cast knows nothing about character sets - it simply reinterprets a sequence of bytes as something else. Consequently, it's not used exclusively for strings. For example, an array of 4 U8 values can be reinterpreted as a single U32 value.
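If it helps to see the same idea outside LabVIEW, here is a rough Python sketch of that byte reinterpretation (just an illustration; the '>I' format assumes big-endian byte order, which is what LabVIEW's flattened data uses as far as I know):

    import struct

    # Four U8 values...
    u8_values = bytes([0x00, 0x00, 0x01, 0x2C])

    # ...reinterpreted as a single U32. No character set is involved at any
    # point; the bytes are simply read as a different type.
    (u32_value,) = struct.unpack('>I', u8_values)
    print(u32_value)  # 300

No conversion happens anywhere, which is exactly why Type Cast can't know or care about character sets.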

Does anyone know techniques for displaying special characters and regular characters in a string?

You can programmatically change the font using the Text.SelStart, Text.SelEnd, and Text.FontName properties:

font change.vi


Search LAVA for posts related to Unicode; you should be able to gain some insight from previous discussions on character sets. At this point in time, it's not a terribly simple process.

Perhaps post a snippet of what you're doing so far?



Thank you both for the replies; I didn't realize that you can change the font in this way. Still, it might be more convenient in some cases to have the special characters as strings, since it could get tedious to dynamically change the font of many indicators. According to this:

http://digital.ni.com/public.nsf/allkb/77C8F61D36F5A23086256634005ACB38

the Type Cast VI is supposed to interpret a U8 integer as an ASCII character. If I understood correctly, wiring a U8 value of 234 to Type Cast should produce the ASCII character corresponding to that code (Ω). However, on my system I get ê, which is code 234 in the Latin-1/ANSI character sets. I'll have to take a look at the Unicode topics now. This is slightly irritating because I can produce all of the special characters I need, except for Ω! :)


The document you linked to does not imply that Type Cast is supposed to interpret a U8 integer as an ASCII character. It does no such thing, and that's the first assumption you need to let go of. Because of the way Type Cast works you can use it to convert a U8 to an ASCII character, but that's only because the ASCII codes were originally based on single-byte values. Second, the document you linked to only applies to the "standard" ASCII set (i.e., to values less than 128). For values of 128 or greater you do not get the extended ASCII set. LabVIEW displays characters on string-based indicators using Multibyte Character Strings. You can read more here: https://decibel.ni.com/content/docs/DOC-10153
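To make that concrete, here is the same byte value decoded under two different code pages (a quick Python sketch, nothing LabVIEW-specific; I'm assuming the table where 234 shows up as Ω is the old DOS/OEM code page 437):

    # One and the same byte value...
    b = bytes([234])

    # ...decoded with the Windows ANSI code page, which is roughly what you're seeing:
    print(b.decode('cp1252'))  # ê

    # ...decoded with the old DOS/OEM code page 437, where 234 happens to be Ω:
    print(b.decode('cp437'))   # Ω

Same byte, different table, different glyph.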

  • 2 weeks later...

Omega is NOT ASCII code 234. It may be so in one or several specific code pages, but Windows knows literally dozens of code pages. They usually (though not always) produce the same glyph for the ASCII codes 1 to 127, but have wildly varying glyphs for the codes 128 and higher. And different fonts support different code pages, but are not equivalent to them.

There are two ways to deal with this in order to display more than 128 different character glyphs at the same time. Traditionally, older systems used a multibyte encoding scheme, which is what LabVIEW uses too. The second is Unicode, which is nowadays fairly common as far as platform support goes, but support at the application level varies wildly, with many applications still not being able to deal with Unicode properly. Unicode also has some issues as far as collation tables and such go. There is the official standard from the Unicode Consortium and the de facto standard as implemented by Microsoft in Windows. They differ in subtle but sometimes important ways, which makes it very hard to write a multilanguage application that uses the same code base for Windows and non-Windows systems.
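As a small illustration of why Unicode sidesteps the code page problem: Ω has its own code point, U+03A9, and in the UTF-8 encoding it occupies two bytes rather than a single byte 234 (again a quick Python check, just to show the numbers):

    omega = '\u03a9'  # GREEK CAPITAL LETTER OMEGA
    print(omega.encode('utf-8'))  # b'\xce\xa9' -> two bytes, not one

As long as a string stays in a single-byte character set, which glyph you get for 234 will always depend on the code page and font in use.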

