
LabVIEW read Unicode INI file



  • 1 year later...

Hi there,

I'm having exactly the same problem as this - I have created a .ini file and saved it in Notepad as Unicode format.

When I try to read the file using the Config File VIs, the section and key names aren't found. I've tried with and without the use of the str-utf16 VIs that are included here. I can read a 'normal' ASCII INI file fine.

Has anyone experienced anything similar?

Thanks,

Martin


The Config File VIs do not support Unicode. They use ASCII operations internally for comparison (no string operations in LV currently support Unicode).

You will have to read it as a standard text file and then convert it with the tools above back to ASCII.
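For what it's worth, the failure mode is easy to reproduce outside LabVIEW. Notepad's "Unicode" option saves UTF-16 with a BOM, so every ASCII character gains a companion NUL byte and a byte-wise search for a section name never matches; decoding the whole file first fixes it. A minimal sketch in Python (illustration only, since the thread's code is LabVIEW; the INI contents are borrowed from later in the thread):

```python
import configparser

# Sketch only: Notepad's "Unicode" option writes UTF-16 with a BOM,
# so every ASCII character is followed by a NUL byte on disk.
ini_text = "[ctrlOne]\njp=種類の音を与えます\nen=hello\n"
raw = ini_text.encode("utf-16")      # BOM + UTF-16, as Notepad saves it

# A byte-wise search, like an ASCII parser performs, never finds the section:
assert b"[ctrlOne]" not in raw

# Decoding the whole file first makes ordinary parsing work:
cp = configparser.ConfigParser()
cp.read_string(raw.decode("utf-16"))  # decode strips the BOM too
print(cp["ctrlOne"]["en"])
```

The same two-step approach applies in LabVIEW: read the file as a plain byte stream, convert it with the str-utf16 tools, and only then hand it to the parsing code.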


Mmm, thanks for that. I was hoping to have an ini file with the required contents of string controls in different languages, like this:

[ctrlOne]

jp=種類の音を与えます

en=hello

It seems to become quite tricky... I'll keep trying and post back if I figure it out. I do have the modification to the LabVIEW .ini file that lets me view Unicode characters in string controls, so it is searching/extracting from the file that is the tricky part now.

Where there's a will there's a way and all that...


You are better off having separate files for each language. Then you can have an ASCII key (which you can search) and place the real string into the control. It also enables easy switching just by specifying a file name.

e.g.

msg_1=種類の音を与えます

You could take a look at Passa Mak. It doesn't support Unicode, but it solves a lot of the problems you will come across.

Alternatively, you could use a database so that you can use SQL queries instead of string functions to get your strings (it's LabVIEW's string functions/parsing that cause most of the problems).
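The file-per-language idea can be sketched like this (Python for illustration only; the section name "strings" and the inline file contents are made-up examples - in practice each table would live in its own file, e.g. one per language, and switching language is just switching file names):

```python
import configparser

def load_strings(cfg_text):
    """Parse one hypothetical per-language string table."""
    cp = configparser.ConfigParser()
    cp.read_string(cfg_text)
    return dict(cp["strings"])

# Identical ASCII keys, language-specific values:
en = load_strings("[strings]\nmsg_1=hello\n")
jp = load_strings("[strings]\nmsg_1=種類の音を与えます\n")

# Look up by the same searchable ASCII key regardless of language.
assert en["msg_1"] == "hello"
assert jp["msg_1"] != en["msg_1"]
```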

Edited by ShaunR

I haven't tried it, but there might be another way of going about it with the import and export strings commands. There's an app reference invoke node that can batch all the different VIs together into one text file.

http://zone.ni.com/devzone/cda/tut/p/id/3603


Cool, thanks for that. That method is working really nicely with ASCII encoding; the Unicode side still seems a bit tricky.


I've done something similar in the past. I treat the file as a whole as ASCII, but the value of each key may be ASCII or Unicode. Then in LV I interpret the key values accordingly, converting them from a string to a Unicode string if necessary. I've posted a bunch of Unicode tools, including file I/O examples, on NI Community that may help you out.

http://decibel.ni.co.../docs/DOC-10153
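One way to picture the "ASCII file, per-key conversion" scheme (a Python sketch; the \uXXXX escape convention here is my assumption, not necessarily what the posted tools use): the file on disk stays pure ASCII, and each value is decoded into a real Unicode string after reading.

```python
# The line below is ASCII-only on disk; the escapes spell out
# U+7A2E U+985E U+306E U+97F3 (the first four characters of the
# Japanese string used earlier in this thread).
line = r"jp=\u7a2e\u985e\u306e\u97f3"
key, _, value = line.partition("=")

# Per-key conversion: decode the escapes back into a Unicode string.
decoded = value.encode("ascii").decode("unicode_escape")
assert decoded == "種類の音"
```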


Why are they password protected?

Will we be seeing proper Unicode support soon?

Edited by ShaunR

Why are they password protected?

Likely because they make use of the undocumented UTF-16 nodes that have been in LabVIEW since about 8.6. And these nodes are likely undocumented because NI is still trying to figure out how to expose that functionality to the LabVIEW programmer without bothering him with the underlying Unicode difficulties, including but certainly not limited to UTF-16 on Windows vs. UTF-32 on anything else (except those platforms like embedded RT targets where UTF support usually is not even present, which is an extra stumbling block to making generic UTF LabVIEW nodes). Of course they could include the IBM ICU library or something along that line, but that is a noticeable extra size for an embedded system.
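The UTF-16 vs. UTF-32 split mentioned above is easy to see in terms of raw storage (Python sketch, illustration only):

```python
# The same 9-character string (all code points in the BMP) costs
# different amounts in the two encodings:
s = "種類の音を与えます"
utf16 = s.encode("utf-16-le")  # Windows-style wide characters
utf32 = s.encode("utf-32-le")  # common on other platforms

assert len(utf16) == 2 * len(s)  # 2 bytes per BMP character
assert len(utf32) == 4 * len(s)  # always 4 bytes per character
```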

Will we be seeing proper Unicode support soon?

It all depends on what you consider 'proper'. Those nodes will likely make it into one of the next LabVIEW versions. However, supporting Unicode everywhere, including the user interface (note that LabVIEW already supports proper multibyte encoding there), will likely be an exercise with many pitfalls, resulting in an experience that will not work right in the first few versions and might even cause trouble in non-Unicode use cases (which is likely the main reason they haven't really pushed for it yet). Imagine your normal UIs suddenly starting to misbehave because the Unicode support messed something up. And yes, that is a likely scenario, since international character encoding with multibyte and Unicode is such a messy thing.


Likely because they make use of the undocumented UTF-16 nodes that have been in LabVIEW since about 8.6. [...]

Ooooh. where are they?

It all depends on what you consider 'proper'. Those nodes will likely make it into one of the next LabVIEW versions. [...]

Indeed. I think most people (including myself) generally assume that Unicode support = any-language support, although that's a bit of a leap. If the goal is simply to make multilingual LabVIEW interfaces, then Unicode can be ignored completely in favour of UTF-8, which isn't code-page dependent (I've been playing with this recently and wrote my own code to detect it and convert to LabVIEW's Unicode representation so you don't get all the spaces). This would mean old programs would still function correctly (in theory, I think, but still playing).
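The detection step described above can be approximated very simply, because valid UTF-8 is self-identifying (Python sketch; the real implementation is ShaunR's own and not shown here):

```python
def looks_like_utf8(raw: bytes) -> bool:
    """Heuristic: bytes that decode cleanly as UTF-8 almost certainly
    are UTF-8 (or plain ASCII, which is a subset of UTF-8)."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

assert looks_like_utf8("hello".encode("ascii"))        # ASCII passes
assert looks_like_utf8("種類".encode("utf-8"))          # real UTF-8 passes
assert not looks_like_utf8("種類".encode("utf-16-le"))  # UTF-16 bytes fail
```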

Edited by ShaunR

There used to be a library somewhere on the dark side that contained them. It was very much like my unicode.llb that I posted years ago, which called the Windows WideCharToMultiByte and friends APIs to do the conversion, but it also had extra VIs that were using those nodes. And for some reason there was no password, even though they usually protect such undocumented functions strictly.

I'll try to see if I can find something either on the fora or somewhere on my HD.

Otherwise, using scripting, possibly together with one of the secret INI keys, allows one to create LabVIEW nodes too, and these two nodes show up in that list as well.


I already have my own VIs that convert using the Windows API calls. I was kinda hoping they were more than that. I originally looked at all this when I wrote PassaMak, but decided to release it without Unicode support (using the API calls) to maintain cross-platform compatibility. Additionally, I was put off by the hassles with special INI settings, the pain of handling standard ASCII, and a rather woolly dependency on code pages - it seemed a one-or-the-other choice and not guaranteed to work in all cases.

As with most of my stuff, I get to revisit things periodically, and I recently started looking again with a view to using UTF-8, which has the capability of identifying ASCII and Unicode characters (regardless of code pages). That should make it fairly bulletproof and boil down to basically inserting bytes (for the ASCII characters) if the INI key is set, and not if it isn't. Well, that's the theory at least, and so far, so good. Although I'm not sure what LV will do with 3- and 4-byte characters, and therefore what to do about them. That's the next step when I get time.
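The ASCII transparency and the 3/4-byte cases mentioned above, spelled out (Python sketch, illustration only):

```python
# ASCII survives UTF-8 encoding byte-for-byte, which is why built-in
# string functions keep working on UTF-8 text that is all ASCII.
assert "hello".encode("utf-8") == b"hello"

# Non-ASCII characters take 2 to 4 bytes:
assert len("é".encode("utf-8")) == 2            # Latin-1 range
assert len("種".encode("utf-8")) == 3           # most CJK (BMP)
assert len("\U0001D11E".encode("utf-8")) == 4   # outside the BMP
```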


While the nodes I spoke about probably make calls to the Windows API functions under Windows, they are native nodes (light yellow) and supposedly call the corresponding platform API on other platforms for converting Unicode (UTF-8, I believe) to ANSI and vice versa. The only platforms where I'm pretty sure they either won't even load, or if they do will likely be NOPs, are some of the RT and embedded platforms.

Possible fun can arise from the fact that the Unicode tables used on Windows are not exactly the same as on other platforms, since Windows has slightly diverged from the current Unicode tables. This is mostly apparent in collation, which influences things like the sort order of characters, but it might not be a problem in the pure conversion. It does, however, make one more difficulty with full LabVIEW support visible. It's not just about displaying and storing Unicode strings, UTF-8 or otherwise, but also about many internal functions such as sort, search, etc., which would have to have proper Unicode support too. Because of the differences in Unicode tables, these would either end up with slightly different behavior on different platforms, or NI would need to incorporate full-blown Unicode support into LabVIEW, such as the ICU library, to make sure all LabVIEW versions behave the same - but that would make them behave differently from the native libraries on some systems.


Indeed (to all of it). But it's rather a must now, as opposed to, say, 5 years ago. Most other high-level languages now have full support (even Delphi, finally... lol). I haven't been critical about this so far because NI came out with x64. Given a choice of x64 or Unicode, my preference was the former, and I appreciate the huge amount of effort that must have taken. But I'd really like to at least see something on the roadmap.

Are these the VIs you are talking about?

These I've tried. They are good for getting things in and out of LabVIEW (e.g. files or the internet) but no good for display on the UI. For that, the ASCII needs to be converted to UCS-2 BE and the Unicode needs to remain as it is (UTF-8 doesn't cater for that). And that must only happen if the INI switch is set; otherwise it must be straight UTF-8.

The beauty of UTF-8 is that it's transparent for ASCII, so inbuilt LV functions work fine. I use a key as a lookup for the display string, which is OK as long as it is an ASCII string. I can live with that. The real problem is that once the INI setting is set (or a control is set to Force Unicode after it is set), it cannot be switched back without exiting LabVIEW or recreating the control. So on-the-fly switching is only viable if, when it is set, ASCII can be converted. Unless you can think of a better way?
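The ASCII-to-UCS-2 BE conversion described above amounts to inserting a NUL byte before each ASCII byte (Python sketch; in LabVIEW this would presumably be done with byte-array manipulation):

```python
def ascii_to_ucs2_be(s: str) -> bytes:
    """Widen text to UCS-2 big-endian; for BMP-only text,
    UTF-16 BE and UCS-2 BE are identical byte-for-byte."""
    return s.encode("utf-16-be")

# For pure ASCII this is exactly "insert a NUL before each byte":
assert ascii_to_ucs2_be("hello") == b"\x00h\x00e\x00l\x00l\x00o"
```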


Yep, those are the nodes (note: not VIs). I'm aware that they won't help with UI display but only with reading and writing UTF-8 files or any other UTF-8 data stream in or out of LabVIEW. Display is quite a different beast, and I'm sure there are some people in the LabVIEW development department biting their nails and ripping out their hair trying to get that working right.
