
OpenG "Variant Data" palette not supported in NXG 3.0?



Hi,

We currently encounter a serious issue in our company because we use some functions of the OpenG Variant Data palette in our own toolkits (Get Cluster Name, Get Cluster Element by Names, Get Cluster Element Names, Get Strings from Enum TD...).

After some research, this appears to be linked to the new way LabVIEW NXG encodes strings compared to LabVIEW (Unicode instead of ASCII). As a result, functions that cast bytes to strings, or functions such as String Length, cannot be used as before in LabVIEW NXG.

See this link for more information https://forums.ni.com/t5/NI-Blog/Designing-LabVIEW-NXG-How-Unicode-Benefits-You/ba-p/3886228?profile.language=fr
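To illustrate the difference the linked post describes, here is a minimal sketch (in Python rather than LabVIEW, since the distinction is language-agnostic) of why "string length" stops being a single number once strings are Unicode instead of extended-ASCII byte buffers:

```python
# Sketch (not LabVIEW code): why byte-oriented string functions break
# once strings are Unicode rather than one-byte-per-character buffers.

text = "Résumé"  # 6 characters

# Under an extended-ASCII (Latin-1) model, character count and byte
# count are the same thing:
assert len(text.encode("latin-1")) == 6

# Under UTF-8 (what LabVIEW NXG uses), accented characters take more
# than one byte, so character length and byte length diverge:
assert len(text) == 6                  # characters
assert len(text.encode("utf-8")) == 8  # bytes: each "é" is 2 bytes
```

Any code that assumed "length of string = number of bytes" silently breaks on the second model, which matches the behaviour described above.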

So we think this palette disappeared because of this, but we don't understand why it was still there in LabVIEW NXG 2.0 and is no longer in LabVIEW NXG 3.0.

 

Anyway, how do you think we should handle this? Are there any workarounds?

Thank you for your help!

Link to comment
23 hours ago, ShaunR said:

Is this all strings and if so how do functions such as TCP/UDP Read/Write  and Byte Array to String/String to Byte Array operate?

Network functions have always accepted byte arrays and strings; I don't think this has changed: https://www.ni.com/documentation/en/labview/latest/node-ref/tcp-read/

The string/byte array conversions are in the original link under "Working with Different Encodings" -- the node still exists, you just have to know whether the string is "extended ASCII" or Unicode:
https://www.ni.com/documentation/en/labview/latest/node-ref/byte-array-to-string/
https://www.ni.com/documentation/en/labview/latest/node-ref/string-to-byte-array/
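The "you have to know the encoding" point can be shown in a couple of lines (a Python sketch, not the NXG node itself): the same byte array decodes to different text depending on which encoding you assume.

```python
# Sketch: why a Byte Array to String conversion must be told the
# encoding -- the same bytes mean different text under different
# assumptions.

raw = bytes([0xC3, 0xA9])  # two bytes

assert raw.decode("utf-8") == "é"     # one character under UTF-8
assert raw.decode("latin-1") == "Ã©"  # two characters under Latin-1
```

Neither interpretation is an error; the bytes simply don't carry their encoding with them, so the caller has to supply it.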

 

However, I don't think this is the problem. I think Lois is conflating item (A), which is that NXG has changed the 'default' encoding for the string data type to UTF-8, with item (B), which is that NXG has removed a lot of meta-programming facilities, including (I assume) a dependency of the OpenG data-type parsing functionality and (I know) a dependency of every variant-to-X (XML, JSON, INI) toolset ever made for this language. It looks like at least some of the necessary components for the OpenG library do exist, though.

To me, (A) is a positive change, (B) is a breaking change that makes the language unusable for me until it's resolved, and it sounds like at present NI has 0% interest in doing so.

Edited by smithd
Link to comment
On 4/27/2019 at 2:57 AM, smithd said:

To me, (A) is a positive change, (B) is a breaking change that makes the language unusable for me until it's resolved, and it sounds like at present NI has 0% interest in doing so.

Thanks for doing the leg-work. :worshippy:

A) is a good thing, but I would have preferred ASCII as the default for compatibility. I wouldn't relish going through an already-working application and having to explicitly change all the nodes. The only time you would know you'd missed a node is when data intermittently gives incorrect results.

B) I don't really care about, since I have only used it once or twice, mainly for JSON.

The other question is: can it actually display UTF-8/16? We can already deal with UTF-8 with the primitive. What we can't do, currently, is display it on non-language-specific OSes.

Edited by ShaunR
Link to comment
1 hour ago, ShaunR said:

If you want to display Japanese, you cannot on an English version of Windows - you just get question marks.

Not true at all. I have an application I developed that includes on-the-fly translation to pretty much any predefined language using UTF-8 string lookup files. It works fine on my English-language Windows (developed on Win 7, works fine on Win 10).

Obviously you need to have the correct language packs installed in order to get the fonts.

Picture is a snippet from one of the translation files.

[Attachment: Capture.PNG — snippet from one of the translation files]
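The lookup-file mechanism described above could be sketched roughly like this (a hypothetical illustration, not Neil's actual implementation; the tab-separated file format is an assumption):

```python
# Hypothetical sketch of a UTF-8 string-lookup translation table.
# Assumed format: one "source<TAB>translation" pair per line, read
# from a UTF-8 encoded file in practice.

lookup_data = "Start\t開始\nStop\t停止\n"  # would come from the file

translations = {}
for line in lookup_data.splitlines():
    if "\t" in line:
        source, translated = line.split("\t", 1)
        translations[source] = translated

def translate(text):
    """Return the translation, falling back to the original string."""
    return translations.get(text, text)

assert translate("Start") == "開始"
assert translate("Quit") == "Quit"  # untranslated strings pass through
```

Because the file is UTF-8 throughout, the same mechanism works for any target language, which is the point made above; whether the OS can render the glyphs is a separate question of installed fonts/language packs.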

Edited by Neil Pate
Link to comment
7 hours ago, Neil Pate said:

Not true at all. I have an application I developed that includes on-the-fly translation to pretty much any predefined language using UTF-8 string lookup files. It works fine on my English-language Windows (developed on Win 7, works fine on Win 10).


You can get it to display Japanese under certain conditions (an INI setting and Force Unicode Text on the control), but it is then permanently "stuck", so you end up with spaces between characters for ASCII. It also requires converting the text to UTF-16 LE via Windows calls (hence you need the language packs), so it is not cross-platform, and it doesn't work at all with filename controls or file functions.
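The UTF-8 to UTF-16 LE conversion mentioned above (which classic LabVIEW has to delegate to Windows API calls) is, in a language with native Unicode strings, a one-line re-encode — a sketch for comparison:

```python
# Sketch: the UTF-8 -> UTF-16 LE conversion discussed above, done
# natively instead of through Windows API calls.

text = "日本語"
utf8_bytes = text.encode("utf-8")       # 9 bytes (3 per character)
utf16_bytes = text.encode("utf-16-le")  # 6 bytes (2 per character here)

# Round-trip: decode the UTF-8 bytes, re-encode as UTF-16 LE
assert utf8_bytes.decode("utf-8").encode("utf-16-le") == utf16_bytes
```

The need to shuttle text through OS-specific conversion calls is exactly what makes the classic-LabVIEW workaround non-portable.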

Edited by ShaunR
Link to comment
On 4/27/2019 at 8:58 PM, ShaunR said:

B) I don't really care about, since I have only used it once or twice, mainly for JSON.

The issue drjd/wiebe identified will interfere with any application where you want to generically accept a data element and inspect the names of contained elements. I think that is mostly the serialization libs. I'm actually very surprised this doesn't bother you; I'd have assumed you would use these features.

Edited by smithd
Link to comment
11 hours ago, ShaunR said:

You can get it to display Japanese under certain conditions (an INI setting and Force Unicode Text on the control), but it is then permanently "stuck", so you end up with spaces between characters for ASCII.

I have blocked out those painful memories...I did eventually get something working which is robust, but if I recall it was pretty ropey until it worked properly (lots of weirdness...).

Link to comment
11 hours ago, smithd said:

I'm actually very surprised this doesn't bother you, I'd have assumed you would use these features.

For JSON I use SQLite nowadays, since all my apps have it (and it's searchable with SQL). Other than that, I don't use OpenG at all and, as far as I can remember, JSON was the only reason I used the type functions. I'm using 2009, so it's not using the newer ones, and the only thing I can think of that could cause a problem is if the class names were UTF-16. I could easily work around that, though.

I think anyone that has used variant look-ups as dynamic fast-tables would be shafted, though.

Link to comment
4 hours ago, Neil Pate said:

I have blocked out those painful memories...I did eventually get something working which is robust, but if I recall it was pretty ropey until it worked properly (lots of weirdness...).

When I wrote Passa Mak I went through all this. It replaces captions and tip strips and anything else that has a text property on the fly, but the inevitable support required to allow people to use languages like Japanese or Chinese was just ridiculous, so I didn't.

The real problem was filenames, though. How did you resolve that?

Link to comment
4 hours ago, ShaunR said:

I think anyone that has used variant look-ups as dynamic fast-tables would be shafted, though.

I'm not really sure! That assumes that you would use key values that are not representable in 7-bit ASCII. Definitely possible if they are defined by the operator through the UI rather than programmatically, but even then I'm not sure I can easily see the problem. Things get wonky when you start to mix and match functionality between UTF-aware and non-UTF-aware systems, but as long as they stay isolated from each other it shouldn't necessarily be a problem.
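The isolation argument can be made concrete (a Python sketch, since variant lookup tables behave like any keyed dictionary in this respect): lookups keyed on encoded strings work fine as long as the same encoding is used on the way in and the way out.

```python
# Sketch of the isolation argument: a lookup table keyed on encoded
# strings is safe within one encoding, and only breaks when encodings
# are mixed.

table = {}
key = "Ωmega"  # not representable in 7-bit ASCII

table[key.encode("utf-8")] = 42  # store under the UTF-8 bytes

# Same encoding on lookup -> found:
assert table["Ωmega".encode("utf-8")] == 42

# Mixing encodings breaks the lookup -- the UTF-16 bytes differ:
assert "Ωmega".encode("utf-16-le") not in table
```

So a variant fast-table keyed on operator-entered text only gets "shafted" if some code paths encode the keys differently from others.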

That's why I never really bothered with the UTF-8 functionality in LabVIEW. It's not fully transparently implemented, and given the legacy of various LabVIEW parts I'm very much convinced that there is NO possibility to implement it in a way that will not break backwards compatibility quite badly in several places. That's the main reason it was never released as a feature: they could probably have thrown several dozen programmers at it and still not have a feature that would simply work without badly affecting applications upgraded to the new version. The unofficial UTF-8 support in LabVIEW was a proof-of-concept project (most likely driven by one or two enthusiastic programmers on the LabVIEW team), and it showed that implementing it in a clean and unobtrusive way is basically impossible, so no more effort was put into making it a full feature that could be released.

The problem starts with such basic issues as the fact that many LabVIEW functions use strings exclusively for data elements that are actually byte streams rather than text strings. The Flatten to String and Unflatten from String functions are the most notorious of them. They never should have been implemented using strings, but byte arrays instead. It goes further with functions like the TCP Read and Write nodes. While the Write node has actually accepted byte arrays for several versions now, there is no way to make TCP Read return byte arrays too. Same for File Read and Write and VISA Read and Write.
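The byte-stream-versus-text distinction Rolf describes can be sketched in a few lines (Python's `bytes`/`str` split plays the role that "byte array" versus "string" should have played in LabVIEW; this mimics only the big-endian aspect of LabVIEW's flattened format, not the format itself):

```python
# Sketch: flattened data is a byte stream, not text. It only "fits"
# in a string type while that type is really a byte array.

import struct

# "Flatten" a double to its big-endian byte representation:
flattened = struct.pack(">d", 3.5)

assert isinstance(flattened, bytes)  # a byte stream, not text

# Arbitrary flattened data is not valid text in any encoding in
# general, so a Unicode string is the wrong container for it.
value, = struct.unpack(">d", flattened)
assert value == 3.5
```

Once the string type becomes genuinely Unicode, every function that smuggled binary data through it (Flatten/Unflatten, TCP Read, and so on) is exposed.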

Ultimately, all those functions (except Flatten and Unflatten) should probably have variants that accept either binary byte arrays or, for backwards compatibility, ASCII strings. Then there should also be a new string datatype that carries an inherent encoding attribute with it, and last but not least a library of nodes to convert various encodings from one to the other, including into and from binary byte arrays. Extra bonus points for letting those nodes accept this new string type too and allowing the configuration of an automatic conversion to a specific encoding when writing out or reading in.
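The proposed "string datatype that carries an inherent encoding attribute" might look roughly like this (a hypothetical sketch; the names are illustrative, not an NI API):

```python
# Hypothetical sketch of a string type that carries its encoding with
# it, as proposed above. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class EncodedString:
    data: bytes    # the raw payload
    encoding: str  # e.g. "ascii", "utf-8", "utf-16-le"

    def convert_to(self, target_encoding):
        """Re-encode the payload, going through a Unicode string."""
        text = self.data.decode(self.encoding)
        return EncodedString(text.encode(target_encoding), target_encoding)

s = EncodedString("héllo".encode("utf-8"), "utf-8")
utf16 = s.convert_to("utf-16-le")
assert utf16.data.decode("utf-16-le") == "héllo"
```

Because the encoding travels with the data, I/O nodes could convert automatically at the boundary instead of forcing every caller to track the encoding out-of-band.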

This would solve the problem of being able to write encoding-aware code, but it still leaves uncountable places in the LabVIEW UI and other internal areas, including things like the resource format of most of its files, that would also need to be improved to allow full UTF-8 support. And that part is an almost unmanageable construction site.

Edited by Rolf Kalbermatter
Link to comment
4 hours ago, Rolf Kalbermatter said:

The problem starts with such basic issues as the fact that many LabVIEW functions use strings exclusively for data elements that are actually byte streams rather than text strings.

They seem to have that covered. I would be more worried that, because it is .NET, the default is UTF-16 LE, so any strings from property nodes could be that instead of ASCII. Things like the version/serial numbers, label text, class names, etc. If that were the case, I could see a lot of utilities breaking.

Link to comment
7 hours ago, ShaunR said:

When I wrote Passa Mak I went through all this. It replaces captions and tip strips and anything else that has a text property on the fly, but the inevitable support required to allow people to use languages like Japanese or Chinese was just ridiculous, so I didn't.

The real problem was filenames, though. How did you resolve that?

I did not try to translate any filenames. Only strings, numerics and booleans.

Link to comment
