Leaderboard

Popular Content

Showing content with the highest reputation on 09/15/2016 in all areas

  1. Actually btowc() is the single-byte version of mbtowc(). Both convert a single character, but the first only works for single-byte characters, while the second consumes as many bytes from a multibyte character sequence (MBCS) as are needed (and returns an error if the input byte stream starts with an invalid byte or is too short to describe a complete MBCS character for the current locale). mbstowcs() then works on whole MBCS strings, while mbtowc() only processes one character at a time; the sketch below shows the difference between btowc() and mbtowc() on a two-byte character.

     Please note that a character is generally not a single byte, although here in the western hemisphere you get quite far with assuming that it is; it's still not a safe assumption to work from. Definitely on *nix systems, which nowadays often use UTF-8 as the default locale, you automatically end up with multibyte characters for the umlaut, accented and other characters that many European languages use. Windows solves it differently, by using codepages for the non-Unicode environment: for western locales this simply means that the same byte value means something different for extended characters depending on the codepage you have configured. But even there you need MBCS encoding for most non-western languages anyhow.

     UTF-8 to UTF-16 conversion is fairly straightforward, although the simple approach of doing only some bit shifting can end up producing invalid UTF-16 characters. A fully compliant conversion is somewhat tricky to get right yourself, as there are some corner cases (such as code points above U+FFFF, which need surrogate pairs) that have to be taken care of.
    1 point
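
     A minimal C sketch of the difference described above, assuming a UTF-8 locale is available under the name used here (locale names vary per system):

     /* btowc() vs. mbtowc() on a two-byte UTF-8 character. */
     #include <stdio.h>
     #include <stdlib.h>   /* mbtowc */
     #include <string.h>
     #include <wchar.h>    /* btowc, WEOF */
     #include <locale.h>

     int main(void)
     {
         setlocale(LC_ALL, "en_US.UTF-8");  /* assumed locale name */

         const char *mb = "\xC3\xA4";       /* U+00E4 'ä' as two UTF-8 bytes */
         wchar_t wc;

         /* btowc() only looks at one byte: 0xC3 alone is not a complete
            character in UTF-8, so the conversion fails with WEOF. */
         if (btowc((unsigned char)mb[0]) == WEOF)
             puts("btowc: 0xC3 is not a valid single-byte character");

         /* mbtowc() consumes as many bytes as the character needs (here 2)
            and returns that count, or -1 for an invalid/incomplete sequence. */
         int n = mbtowc(&wc, mb, strlen(mb));
         printf("mbtowc consumed %d bytes -> U+%04X\n", n, (unsigned)wc);
         return 0;
     }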
  2. Yes, iconv() is designed for charset conversion. Possible bear trap (I haven't used it myself): a quick Google session turned up a thread which suggests that there are multiple implementations of iconv out there, and they don't all behave the same. At the same time, I guess ICU would've been overkill for simple charset conversion -- it's more of an internationalization library, which also takes care of timezones, formatting of dates (month first or day first?) and numbers (comma or period for the separator?), locale-aware string comparisons, among other things.

     Thinking about it some more, I believe @ShaunR does want charset conversion after all. This thread has identified two ways to do that on Linux:

     1. System encoding -> UTF-32 (via mbsrtowcs()) -> UTF-16 (via manual bit shifting)
     2. System encoding -> UTF-16 (via iconv(); see the sketch below)

     Hence the rise of cross-platform libraries that behave the same on all supported platforms.

     Do you have the NI Developer Suite? My company does, and we serendipitously found out that LabVIEW for OS X (or macOS, as it's called nowadays) is part of the bundle. We simply wrote to enquire about getting a license, and NI kindly mailed us the installer disc just like that.
    1 point
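
     A hedged sketch of the second path from the list above (system encoding straight to UTF-16 via iconv()). The encoding names ("UTF-16LE", and "" for the current locale's charset) follow glibc; per the bear trap above, other iconv implementations may spell these differently:

     #include <stdio.h>
     #include <string.h>
     #include <locale.h>
     #include <iconv.h>

     int main(void)
     {
         setlocale(LC_ALL, "");                   /* adopt the system locale */

         iconv_t cd = iconv_open("UTF-16LE", ""); /* "" = locale charset (glibc) */
         if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

         char in[] = "Grüße";                     /* text in the system encoding */
         char out[64];
         char *inp = in, *outp = out;
         size_t inleft = strlen(in), outleft = sizeof out;

         /* iconv() advances the pointers and decrements the counters as it
            converts; (size_t)-1 signals an invalid or incomplete sequence. */
         if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
             perror("iconv");
         else
             printf("produced %zu bytes of UTF-16LE\n", sizeof out - outleft);

         iconv_close(cd);
         return 0;
     }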
  3. DLL calls will block the thread they are executing in for the duration of the DLL function call. So yes, if you make many different DLL calls in parallel that all take nastily long to execute, you can of course use up all the preallocated threads in a LabVIEW execution system, even if all the Call Library Nodes are configured to run in the calling thread. However, if your DLL consists of many long-running, synchronous calls, you have trouble before you even get to that point, since such a DLL is basically unusable from non-LabVIEW programming environments, which generally are not multi-threaded out of the box without explicit measures taken by the application programmer. So I would guess that if you call such DLL functions, you either didn't understand the proper programming model of that DLL, or took the super duper easy approach of only calling into the uppermost, super easy dummy-mode API that only exists to demo the capability of the DLL, not to use it for real!

     .Net has some extra complications in addition to that, since LabVIEW has to provide a specific .Net context to run any .Net method call safely. So there it is quite easy to run into thread starvation if you tend to just call into the fully synchronous beginner API level of those .Net assemblies. But please note that this is not a limitation of LabVIEW: in fact, if you call lengthy synchronous APIs in most other environments, you run into serious problems at the second such parallel call already if you don't explicitly delegate those calls to other threads in your application (which of course have to be created explicitly in the first place). The problem with LabVIEW is that it lets you call more than one of these functions in parallel so easily, and it doesn't break down immediately, but only after you have exhausted the preallocated threads in a specific execution system.

     By using lower-level asynchronous APIs instead (see the sketch below for the general shape) you can completely prevent these issues and do the arbitration on the LabVIEW cooperative multithreading level, at the cost of somewhat more complex programming. With proper library design that can be fully abstracted away into a LabVIEW VI library or class, so that the end user only sees the API that you want them to use.
    1 point
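
     For the "lower-level asynchronous API" point, a hypothetical sketch of the API shape this implies (all names here are illustrative, not from any real library): the DLL exposes a start/poll/fetch triple, so every Call Library Node call returns quickly and a LabVIEW loop can poll at its own pace instead of parking a thread:

     /* Hypothetical asynchronous DLL API: no single call blocks for long. */
     typedef struct Job Job;                /* opaque handle owned by the DLL */

     Job *job_start(const char *input);     /* launches work on a DLL-internal
                                               thread and returns immediately */
     int  job_poll(const Job *job);         /* non-blocking: 1 = done, 0 = busy */
     int  job_result(Job *job,
                     char *buf, int len);   /* copies the result and frees the
                                               handle once job_poll() says done */

     On the LabVIEW side this maps to a small while loop around job_poll() with a wait inside, which keeps the arbitration in LabVIEW's cooperative scheduling as described above.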
  4. Well, LabVIEW is in fact MBCS-aware and as such uses whatever MBCS encoding is set on the system. That includes UTF-8 on Linux, for instance. For most things that is pretty similar to ASCII, but not always. I don't believe it is possible to set UTF-8 as the default MBCS on Windows, though. And no, you would not use btowc() and mbsrtowcs() together. Rather, mbsrtowcs() does for a string what btowc() does for a single character (well, really more what mbtowc() does; see the sketch below). btowc() only works for single-byte characters, which a LabVIEW string doesn't necessarily contain (the Asian language versions are definitely MBCS for sure).
    1 point
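
     A minimal sketch of the string-level conversion mentioned above, again assuming a UTF-8 locale is available under the name used here:

     /* mbsrtowcs(): convert a whole MBCS string to wide characters. */
     #include <stdio.h>
     #include <wchar.h>
     #include <locale.h>

     int main(void)
     {
         setlocale(LC_ALL, "en_US.UTF-8");  /* assumed locale name */

         const char *mb = "Größe";          /* MBCS string in the current locale */
         wchar_t wide[32];
         mbstate_t state = {0};

         /* Walks the whole string, consuming one complete multibyte
            character per output wchar_t; (size_t)-1 flags a bad sequence. */
         size_t n = mbsrtowcs(wide, &mb, 32, &state);
         if (n == (size_t)-1)
             perror("mbsrtowcs");
         else
             wprintf(L"converted %zu wide characters\n", n);
         return 0;
     }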
