Everything posted by Rolf Kalbermatter
-
Not GDI+ but GDI. And it's not 10 years later but 20 years later. But you have to consider the implications. Anything 2D-based in LabVIEW is done through an internal 2D API that maps to whatever drawing primitive API is available on the platform. For MacOS that was QuickDraw (and probably still is, as rewriting the code to interface to Quartz (CoreGraphics) is anything but a trivial job), for Windows that was and is GDI, and for Unix systems that is X Windows. All of these are the basic drawing APIs most applications still use for their UI drawing, unless they only use standard system widgets and draw nothing themselves. OpenGL is used almost exclusively in applications with 3D output rendering, and GDI+ is used by most applications only indirectly, because the .Net UI framework builds on it; very few applications actually make direct use of the features GDI+ offers. And DirectX is an API almost exclusively used by game engines that don't want multi-platform support; otherwise they would likely go to OpenGL instead.

For almost everything in most applications, the basic GDI, QuickDraw, Quartz and X Windows systems are more than fast enough, and LabVIEW is no exception there. The only areas that would benefit somewhat from a faster API are possibly graphs (although I would not expect a huge speed improvement there) and the 2D picture control. But you always have to weigh effort against benefit, as well as the possible fallout from incompatibilities. Rewriting the 2D graphic primitive system to take advantage of newer technologies just for the Picture Control would be far too costly, and changing it for everything 2D-related would bring little speed benefit while being a resource-intensive project. It would likely cause all kinds of problems: many new bugs, but also more subtle issues such as small (and sometimes quite visible) differences in the appearance of the entire UI rendering. In short, a lot of work for little visible benefit, with a good chance of getting criticism for pixel-to-pixel inaccuracies compared to the previous implementation.

Rewriting a perfectly working subsystem in an existing app is always a very risky business, and LabVIEW is no different there. The Picture Control, the only component in LabVIEW that would probably gain much from this, is not a core component, as it is only used by a few. There might be a rewrite at some point, especially when retina-type displays become the new standard and subpixel rendering inside the app gets more and more important. But LabVIEW isn't a prime candidate for that, and the LabVIEW developers won't take on such a project just for fun and bragging rights! Also, retina-type support under Windows isn't really ready yet and is probably quite some time away on Unix.
-
Strictly speaking, GPL could indeed be a problem, since many consider the linking clause in the GPL license to also apply to dynamic linking, which the ODBC manager has to do no matter what if it wants to use any ODBC driver. This is exactly why the LGPL was invented: it maintains all the protection of the GPL on the library code, but allows linking the library with other non-(L)GPL code without violating the license. Even here, some feel that the LGPL really only allows dynamic linking and that static linking is not really proper. I personally tend to agree with this, mostly to be on the safe side.

However, considering that myODBC is a shared library, and that the non-GPL ODBC manager in Windows has to load it and link to it dynamically in ALL cases, the question of course arises what use the myODBC driver would have on Windows if the GPL license did not allow it to be loaded and dynamically linked by non-GPL software. So either the GPL license has to be interpreted, at least in this case, as intending to allow dynamic linking, or the choice of the GPL license instead of LGPL by the myODBC developers is simply stupid. Unfortunately I can't find any specifics about the license of the Connector/ODBC component, just that MySQL itself is GPL, which would indeed make one assume that Connector/ODBC falls under the same license.

In any case, the LabVIEW Database Toolkit has no direct relation to the myODBC driver; at minimum the Windows ODBC manager, and then the Windows ODBC-to-ADO bridge, sit in between. So if loading and using the driver in any Windows ODBC-based application is considered alright, then using it with the Database Toolkit has to be alright too.
-
Actually, most vector-based formats are built somewhat like the LabVIEW Picture Control stream. Not sure about SVG, but WMF/EMF and the Macintosh PICT format are like that. The performance of the Picture Control most likely suffers partly during the building of the picture stream, where all those Append String nodes perform rather badly, and partly during rendering, because it is built on LabVIEW internal routines that map more or, more likely, less directly to the platform API. There is also the fact that it likely maps to good old GDI instructions under Windows rather than to a higher-performance interface like GDI+ or OpenGL. But GDI+ and OpenGL were not an option in LabVIEW 2.5 when this was invented, and porting it later to these newer APIs would likely cause many compatibility issues that could break existing applications. The 3D picture control (the built-in one, not the ActiveX control) should be much better in terms of performance, although it is not ideal for 2D vector-based images but specifically meant for 3D visualization.
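To illustrate the append cost, here is a minimal C sketch (hypothetical helper names, error handling omitted; not LabVIEW's actual implementation). Growing a stream one opcode at a time forces a reallocate-and-copy on every append, which is essentially what a long chain of Append String nodes does:

```c
#include <stdlib.h>
#include <string.h>

/* Naive append: every call may reallocate and copy the whole stream built
   so far, so n appends cost O(n^2) byte copies overall. */
char *append_naive(char *buf, size_t *len, const char *op, size_t opLen)
{
    buf = realloc(buf, *len + opLen);
    memcpy(buf + *len, op, opLen);
    *len += opLen;
    return buf;
}

/* Growing the buffer geometrically amortizes the copies to O(n) - the
   usual remedy when a stream is assembled from many small pieces. */
char *append_amortized(char *buf, size_t *len, size_t *cap,
                       const char *op, size_t opLen)
{
    if (*len + opLen > *cap) {
        *cap = (*len + opLen) * 2;
        buf = realloc(buf, *cap);
    }
    memcpy(buf + *len, op, opLen);
    *len += opLen;
    return buf;
}
```

On the diagram, the equivalent trick is to preallocate a string or byte array of roughly the final size and replace into it, rather than concatenating in a loop.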
-
Actually, many toolkits work just fine on the Mac. It's mostly the installers that support only Windows. Installing the toolkit on a Windows system and copying it over to a Mac works for many of them, as long as they are purely VI-based and don't contain external code in the form of DLLs.
-
The C runtime is mostly backwards compatible. There have been hiccups with both the MS Visual C runtime and the GCC libc in the past. MS "solved" the problem by making the C runtime a side-by-side (SxS) dependency, so that each executable module (DLL and EXE) loads whatever version of the C runtime library it was compiled against, and thereby created a huge new problem: if a module was compiled with a different Visual C version, it loads a different C runtime into the process, and havoc begins as soon as you pass any C runtime objects between those modules. This includes heap pointers, which cannot be freed by a different C runtime library than the one that allocated them; even when using just the C runtime, you have to make sure to allocate and destroy memory objects always within the same C runtime scope. More fundamental things like file descriptors are a problem too: basically anything that takes a file descriptor in its function interface will completely fail if made to operate on objects that were created in a different C runtime library. Exception handling is another area that changes significantly with every Visual C version and can have nasty effects when mixed.

This is all rather painful, and it also seems mostly unnecessary when you consider that almost everything in the MS C runtime library ultimately maps to WinAPI calls at some point. Microsoft itself avoided the problem by linking all of its own Windows tools against the original msvcrt.dll, while declaring that library private after Visual C 6. The only ways to use msvcrt.dll instead of msvcrtXX.dll are to build with Visual C 6 or with the older WinDDK compiler toolchains.
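The practical rule that follows is that a library handing out heap memory must also export the matching release function. A minimal hypothetical sketch (names invented, error handling omitted):

```c
/* mylib.c - memory allocated here lives on the heap of the C runtime THIS
   module was linked against, so only this module may free it. */
#include <stdlib.h>

__declspec(dllexport) void *MyLib_Alloc(size_t size)
{
    return malloc(size);    /* allocated by this module's CRT heap */
}

__declspec(dllexport) void MyLib_Free(void *ptr)
{
    free(ptr);              /* released by the SAME CRT that allocated it */
}
```

A caller built with a different Visual C version must call MyLib_Free() rather than its own free(), because the two modules may each have loaded their own msvcrXX.dll with separate heaps.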
-
Thanks for clarifying, Greg. I was pretty sure that this was the case, but started to wonder after Shaun's reply. Other than that I do fully agree with Shaun. DLLs are not evil, but they are more complicated in terms of maintenance and distribution, since you need one DLL/shared library for every potential target OS, and if the DLL is a LabVIEW DLL it gets even a little more involved. For that reason, distributing LabVIEW-created DLLs to LabVIEW users is mostly a pain in the ass and will likely annoy the LabVIEW user too, as he can't look into the driver and debug it should the need arise. Distributing a LabVIEW DLL driver to non-LabVIEW users is a possibility, although the fact that one needs the correct LabVIEW runtime installed is probably going to cause some sputtering among C/C++/whatever users.

Hmm, could you clarify a bit what you were trying to do there? It's not easy to guess what you are trying to demonstrate from this message box, and that makes it very hard to come up with ideas to explain the behavior you see. To me this looks like a LabVIEW 2012-generated shared library that you are trying to load on a computer that does not have the 2012 runtime engine installed.
-
I also read some criticism about OO there. Not that it claims functional programming is better than OO programming; neither is better than the other in general, AFAIC, but many OO fanatics tend to pull the OO sword for everything, even when a simple functional approach would be much easier and quicker. OO has its merits, but making functional programming more or less impossible, as Java actually does, is simply taking the idea over the top. And those are not the only issues I have with Java, but I haven't thrown it away yet.
-
There is no free ride! A DLL/shared library is always platform specific, and that means CPU architecture, OS and bitness. All three have to match for the shared library to be even loadable. That is why distributing a LabVIEW-written driver as a shared library is probably one of the worse ideas one can have. You get the same effect as when distributing VIs without diagrams, because that is basically what is inside the shared library. And no, unfortunately you can't leave the diagrams intact inside the DLL and hope that it will still work when loaded into a different version of LabVIEW even though the bitness or OS doesn't match. The DLL still executes in the context of the runtime engine, which has no compiler, nor even the ability to load the diagram into memory.

The most user-friendly approach is to distribute the instrument driver as LabVIEW source (I personally consider instrument drivers distributed as DLL/shared library at most a compromise, and loathe it) and to create a shared library from it for the non-LabVIEW users, worrying about OS/bitness versions and such as requests come in. There won't be any way around creating special versions of your test program that access the DLL instead of the native driver in order to test the shared library version. The upside is that debugging any driver-related issues during testing is MUCH easier when you leave everything as diagrams, and only check after the final build that it also works as a DLL. Fortunately, the only shared library that cannot be created by LabVIEW is the VxWorks one!

But I really echo Shaun's comments. If you have any chance to avoid the shared library for your LabVIEW users, you save yourself a lot of pain and sweat and make your LabVIEW users much happier too. Building multiple shared libraries after every modification of your LabVIEW code is no fun at all. And LabVIEW only creates shared libraries for the platform it is running on, so you need as many (virtual) OS/LabVIEW installations as you want to support platforms, and each one needs to be tested as well after every build.
-
In the scenario of this thread it's not an option. But for a C DLL to be called by a LabVIEW program, it is useful for anyone who gets to use my DLL in LabVIEW, ideally with an accompanying LabVIEW VI library! Also, the pointer variant, while indeed a possible option, is in LabVIEW in fact seldom significantly faster and quite often slower. If there is any chance for the caller to know the size of the buffer beforehand (maybe by calling a second API, or because the amount of data to be returned is defined anyhow, as in data acquisition for instance), using caller-allocated buffers passed as C pointers into the function is at least as fast or faster, since the DLL can copy the data directly into the buffer. With a DLL-allocated buffer you end up in most cases with a double data copy: once in the DLL when it allocates the buffer and copies its data into it, and once with the MoveBlock() in the caller. So claiming that it is always faster is not correct. At least inside LabVIEW it is usually about the same speed, only with the data copy happening in one case inside the DLL and in the other in the caller. Only when the DLL cannot determine the buffer size until the actual retrieval itself can DLL-allocated buffers be an advantage, as they avoid having to allocate a potentially hugely oversized buffer.

If the potential user of the DLL is a C program, this is different. In that case, returning DLL-allocated buffers is indeed usually faster, as you do not need the extra MoveBlock()/memcpy() call afterwards. But in any case it is a disadvantage that the API gets complicated to a level that stretches the knowledge limits of many potential DLL users, and not just LabVIEW Call Library Node users, as it is non-standard and also invites resource-management bugs because of unclear situations about who eventually needs to deallocate the buffers. The returned pointer could also be a statically allocated buffer inside the DLL (often the case for strings) that would be fatal to try to free(). Another issue is that your library absolutely needs to provide a matching dispose() function, since the free() function the caller may be linking to might operate on a different heap than the malloc() the DLL used. The only real speed improvement comes when the data-producing entity can directly create the managed buffers the final caller will eventually use, and C pointers don't count as such in LabVIEW, since you have to do the MoveBlock() trick eventually.

One more comment with respect to the calling convention. If you ever intend to create the shared library for non-Windows platforms too, the C calling convention is really the only option. If you start out with stdcall now and eventually decide to create a Linux or MacOS version of your shared library, you would have to either distribute different VI libraries for Windows and non-Windows platforms, or bite the bullet and change the entire VI library to the C calling convention for all users, likely introducing lots of crash problems for users who find it normal to grab a DLL copy from somewhere and copy it into the system or application directory to "fix" all kinds of real and imagined problems. At least there are tons of questionable links among the top Google hits about DLL downloads to fix and improve the system whenever I google a particular DLL name, along with so-called system performance scanners that offer to scan my system for DLL problems! I never tried them, but I would suspect 99% of them do nothing really useful, either containing viruses and trojans or trying to scare the user into downloading the "improved" program that can also fix the many "found" issues, of course for an obolus in the form of hard currency.
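To make the caller-allocated pattern above concrete, here is a minimal sketch (hypothetical function and device name, trivial error handling): one function that both reports the required size and, on a second call, fills the caller's buffer with a single copy:

```c
#include <string.h>

int GetDeviceName(char *buf, int *bufLen)
{
    static const char name[] = "PXI-4461";   /* stand-in for real data */
    int needed = (int)sizeof(name);

    if (buf == NULL || *bufLen < needed) {
        *bufLen = needed;       /* tell the caller how much to allocate */
        return -1;              /* "buffer too small" */
    }
    memcpy(buf, name, (size_t)needed);  /* one copy, straight into caller memory */
    *bufLen = needed;
    return 0;
}
```

From LabVIEW, the first call tells the diagram how large a string or array to preallocate (e.g. with Initialize Array) before the second call through the Call Library Node fills it.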
-
Actually, using fully managed mode is even faster, as it often avoids the additional memory copy involved with the MoveBlock() call. But at least in C and C++ it's only an option if you control both the caller and the callee, or, in the case of a LabVIEW caller, know exactly how to use the LabVIEW memory manager functions in the shared library.
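For reference, this is roughly what the managed variant looks like in C, using the memory manager functions from LabVIEW's extcode.h (a minimal sketch; the CLN parameter would be configured to pass a LabVIEW string handle by pointer, and the DLL is built against cintools/labviewv.lib):

```c
#include "extcode.h"

MgErr FillLVString(LStrHandle *strH)
{
    static const char data[] = "managed by LabVIEW";
    int32 len = (int32)(sizeof(data) - 1);

    /* Resize the handle on LabVIEW's own heap; uB = unsigned byte elements. */
    MgErr err = NumericArrayResize(uB, 1, (UHandle *)strH, len);
    if (err != mgNoErr)
        return err;

    MoveBlock(data, LStrBuf(**strH), len);  /* copy payload into the handle */
    LStrLen(**strH) = len;                  /* set the string's length field */
    return mgNoErr;
}
```

Because LabVIEW itself allocated (and will deallocate) the handle, no caller-side preallocation and no second MoveBlock() copy are needed.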
-
Unless your string can have embedded NULL bytes that should not terminate it, there should be no need to pass string parameters as byte arrays. In fact, when you configure a CLN parameter as a C string pointer, LabVIEW will on return explicitly scan the string for a NULL byte (unless you configured the parameter as constant) and terminate it there. This is usually highly desirable for true strings. If the buffer is binary data that can contain 0 bytes, however, you should indeed pass it as a byte array pointer to avoid having LabVIEW scan it for a NULL character on return.
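As an example (a hypothetical read function), the frame below legitimately contains 0x00 bytes. Returned through a CLN parameter configured as a C string pointer, it would appear truncated at the first 0x00 on return; configured as a U8 array pointer, it arrives intact:

```c
#include <string.h>

int ReadRawFrame(unsigned char *buf, int bufLen)
{
    static const unsigned char frame[] = { 0x02, 0x00, 0xFF, 0x00, 0x03 };
    int n = bufLen < (int)sizeof(frame) ? bufLen : (int)sizeof(frame);
    memcpy(buf, frame, (size_t)n);  /* copied verbatim, no NULL scan implied */
    return n;                       /* explicit length; strlen() would lie */
}
```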
-
No, you can't avoid having the caller preallocate the strings and arrays if you want everything to work the way you imagine. There is no way the LabVIEW DLL can allocate C string or array pointers and return them to the caller without either limiting the caller to very specific deallocator functions provided by your library, or requiring it to never link against a different C runtime library than the one you used (and that doesn't just mean a specific type of compiler, but even a specific C runtime version, down to the last version digit, when using side-by-side (SxS) libraries, which all Visual C versions since 2005 do).

This is the old problem of managed versus unmanaged code. C is normally completely unmanaged! There exists no universally accepted convention in C that would allow allocating memory in one place and deallocating it in another without knowing exactly how it was allocated. This requires full control over both the place where it gets allocated and the place where it gets deallocated, and having both in the caller is seldom the case and usually also perverts the idea of libraries almost completely. The only way to avoid having the caller preallocate the arrays (and strings) is a very strict contract between caller and callee about how memory is allocated and deallocated (basically, this is one main aspect of what managed code means). This happens, for instance, in DLLs that are specifically written to handle LabVIEW native datatypes: LabVIEW as a caller does not have to preallocate buffers of unknown size, and the DLL can allocate and/or resize them as needed and pass them back to LabVIEW. In this case the contract is that any variable-sized buffer is allocated and deallocated exclusively through LabVIEW memory manager functions. This works as long as you make sure there is only one LabVIEW kernel mapped into the process that does this. I'm not entirely sure how they solved that; there must be a lot of trickery when loading a LabVIEW DLL created in one version of LabVIEW into a different version of LabVIEW to make sure buffers are allocated by the same memory manager when using native datatypes.

But forcing your C users to use LabVIEW manager functions so you can pass LabVIEW native datatypes as parameters is not an option either, since there is no officially sanctioned way to call into the runtime system used by the LabVIEW DLL from a non-LabVIEW process. Also, your C programmers would likely spew poison and worse if you told them they had to call this and that function in exactly such a way to prepare and later deallocate the buffers, using some (to them) obscure memory manager API. This is not even bad intention by NI and the LabVIEW developers, but simply how programming works. The only universally safe way of calling functions with buffers is to both allocate and deallocate them in the caller. Anything else requires a very strict regime of memory manager calls, which can work if it is designed into the programming framework from scratch (C#, for instance), but C and C++ existed long before any programming framework cared about such things. Many programmers have attempted to bolt something like this onto C and C++ later, but each came up with a different interface, and each of them will always remain an isolated solution not accepted by the huge majority of other C and C++ users.

Basically, if you want to go the path you described, you will have to bite the sour apple, use C pointers for arrays and strings, and require the caller to preallocate those buffers properly.
-
You would like to receive absolution to use XNodes, despite all the well-known comments out there. The only people who could really give it most likely won't, as they are not allowed to, and the rest of us can't, other than the few who have tried them. Working on them seems a rather crash-intense affair; using them seems a bit safer, but the mileage may vary greatly depending on LabVIEW version, OS and what else, including the position of the moon. What I can safely say is that there is absolutely no guarantee about how XNodes will behave in future versions of LabVIEW. They may be improved, left to code-rot that causes more crashes in newer versions, or eventually discontinued entirely and removed from future LabVIEW releases. As such, I would consider it a totally irresponsible decision to use them for anything but private experiments.
-
So I've been fighting with this a bit over the weekend and came across a multitude of issues. The first one is that most ZIP utilities, at least on Windows, seem to use the OEM codepage to store ASCII information in the ZIP archive, whereas LabVIEW, as a true GUI application, of course uses the default ANSI codepage. Both are set depending on the language setting in the International Settings control panel, but they are usually totally different codepages, with similar character glyphs at entirely different code positions. In addition, ZIP files have a feature to store the file name as a UTF-8 string in the archive directory. So far so good. Implementing the correct translation from the LabVIEW ANSI codepage to the OEM codepage and back is fairly trivial on Windows, a bit more complicated on MacOSX (and only with limited accuracy, since the Mac traditionally uses somewhat different character translation tables than Windows), and on Linux a complete impossibility without linking to external libraries like iconv, which might or might not be available on a particular Linux distribution! So I'm a bit in limbo here about how to go about this, because adding an entire codepage translator to LVZIP for non-Windows targets seems like rather bad overkill.

While investigating this I also found another issue, entirely independent of LVZIP. Suppose you have a file on your disk whose name contains characters not present in the current ANSI codepage of your Windows system! There seems to be absolutely no way to access this file from within LabVIEW, since the LabVIEW path internally uses multibyte characters based on the current ANSI codepage, and if a filename contains characters not present in that codepage, the LabVIEW path cannot represent the filename at all. In case you wonder how such filenames can even exist: unless you use the old FAT file format on your Windows system, filenames are really stored as UTF-16 in the filesystem, and Windows Explorer is fully Unicode compliant, so those files can happily exist on disk and get displayed by Explorer, but not be accessed by LabVIEW. And in case you wonder whether this is an issue on non-Windows systems: on Linux definitely not nowadays, since all modern Linux systems use UTF-8 as their encoding, and LabVIEW seems to use whatever the default multibyte encoding of the OS is, which would be UTF-8 there. For MacOSX I'm not entirely sure, since there are umpteen different possible APIs to access the filesystem, depending on whether you go Carbon, Cocoa, Posix or any mix of them, each with its own particular limits and specialties.

I really wish they had made the Path format use UTF-16 internally on Windows long ago and avoided such problems altogether, possibly translating the path to a multibyte encoding when flattening a path, in order to keep the flattened format consistent. At least all existing filepaths on the disk would then be valid within an application. As it is now, the flattened path isn't really standardized in any way anyhow, as it is flattened to whatever local multibyte setting the OS is configured for: on Windows that's one of the local codepages, while on Linux and possibly Mac that's UTF-8 nowadays. So passing a Path through VI server between different LabVIEW installations already runs into problems between different platforms, and even between Windows versions using different country locales. Making it all consistently UTF-8 in flattened form would not make this worse but rather improve the situation, with one single drawback: flattened paths stored on Windows systems by older versions of LabVIEW would not automatically be compatible with LabVIEW versions using UTF-8 for flattened paths.

Basically I would like to know two things here:
1) What is the feeling about support for translation of the filename strings on non-Windows systems? Is that important, and how much effort is it worth? Consider that support for such translation on embedded targets like VxWorks would only be possible with the addition of a codepage translator to LVZIP.
2) Has anyone run into trying to access filenames containing characters that the current Windows multibyte table does not support, and if so, what solution did you choose?
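On Windows, the trivial part of that translation can be sketched with plain WinAPI calls (hypothetical wrapper name, error handling omitted). Going through UTF-16 also covers the optional UTF-8 filename flag of the ZIP format:

```c
#include <windows.h>

/* Convert an ANSI (CP_ACP) filename, as LabVIEW hands it over, into the
   form ZIP tools expect: the OEM codepage, or UTF-8 when the archive
   entry sets the Unicode filename flag. */
int AnsiToZipName(const char *ansi, char *out, int outLen, int useUtf8)
{
    wchar_t wide[MAX_PATH];
    /* ANSI codepage -> UTF-16 */
    MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, MAX_PATH);
    /* UTF-16 -> OEM codepage, or UTF-8 for the ZIP Unicode flag */
    return WideCharToMultiByte(useUtf8 ? CP_UTF8 : CP_OEMCP, 0,
                               wide, -1, out, outLen, NULL, NULL);
}
```

It is exactly this middle UTF-16 step that has no portable equivalent on Linux without something like iconv, which is the dilemma described above.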
-
Primitives are not stored as entities on disk, but are created directly by code residing in LabVIEW.exe. The LabVIEW menu palettes then hold links to those primitives. Creating primitives not made available through the menu palettes is a function of the Create Object primitive that is part of the scripting palette extension. This node has a ring input listing all the primitives and controls that the LabVIEW executable contains internally.
-
I don't really have an idea how to do this better in LabVIEW, but this use case is specifically what generics are for in Java and .Net, and I suppose templates in C++, although the template mechanism seems so involved to me that I never tried to understand it, nor have I used it. The one limitation, at least in Java, is that generics only work with object datatypes, not with the primitive datatypes. That is sometimes rather inconvenient, but Java also has object types for its primitive datatypes, so it's possible to get around it; using object types everywhere for primitive datatypes, including in arrays and such, can have a significant memory impact though.
-
Formula nodes: code readability comes at a price
Rolf Kalbermatter replied to Oakromulo's topic in LabVIEW General
The formula node contains a simple text-style parser for a subset of the C syntax. The resulting code is compiled into the VI, but with no optimizations at all. After all, the LabVIEW developers did not intend to include the entire LabWindows/CVI compiler (most likely they didn't even borrow code from the CVI compiler for this). NI never said the formula node would be highly optimized, but always maintained that it exists so the text-inclined can define mathematical calculations without translating everything into LabVIEW nodes; the performance of the calculation will likely always be somewhat slower than the equivalent code done with LabVIEW nodes. LabVIEW definitely has not optimized across structure boundaries in the past, and even less across VI boundaries. While LabVIEW recently seems to have started doing some optimizations across structure boundaries, I would bet that the formula node is not a candidate for this until it is changed to compile to the new intermediate DFIR graph that gets fed into the LLVM compiler for actual target code generation. And while such a change would be technically very nice, it is probably low on the to-do list, because it is a considerable task that gives little bang for the buck in terms of marketable features. Another tangent: if someone insists on doing complicated formulas in text and is concerned about squeezing the last grain of performance out of them, he is very likely to go to an external library anyhow, since that gives him every choice of language as well as compiler toolchain, possibly with a highly optimized code generator and/or runtime libraries.
-
Getting the Window Handle for a FP with No Title Bar
Rolf Kalbermatter replied to mje's topic in User Interface
I'm afraid such a list will remain a wet dream. Officially those private properties and methods don't exist, and NI people are not supposed to comment on them, but they are the only ones who could really make educated comments; for me it is just an educated guess. While I think the safety of these two properties is fairly good, as far as rusty nails and other painful accidents are concerned, they were probably made secret because the LabVIEW developers did not want to carve the association between a panel and an OS window in stone. Also, as we have seen, the move to 64-bit posed a challenge. Changing the existing property to a different data size is not really an option, as that could lead to very hard-to-debug bugs when the truncated value is passed around as a 32-bit entity and eventually interpreted as a handle again, possibly accidentally pointing to an entirely different but still valid object. So in a way, by not having exposed that property, they can easily change it without having to go through 200 documentation change requests at the same time.
-
Our friend flran posted them somewhere else on this board. Aristos himself had revealed them at some point by posting a password-protected file on the NI site; of course it didn't take long until someone peeked past the password. But this implementation is definitely to be considered part of the unfinished attic of LabVIEW, with many rusty nails sticking out everywhere, possibly causing you nasty pains.

The IDE doesn't necessarily know. In fact, the older Visual Studio IDEs did nothing like that; the only thing they knew about was syntax highlighting. But many IDEs (Eclipse too) nowadays have special syntax-check modules that basically contain the entire syntax parser of the compiler in order to provide such just-in-time error indications. It's not that the IDE is doing something trivial; it's that it pulls in the entire compiler parser to do this.

A void wire doesn't logically exist in LabVIEW yet, although the LabVIEW internal typecodes do know a void datatype, which is already used for various internal things. It is not an unknown type but a type carrying no data at all.
-
Getting the Window Handle for a FP with No Title Bar
Rolf Kalbermatter replied to mje's topic in User Interface
Hmm, for some reason it seems rather strange to support retrieving an HWND across machines. The HWND really only has meaning on the system it was created on. And if you use the OS window handle directly, you are setting yourself up for big trouble once you move to LabVIEW for Windows 64-bit. I suppose you are doing some remote work where you retrieve the handle and pass it back to another function or whatever on the same remote system. The proper way to do such things is to do all the actual handle twiddling on the target machine and expose the VI doing it over the VI server to your other machine(s).
-
The issue is rather complicated. I can fairly easily add support for filenames in whatever codepage your Windows system currently uses as the default OEM codepage (which is how ZIP file names are supposed to be stored, while LabVIEW itself uses the ACP), but there is no simple way to support arbitrarily named files that are not currently representable in that codepage. Those files can be seen correctly on modern Windows systems with an NTFS filesystem, since the filenames are stored there as UTF-16, but LabVIEW's file functions are still 8-bit codepage based. If you try to open a file in LabVIEW whose name contains characters not representable in the current system codepage, LabVIEW fails fatally, since it cannot reference such a file at all. So in order to allow LVZIP to compress a directory containing such files into a ZIP file and vice versa, the entire directory enumeration and so on would need to be done outside of LabVIEW, in the C code, to make use of the UTF filename feature of ZIP files. But adding an entire ZIP/UNZIP utility to the C code of LVZIP seems a bit like overkill to me.

So the question is whether it is enough to support foreign characters for the system the file was created on, plus an optional setting to force Unicode filenames in the archive. If you try to archive or unarchive files with characters in the name that can't be represented by the current Windows codepage, LabVIEW itself would fail catastrophically when I pass those names to the LabVIEW file I/O functions. Note that the same probably applies to the Mac too; for Linux I don't even have an idea yet how to solve this, and for the cRIO and Pharlap systems it is most likely not even an option. It's too bad that the LabVIEW developers didn't change the internal file I/O API to use Unicode functions and extend the Path datatype to support Unicode internally. It being a private datatype anyway, there would be very few backwards compatibility issues, since whoever relied on internal details of the Path datastructure was already out on a limb.
-
Buying a pre-owned LabVIEW license....for MAC
Rolf Kalbermatter replied to B Rad's topic in LAVA Lounge
I'm not aware of any conditions that would disallow transferring the ownership of a LabVIEW license to someone else, and in most places such a provision in the license agreement would be null and void. However, you need to make sure that you actually do get the ownership. Simply buying a CD-ROM and a license serial number from just anywhere might not really give you any rights. I would insist on a document that names the serial number, states the previous owner, and grants an irrevocable transfer of ownership of that license and serial number to you. Otherwise you may try to register the software with NI and be told that you are not the rightful owner, and when you dispute that, the previous owner may suddenly claim to still own the license. I do know for sure that NI actually cares about ownership: they have in the past contacted us about licenses that we had originally purchased and then sold on as part of a whole project, when the actual end user registered the serial number as their own, since the registering entity did not match the purchasing entity.
-
Indeed, the zlib library, or more precisely the ZIP addition to it, does not use MBCS functions. And that has a good reason: the maintainers of that library want to keep it compilable on as many systems as possible, including embedded targets, which often don't have MBCS support at all. However, I'll have a look at it, since the main part of the name generation is actually done on the LabVIEW diagram, and that may in fact be more relevant here than anything inside zlib. There might be a relatively small fix to the LabVIEW diagram itself, or to the thin zlib wrapper, that could allow MBCS names in the ZIP archive.
-
How do I handle WM_CLOSE the correct way?
Rolf Kalbermatter replied to Michael Aivaliotis's topic in Calling External Code
Ton has definitely pointed out one potential pitfall in your code. The Panel:Close? event always works for me. If you tell LabVIEW to discard it, you have to make sure that your state machine then goes into a cleanup state that eventually closes the front panel explicitly and then also terminates the state machine loop. If your front panel belongs to the top-level VI, the code after the VI:Close method won't be executed in an executable, as the LabVIEW runtime engine shuts down immediately after the last window is gone, but it helps when the application is run in the development environment. I have personally only used the Application:Close? event in daemon-like applications that do not show a front panel by default. The proper sequence when a user requests to shut down the machine is that Windows sends a WM_CLOSE message to every application window, which results in a Panel:Close? event in the LabVIEW VI, and then a WM_QUIT to the main instance, which triggers the Application:Close? event. If you handle all the Panel:Close? events properly, there should be no panel (hidden or not) left over by the time the Application:Close? event triggers. On the other hand, adding both Panel:Close? and Application:Close? to the state machine handling, and going into a proper VI-terminate state when you choose to discard the event, wouldn't hurt either.
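For reference, this is the plain Win32 counterpart of that flow; LabVIEW's Panel:Close? and Application:Close? events roughly surface the WM_CLOSE and WM_QUIT stages sketched here (minimal sketch, window setup omitted):

```c
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CLOSE:
        /* Analogue of discarding Panel:Close?: run cleanup first, then
           close explicitly. Calling DestroyWindow accepts the close. */
        DestroyWindow(hwnd);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);   /* posts WM_QUIT -> the app-level close */
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```
-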
Nice to LV being thought about in new technologies.(Bitcoin)
Rolf Kalbermatter replied to ShaunR's topic in LabVIEW General
To be honest, I'm not sure about my position on Bitcoin itself. I tried to understand what it is about by visiting the forum there, but didn't really get a clear picture. From some of the remarks there, it seems to be used by some folks in somewhat questionable ways. But then, anything that represents value in some form, even virtual currency or items in online games, quickly attracts folks with more than questionable intent, so that alone is certainly no criterion for whether the idea of Bitcoin is legitimate. It seems to me to be a form of online virtual currency that only exists by the grace of people believing that it represents some value. That in itself is an interesting concept, in fact not so different from the official currencies we pay with all the time, and probably even more real than derivatives on the stock exchange markets. But who controls the creation of that value? In other words, how are Bitcoins created? And I'm not so much interested in the technical details here, which were already mentioned in this thread with the formula, as in the process and procedure that controls the creation. If it could be created by anyone in any quantity, it would obviously lose all value.