Rolf Kalbermatter

Members
  • Posts

    3,837
  • Joined

  • Last visited

  • Days Won

    259

Everything posted by Rolf Kalbermatter

  1. Shaun has basically said it all. Your .sys driver is a Windows kernel driver (a more or less unavoidable thing if you want to access register addresses and physical memory, which is what PCI cards require). This kernel driver definitely can't be loaded into Pharlap, as the Pharlap kernel works quite differently from the Windows kernel. For one, it's a lot leaner and optimized for RT tasks, while the Windows kernel is a huge thing that tries to do just about everything. The DLL is simply the user-mode access library for the kernel driver, there to make it easier to use. Even if that DLL were Pharlap compatible, which is highly unlikely if they used a modern Visual C compiler to create it, it wouldn't help, since the real driver logic sits in the kernel driver and can't be used under Pharlap anyway.
Writing a kernel driver is, as Shaun says, very time-consuming and specialized work. It's definitely one of the more advanced C programming tasks and requires expert knowledge. Debugging it is a pain in the ass too: every time you encounter an error you usually have to restart the system, make the changes, compile and link the driver, install it again and then start debugging again. This is because if your kernel driver causes a bad memory access, your entire system is potentially borked for good, and continuing to run from there could have catastrophic consequences for your system integrity. Writing a Pharlap kernel driver is even more special, since there is very little information available about how to do it, and it requires buying the Pharlap ETS development license, which is also quite an expense.
That all said, I've got a crazy idea that I'm not sure has any merit. VISA allows accessing hardware resources on register level by creating an INF file on Windows with the VISA Driver Wizard.
Not sure if this is an option under LabVIEW RT; the document seems vague about whether only NI-VISA itself is available under RT or also the Driver Wizard (or more precisely the according VISA low-level register driver, as you could probably do the development under normal Windows and then copy the entire VI hierarchy and INF file over to the RT system, if the API is supported).
  2. LabVIEW uses URL format for its XML based paths, which happens to always be Unix style. Symbolic paths are rather something like "<instrlib>/aardvark/aardvark.llb/Aardvark GPIO Set.vi", although the HTML expansion makes that a little less obvious in Dan's post. To my knowledge LabVIEW should only use absolute paths as reference if they happen to refer to a different volume. On real Unix systems this is of course not an issue, since there you have one unique filesystem root, but I have a hunch your colleague may have accessed the VIs through a mounted drive letter. I could see that causing problems if the VI library was loaded through a different drive letter than the actual VI. That shouldn't usually be possible, but it's not entirely impossible. The actual path in the XML file may not appear different, because the path handling that determines whether paths are on the same volume likely works on a different level, and when the paths are finally converted to the URL style format they are most likely normalized, which means reducing the path to its minimal form, and that could resolve drive aliases.
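The separator part of that conversion is trivial; a minimal C sketch (a hypothetical helper, not LabVIEW's actual code) looks like this. The real normalization step (resolving drive aliases, relative components and so on) is the deeper, separate part:

```c
/* Convert native Windows separators into the Unix-style ones that
   LabVIEW's XML/URL path format always uses. Modifies the string
   in place. This does NOT normalize the path - resolving drive
   aliases or ".." components is a separate step. */
void to_url_separators(char *path)
{
    for (; *path != '\0'; ++path) {
        if (*path == '\\')
            *path = '/';
    }
}
```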
  3. Of course there are different ways an image could be invalid. However considering he was looking for a simple LabVIEW VI "to check if the image data [of an input terminal] is valid or not" it seemed like a logical consideration that he might be looking for something along the lines of the Not a Number/Path/Refnum node. And since IMAQ images are in fact simply a special form of refnum too, which I think isn't obvious to most, I wanted to point out that this might be the solution. He probably wants an easy way to detect if the input terminal received a valid image refnum. Anything else will require implementation specific use of one or more IMAQ Vision VIs to check if the specific properties or contents of the valid image reference meet the desired specifications.
  4. It requires a little out-of-the-box thinking, but try the Not a Number/Path/Refnum primitive.
  5. Which isn't a bad thing if you intend to distribute the VIs to other platforms than Windows!
  6. That function does not do the same as what this VI does. For one, the string part in the lvzip library is always in the Unix form, while the other side should be in the native path format. Try to convert a path like /test/test/test.txt on Windows with this function. Of course you can replace all the occurrences of \ with / in the resulting string on Windows, but that just complicates the whole thing. I'll probably end up putting that entire function into the actual shared library, since it also needs to do character encoding conversion to allow working with filenames containing extended ASCII (or UTF-8) characters. And to make everything interesting, the whole encoding is VERY different on each platform. The strings in the ZIP file are under Windows normally stored with the OEM charset, while LabVIEW, as a true GUI application, uses the ANSI codepage everywhere. Both are locale (country) specific and contain more or less the same characters, but of course in different places! That is the reason that filenames containing extended characters currently look wrong when extracted with the lvzip library. On other platforms there isn't even a true standard among the various ZIP tools as to how to encode the filenames in an archive. They usually just use whatever the current locale on the system is, which at least on modern Unixes is usually UTF-8. The ZIP format allows for UTF-8 filenames too, but since on Unix most ZIP tools are programmed to use the current locale, they do store UTF-8 names but do not set the flag that says so! Of course there are also many ZIP tools that still don't really know about UTF-8, so extracting an archive created with UTF-8 names with them causes serious trouble. Basically there is no absolutely correct way to make lvzip deal properly with all these things.
My plan is to make it work such that for packing it uses plain ASCII when the filenames don't contain any extended characters and otherwise always uses UTF-8. For unpacking it will have to honor the UTF-8 flag and otherwise assume whatever the current locale is, which can and will go wrong if the archive wasn't created with the same locale as it is extracted with. On Unix there is no good way to support extraction of filenames with extended ASCII characters at all, unless I pull in iconv or ICU as a dependency.
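In C terms, the two decisions above boil down to something like this sketch (names are illustrative, not lvzip's actual API; the flag bit itself is from the ZIP specification, which reserves bit 11 of the general purpose flags for UTF-8 filenames):

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit 11 (0x0800) of a ZIP entry's general purpose flags marks a
   UTF-8 encoded filename. Without it the name is nominally CP437,
   in practice whatever locale the archiver ran under. */
#define ZIP_FLAG_UTF8 0x0800u

typedef enum { NAME_LOCALE_OR_CP437, NAME_UTF8 } NameEncoding;

/* Unpacking: decide how to interpret an entry's filename. */
NameEncoding zip_name_encoding(uint16_t general_purpose_flags)
{
    return (general_purpose_flags & ZIP_FLAG_UTF8)
               ? NAME_UTF8 : NAME_LOCALE_OR_CP437;
}

/* Packing: plain ASCII names can be stored as-is without the flag;
   anything with extended characters gets stored as UTF-8 with the
   flag set. */
bool name_is_plain_ascii(const char *name)
{
    for (; *name != '\0'; ++name) {
        if ((unsigned char)*name > 0x7F)
            return false;
    }
    return true;
}
```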
  7. I'm trying to look at this. I assume you work on OS X 10.8? Basically all Carbon-type file IO functions seem to have been deprecated in 10.8, and one of them probably has a hiccup now. The translation of Mac OS errors to LabVIEW errors is always a tricky thing, and I know I could probably have put more effort into that routine in the C code, yet it's mostly useless information anyhow, other than that it went wrong somehow. My current difficulty is that I do not have a modern Mac available that could run 10.8 in any way, so I have to work on an old (and terriiiiiibly sloooooooow) PPC machine for the moment. I should still be able to test and compile the code to get it at least running on 10.5 and will then have to get you to run some tests. I just want you to know that I'm working on this, but I can't make any firm promises as to when the new Mac OS X shared library will be ready for you to test. Having a more modern Mac available would help, but I have to work with what I have here.
  8. That change was made to the VI in Subversion on April 10, 2011. Not sure when the latest release of the ZLIB library was created by JGCode; it might have been just before that. On April 11, 2011 an additional change was made to also support Pharlap and VxWorks targets, and on July 17, 2012 another change to support Windows 64-bit (which is still irrelevant as the DLL is not yet released for 64-bit). I have a lot of code changes on my local system, mostly to support 64-bit but also some fixes to the string encoding problem, but it is all in an unfinished state and I hesitate to commit it to the repository, as anyone trying to create a new distribution library from that would currently get a likely somewhat broken library. I'm also not exactly sure about the current procedure to create a new library release, nor about the icon palette changes mentioned in the last release made by JGCode. I didn't see any commits of those changes to the repository; otherwise I might have created a new release myself with the current code.
  9. While this is simple it is a brute force approach. Users likely will not like that their user interface suddenly uses the "wrong" decimal point, since this setting changes the decimal sign for everything in the LabVIEW app. The better approach is to think about localization and make the code use explicit formats where necessary and leave the default where things are presented to users. For the legacy string functions you have the mentioned boolean input, for Scan From String and Format into String you have the %.; or %,; prefix to the format string which tells the function to use an explicit decimal sign. Basically anything that goes to the UI would use the system default (and no prefix in the format string), things that communicate with a device or anything similar will likely be done with the %.; prefix. This way the user gets to see the decimal numbers in whatever way he is used to and the communication with GPIB, TCP/IP and whatever devices will work irrespective of the local country settings.
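A plain C sketch of what the "%.;" prefix effectively guarantees on the parsing side: the decimal sign is always '.' no matter what the user's locale says (the function name is illustrative; note that plain strtod() is itself locale-dependent, which is exactly the trap):

```c
#include <stdbool.h>

/* Parse a "[-]digits[.digits]" number with a fixed '.' decimal
   sign, independent of the current locale - the typical shape of
   an instrument reply over GPIB or TCP/IP. */
double parse_point_decimal(const char *s)
{
    bool negative = false;
    double value = 0.0;

    if (*s == '-') { negative = true; ++s; }
    for (; *s >= '0' && *s <= '9'; ++s)
        value = value * 10.0 + (*s - '0');
    if (*s == '.') {
        double scale = 1.0;
        for (++s; *s >= '0' && *s <= '9'; ++s) {
            scale /= 10.0;
            value += (*s - '0') * scale;
        }
    }
    return negative ? -value : value;
}
```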
  10. Well, imagCloseToolWindow() definitely is an IMAQ (NI Vision) function and as such could never be located in avcodec.dll. Seems like the linker is messing up the import tables somehow when told to optimize the import and/or export tables. Could be because of some wizardry in the NI Vision DLL, but it certainly seems like a bug in the link-stage optimizer of the compiler. Is this Visual C or LabWindows/CVI?
  11. Why would you call the HTTP Get.vi by reference and not simply put it inside your Active Object? You don't happen to use the same VI ref in each Active Object, do you? That would basically serialize the calls, since a VI ref is exactly one instance of the VI with one data space. Simply dropping the VI into your Active Object VI will instead properly allocate a new instance of the reentrant VI for every place you drop it on a diagram. Since the VI is set to be reentrant, I would strongly assume that whatever Call Library Node is inside is set to execute in any thread too. There might be some arbitration underneath in the DLL that could serialize the calls somewhat, but I would first try to get rid of any VI Server invocation before speculating too deeply about non-reentrancy of otherwise properly configured reentrant VIs. Or go with websockets like Shaun suggested.
  12. You should only use _WIN64 if you intend to use the resulting VI library in LabVIEW for Windows 64-bit. And for the datatypes that are size_t (the HUDAQHANDLE) you should afterwards go through the entire VI library and manually change all Call Library Node definitions of them to pointer-sized integer. Since the developer of the DLL decided to change the calling convention of the functions between the 32-bit and 64-bit DLL, you have to be careful to import the header file for each platform separately (with and without the _WIN64 declaration), keep those VI libraries separate, and use whichever version matches your LabVIEW version. It does not matter what OS bitness you have, only the bitness of the LabVIEW installation you use. So if you use LabVIEW 32-bit, even on Windows 64-bit, you have to leave the _WIN64 keyword out of the definitions. Going strictly with size_t=unsigned integer will cause the handle to be defined as a 32-bit integer and therefore get truncated to 32 bits in LabVIEW for 64-bit systems (and obviously corrupt the handle that way). Setting it strictly to a 64-bit integer however will pass two 32-bit values on the stack and therefore misalign the remaining parameters on the stack. The two functions you name are NOT declared in the header file and therefore cannot be imported by the Import Library Wizard. As to the errors, the parser apparently gets a hiccup on the nested enum declarations. You will have to create that as a Ring Control manually.
  13. LabVIEW's HTTPS support most likely uses OpenSSL for the SSL implementation. OpenSSL comes with its own list of root CAs and AFAIK does not try to access any platform-specific CA stores. As such, the only options for self-signed server certificates are to either skip the verification of the server certificate or to try to import the self-signed certificate into the session. I think the SSL.vi or Config SSL.vi should allow you to do that.
  14. You don't need any of those defines except _WIN32 (and possibly _WIN64), and case is important. But you also need to define size_t. This is not a standard C datatype but one typically defined in one of the standard headers (stddef.h) of the compiler. The exact define is in fact compiler dependent, but for Visual C, size_t=unsigned __int64 for 64-bit and size_t=unsigned int for 32-bit should do it. I'm not sure though whether the Import Library Wizard would recognize the Visual C specific type "__int64", nor whether the more official "long long" would be recognized, since I don't normally use the Import Library Wizard at all.
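Put as a header fragment, the definition you'd feed the wizard looks roughly like this (a sketch; the typedef name is illustrative, and on a real Visual C build this is what stddef.h would resolve size_t to anyway):

```c
/* Bitness-dependent substitute for size_t, as described above. */
#if defined(_WIN64)
typedef unsigned long long import_size_t;  /* 64-bit LabVIEW */
#else
typedef unsigned int import_size_t;        /* 32-bit LabVIEW */
#endif

/* Why it matters for a size_t based handle like HUDAQHANDLE:
   always importing it as 32-bit truncates the handle in 64-bit
   LabVIEW; always importing it as 64-bit pushes two 32-bit values
   on a 32-bit stack and misaligns every parameter after it. The
   Call Library Node's "pointer-sized integer" sidesteps both. */
int handle_type_matches_pointer(void)
{
    return sizeof(import_size_t) == sizeof(void *);
}
```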
  15. Scripting wasn't taboo, just hidden. While one could argue that adding an undocumented INI key to a configuration file might already be a violation according to the DMCA, breaking a password protection that is not super trivial to guess definitely is, and could be taken to court. Heck, as I understand the DMCA, doing anything the software vendor has simply said "Don't do it!" about is already a violation. And NI didn't make the DMCA, so it is indeed not them who make the law. Nevertheless the law is there for them to use; only, the case of the VI password website does not fall under the DMCA, but Germany also has some laws that go in that direction. I have a feeling that the only reason NI hasn't really done too much about this so far is that they didn't think forcing a lawsuit after the fact would do much good, both in terms of damage control and in terms of being the big bad guy going after a poor user. I'm sure they would have much less hesitation to go after a corporation trying to use this in any way, and most likely have done so in the past already.
  16. I'm not entirely sure either, but I think your VI more or less captures the issue, although there might need to be some more work. Basically you can close the front panel of a subVI while it is in paused mode, and that can happen easily when having a number of panels open. I myself also regularly go looking for windows that I had just worked on and more or less inadvertently closed. If the VI was paused it will stay paused, usually preventing at least parts of the rest of the program from continuing. So I then have to go and dig for the paused VI to make it resume. This is no different from multithreaded programming in other languages. If a thread is blocked in single-step mode, the rest of the program can usually still continue (not all debuggers support that though, as they lack a good way to get an overview of all current threads), and that can have various effects, such as stalling other threads that wait for events and data from the blocked thread, or just pausing whatever the thread in question should do. Without a central overview of all threads and their paused or active state you end up in a complete nightmare. The only thing about your VI that I'm not sure about is whether setting the Frontmost property alone is always enough. I could imagine situations where it may be necessary to do additional work, like first opening the front panel or unlocking something else.
  17. Actually that is not entirely true. If you have an iterable object in Java or .Net you can write somewhat more terse code. Java uses for (<type> obj : objList) and .Net has the special keyword foreach (<type> obj in objList).
  18. Not GDI+ but GDI. And it's not 10 years later but 20 years later. But you have to consider the implications. Anything 2D based in LabVIEW is done using an internal 2D API that maps to whatever drawing primitive API is available on the platform. For MacOS that was Quickdraw (and probably still is, as rewriting the code to interface to Quartz (CoreGraphics) is anything but a trivial job), for Windows that was and is GDI, and for Unix systems that is X Windows. All of them are the basic drawing APIs most applications use even nowadays to perform UI drawing, if they don't only use standard system widgets and don't draw anything themselves. OpenGL is almost only used in applications with 3D output rendering, and GDI+ is by most applications only used indirectly, since the .Net UI framework builds on it. There are very few applications that actually make direct use of any of the features GDI+ offers. And DirectX is an API almost exclusively used by game engines that don't want multiplatform support; otherwise they would likely go with OpenGL instead. For most anything in most applications, the basic GDI, Quickdraw, Quartz and X Windows systems are actually more than fast enough, and LabVIEW is no real exception there. The only areas where it would benefit somewhat from a faster API are possibly graphs (although I would not expect a huge speed improvement there) and the 2D Picture Control. But you always have to weigh the effort against the benefit, as well as the possible fallout from incompatibilities.
Rewriting the 2D graphic primitive system to take advantage of newer technologies just for the Picture Control would be way too costly. Changing it for anything 2D related would not bring much benefit speed-wise, but would be a resource-intensive project and likely cause all kinds of problems: many introduced bugs, but also much more subtle issues, such as small but sometimes quite visible differences in the visual appearance of the entire UI rendering. In short, a lot of work for little visible benefit, and with a likely chance of getting criticism for pixel-to-pixel inaccuracies in comparison to the previous implementation. Rewriting a perfectly working subsystem in an existing app is always a very risky business, and LabVIEW is no different there. The Picture Control, which is the only component in LabVIEW that would probably gain much from it, is not a core component, as it is only used by a few. Now there might be a rewrite at some point, especially when retina-type displays become the new standard and it gets more and more important to have subpixel rendering inside the app. But LabVIEW isn't a prime candidate for that, and the LabVIEW developers won't take on such a project just for fun and to be able to brag about it! Also, retina-type support under Windows isn't really ready yet and is probably quite some time away on Unix.
  19. Strictly speaking, GPL indeed could be a problem, as it is thought by many that the linking clause in the GPL license in fact also applies to dynamic linking, which the ODBC manager has to do no matter what if it wants to use any ODBC driver. This is exactly the reason why the LGPL was invented, which maintains all the protection of the GPL on the library code but allows linking the library with other non-(L)GPL based code without breaking the license. Again, some feel that the LGPL only really allows dynamic linking and that static linking is not really proper. I personally tend to agree with this, mostly to be on the safe side. However, considering that myODBC is a shared library and the non-GPL ODBC manager in Windows will have to load it and link to it dynamically in ALL cases, the question of course arises what use the myODBC driver would have on Windows if the GPL license did not allow it to be loaded and dynamically linked to by non-GPL software. So either the GPL license has to be interpreted, at least in this case, as intending to allow dynamic linking, or the choice of the GPL license instead of the LGPL by the myODBC developers is simply stupid. Unfortunately I can't find any specifics about the license of the Connector/ODBC component, just that MySQL itself is GPL, which indeed would make one assume that Connector/ODBC falls under the same license. In any case the LabVIEW Database Toolkit has no direct relation to the myODBC driver, as there are in fact at least the Windows ODBC manager and the Windows ODBC-to-ADO bridge in between. So if loading and using the driver in any Windows ODBC based application is considered alright, then using it with the Database Toolkit has to be alright too.
  20. Actually most vector based formats are somehow built like the LabVIEW Picture Control stream. Not sure about SVG, but WMF/EMF or the Macintosh PICT format are like that. Most likely the performance of the Picture Control suffers partly during building of the picture stream, where all those Append String nodes give rather bad performance, and partly in the rendering, because it is built on LabVIEW internal routines that then map more or (likely) less directly to the platform API. There is also the fact that it likely maps to good old GDI instructions under Windows rather than going through a better performing interface like GDI+ or OpenGL. But GDI+ or OpenGL was not an option in LabVIEW 2.5 when this was invented, and porting it later to use these new APIs would likely cause many compatibility issues that could break existing applications. The 3D picture control (the built-in one, not the ActiveX control) should be much better in terms of performance, although it is not ideal for 2D vector based images but specifically meant for 3D visualization.
  21. Actually many Toolkits work just fine under the Mac. It's mostly the installers that support only Windows. Installing the Toolkit on a Windows system and copying it over to a Mac works for many of them, as long as they are all VI based and don't contain external code in the form of DLLs.
  22. The C runtime is mostly backwards compatible. There have been hiccups with both the MS VC C runtime and the GCC clib in the past. MS "solved" the problem by forcing the MS C runtime to be a SxS dependency, which loads whatever version of the C runtime library was used to compile the executable module (DLL and EXE), and in that way created a huge new problem. If a module was compiled with a different VC version, it will load a different C runtime version into the process, and havoc begins if you start passing any C runtime objects between those modules. This includes heap pointers, which cannot be freed in a different C runtime library than the one they were created in, and shows that even when using just the C runtime, you have to make sure to always allocate and destroy memory objects from the same C runtime scope. But more fundamental things like file descriptors are a problem too. Basically anything that has a file descriptor in the function interface will completely fail if made to operate on objects that were created in a different C runtime library. Exception handling is another area that changes significantly with every Visual C version and can have nasty effects if it gets mixed. This is all rather painful and also seems mostly unnecessary if you consider that almost everything in the MS C runtime library ultimately maps to WinAPI calls at some point. Microsoft themselves avoided this problem by making all of their Windows tools link to the original msvcrt.dll, but declared that private after Visual C 6. The only way to make use of msvcrt.dll instead of msvcrtxx.dll is by using either Visual C 6 or the older WinDDK compiler toolchains.
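The practical rule that falls out of this: a module handing out heap memory must also export the matching release function, so allocation and free always happen inside the same C runtime instance. A minimal sketch (the names are illustrative, not any real DLL's API):

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char *text;
} Message;

/* Exported from the DLL: allocates with this module's CRT. */
Message *msg_create(const char *text)
{
    Message *m = malloc(sizeof *m);
    if (m == NULL)
        return NULL;
    m->text = malloc(strlen(text) + 1);
    if (m->text == NULL) {
        free(m);
        return NULL;
    }
    strcpy(m->text, text);
    return m;
}

/* Also exported: the caller must call this instead of free(),
   because the caller's free() may live in a different
   msvcrtXX.dll than the malloc() that created the object. */
void msg_destroy(Message *m)
{
    if (m != NULL) {
        free(m->text);
        free(m);
    }
}
```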
  23. Thanks for clarifying, Greg. I was pretty sure this was the case, but started to wonder after Shaun's reply. Other than that I do however fully agree with Shaun. DLLs are not evil, but they are more complicated in terms of maintenance and distribution, since you need one DLL/shared library for every potential target OS, and if the DLL is a LabVIEW DLL it gets even a little more involved. For that reason distributing LabVIEW-created DLLs to LabVIEW users is mostly a pain in the ass and will likely annoy the LabVIEW user too, as he can't look into the driver and debug it if the need should arise. Distributing a LabVIEW DLL driver to non-LabVIEW users is however a possibility, although the fact that one needs to have the correct LabVIEW runtime installed is probably going to cause some sputtering among some C/C++/whatever users. Hmm, could you clarify a bit what you were trying to do there? It's not easy to guess what you are trying to demonstrate from this message box, and that makes it very hard to come up with ideas to explain the behavior you see. To me this looks like a LabVIEW 2012 generated shared library that you try to load on a computer that does not have the 2012 runtime engine installed.
  24. I also read some criticism about OO in it. Not that it says functional programming is better than OO programming. Neither is better than the other in general AFAIC, but many OO fanatics tend to pull the OO sword for everything, even if a simple functional approach would be much easier and quicker. OO has its merits, but making functional programming more or less impossible, like Java actually does, is simply taking the idea over the top. And those are not the only issues I have with Java, but I haven't thrown it away yet.
  25. There is no free ride! A DLL/shared library is always platform specific, and that means CPU architecture, OS and bitness. All three have to match for the shared library to even be loadable. That is why distributing a LabVIEW-written driver as a shared library is probably one of the worse ideas one can have. You get the same effect as when distributing VIs without diagrams, because that is basically what is inside the shared library. And no, unfortunately you can't leave the diagrams intact inside the DLL and hope that it will still work when loaded into a different version of LabVIEW even though the bitness or OS doesn't match. The DLL still executes in the context of the runtime engine, which has no compiler or even the possibility to load the diagram into memory. The most user-friendly approach is to distribute the instrument driver as LabVIEW source (I personally consider instrument drivers distributed as DLL/shared library at most a compromise, but loathe it) and create a shared library from it for the non-LabVIEW users, worrying about OS/bitness versions and such as requests come in. There won't be any way around creating special versions of your test program that access the DLL instead of the native driver, for testing the shared library version. The upside of this is that debugging any driver related issues during testing is MUCH easier when you leave everything as diagram, and only check after the final build that it also works as a DLL. Fortunately the only one that cannot be created by LabVIEW is the VxWorks shared library! But I really echo Shaun's comments. If you have any chance to avoid the shared library for your LabVIEW users, you save yourself a lot of pain and sweat and make your LabVIEW users much happier too. Building multiple shared libraries after every modification of your LabVIEW code is no fun at all.
And LabVIEW only creates shared libraries for the platform it is running on, so you need as many (virtual) OS/LabVIEW installations as you want to support platforms for, and each and every one needs to be tested as well after each build.