Everything posted by Rolf Kalbermatter

  1. Well, but the 9501 is in the cRIO too! So do you mean that the SV is written once in the RT application during initialization and once in the FPGA code, or some such? Because if both writes happen in the RT code, I still think you basically have only one source for this data and can encapsulate it in a non-reentrant buffer VI that makes sure to synchronize access to the local variable and the SV.
  2. Actually it's not misleading at all. If you specify a service name rather than a port number, the LabVIEW node will query the "service locator service" on the target machine for the port number it should use. This "service locator service" is part of the NI web server. Your target application, when specifying a service name to Open TCP Listener, registers itself with the allocated port in the local "service locator service". So you have two options here:
     1) Document this behaviour in your application manual and sell it as a feature.
     2) Change your server application to use an explicit port when registering, and your client to connect to that port.
     Note to others coming across this: in order for the service name registration to work in the LabVIEW TCP/IP nodes, you need to make sure the NI System WebServer component is installed on the server machine. If you build an application installer, don't forget to select that component in the Additional Installers section (part of the according LabVIEW runtime engine).
  3. I think you just circumvented the race condition but didn't really solve it. Suppose your network has a hiccup and the update gets delayed for longer than your small delay! A possible solution (sketched below) would be to keep a local copy of the shared variable that is the real value, and whenever you update it, update the shared variable as well. Of course this will only work if you can limit writing of that SV to one location.
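     As a rough illustration of that pattern in C (a sketch only: publish_shared_variable() is a hypothetical stand-in for the network write, and the mutex plays the role of the non-reentrant buffer VI that serializes access):

        #include <pthread.h>

        /* Hypothetical stand-in for the network-published shared variable write. */
        static void publish_shared_variable(double value) { (void)value; /* stub */ }

        static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
        static double g_localCopy;   /* the "real" value; the SV only mirrors it */

        /* The single place allowed to change the value: update the local copy
           first, then push the same value out to the shared variable. */
        void set_value(double value)
        {
            pthread_mutex_lock(&g_lock);
            g_localCopy = value;
            publish_shared_variable(value);
            pthread_mutex_unlock(&g_lock);
        }

        /* Readers always use the local copy, never the (possibly stale) SV. */
        double get_value(void)
        {
            pthread_mutex_lock(&g_lock);
            double value = g_localCopy;
            pthread_mutex_unlock(&g_lock);
            return value;
        }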
  4. You are being unreasonable and you know it. Moving to C++ to make sure you don't run into problems or bugs sounds like a plan, but a bad one. Which C++ compiler do you plan to use: GCC, Visual C, Intel CC, Borland C, Watcom C? They are all great, but none of them is bug free, and once you have managed a few more complex C++ projects you will know that, as you are sure to run into code constructs that do not always get compiled right by some of them. Nothing that can't be worked around, but still. And that is just the compiler. Let's not talk about the IDEs, which all have their obnoxious behaviors and bugs too. So the question that remains at the end of the day is what the people at your company are more familiar with, how much code can be produced, and how bug free you and your colleagues can make your own code. Changing programming languages because of an (acquired) antipathy is a sure recipe for disaster.
  5. I would have to echo Jordan's comments. NI isn't perfect, but they are certainly one of the better suppliers in software land. And as a software developer yourself, you should know that fixing a bug without a reproducible error procedure is very hard and often impossible. So far all that has been mentioned in this thread are symptoms and possible contributing factors to what you saw happening. Without more information about how to reproduce this error, it will most likely be almost impossible to come up with a fix.
  6. In addition to what Ned says, Telnet is a protocol in itself that sits on top of TCP. So just sending the string that you would normally enter at the Telnet prompt definitely will not work! You have to implement the Telnet protocol (which is quite simple BTW; see the sketch below) on top of the TCP primitives too. However, the Internet Toolkit contains a full Telnet client library.
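     To give an idea of how simple the core of it is, here is a minimal C sketch of the option negotiation a raw TCP client must answer (this is not the Internet Toolkit's actual implementation; it assumes a connected socket, refuses every option, and ignores commands split across reads and IAC IAC escapes):

        #include <sys/socket.h>   /* send(); on Windows include winsock2.h */

        #define IAC  255  /* "interpret as command" escape byte */
        #define DONT 254
        #define DO   253
        #define WONT 252
        #define WILL 251

        /* Scan one received chunk, answering negotiation requests so the
           server stops waiting; everything else is plain application data. */
        void telnet_filter(int sock, const unsigned char *buf, int len)
        {
            for (int i = 0; i < len; i++) {
                if (buf[i] == IAC && i + 2 < len) {
                    unsigned char cmd = buf[i + 1], opt = buf[i + 2];
                    if (cmd >= WILL && cmd <= DONT) {
                        unsigned char reply[3] = { IAC, 0, opt };
                        if (cmd == DO)   reply[1] = WONT; /* we won't enable it  */
                        if (cmd == WILL) reply[1] = DONT; /* server shouldn't either */
                        if (reply[1])
                            send(sock, reply, 3, 0);
                        i += 2;           /* consume the 3-byte command */
                    } else {
                        i += 1;           /* 2-byte command (NOP etc.), skip it */
                    }
                }
                /* else: ordinary data byte, pass through to the application */
            }
        }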
  7. Well, there might be some sort of bug in LabVIEW here, but it seems that LabVIEW was for some reason made to believe at that point that the lvlib and the according VIs did not come from the same volume (drive). That is, AFAIK, the only reason for LabVIEW to use absolute paths when referring from one resource file to another one it depends on. When loading that lvlib, your colleague probably got a warning to the effect that VIs were loaded from a different path than where they were expected (maybe he still had them in memory, loaded from the old path). This dialog is however IMHO rather unhelpful in many cases, as it does not always give a good overview of why the warning was created and offers even fewer possibilities to fix it.
  8. Shaun has basically said it all. Your .sys driver is a Windows kernel driver (a more or less unavoidable thing if you want to access register addresses and physical memory, which is what PCI cards require). This kernel driver definitely can't be loaded into Pharlap, as the Pharlap kernel works quite a bit differently from the Windows kernel. For one, it's a lot leaner and optimized for RT tasks, while the Windows kernel is a huge thing that tries to do just about everything. The DLL is simply the user mode access library for the kernel driver, there to make it easier to use. Even if that DLL were Pharlap compatible, which is actually highly unlikely if they used a modern Visual C compiler to create it, it would not help, since the real driver logic is located in the kernel driver and can't be used under Pharlap anyway.
     Writing a kernel driver is, just as Shaun says, very time consuming and specialized work. It's definitely one of the more advanced C programming tasks and requires expert knowledge. Debugging it is also a pain in the ass: every time you encounter an error you usually have to restart the system, make the changes, compile and link the driver, install it again and then start debugging again. This is because if your kernel driver causes a bad memory access, your entire system is potentially borked for good, and continuing to run from there could have catastrophic consequences for your entire system integrity. Writing a Pharlap kernel driver is even more special, since there is very little information available about how to do it, and it requires buying the Pharlap ETS development license, which is also quite an expense.
     That all said, I have a crazy idea that I'm not sure has any merit. VISA allows register-level access to hardware resources by creating an INF file on Windows with the VISA Driver Wizard. I'm not sure if this is an option under LabVIEW RT; this document seems vague about whether only NI-VISA itself is available under RT or also the Driver Wizard (or more precisely the according VISA low level register driver, as you could probably do the development under normal Windows and then copy the entire VI hierarchy and INF file over to the RT system, if the API is supported).
  9. LabVIEW uses URL format for its XML based paths, which happens to always be Unix style. Symbolic paths look rather like "<instrlib>/aardvark/aardvark.llb/Aardvark GPIO Set.vi", though the HTML expansion makes that a little less obvious in Dan's post. To my knowledge, LabVIEW should only use absolute paths as reference if they happen to refer to a different volume. On real Unix systems this is of course not an issue, as there you have one unique filesystem root, but I have a hunch your colleague may have accessed the VIs through a mounted drive letter. I could see that causing problems if the VI library was loaded through a different drive letter than the actual VIs. That shouldn't usually be possible, but it's not entirely impossible. The actual path in the XML file may not appear different, because the path handling that determines whether paths are on the same volume likely works on a different level, and when the paths are finally converted to the URL style format they are most likely normalized, which means reducing the path to its minimal form, and that could resolve drive aliases.
  10. Of course there are different ways an image could be invalid. However, considering he was looking for a simple LabVIEW VI "to check if the image data [of an input terminal] is valid or not", it seemed like a logical consideration that he might be looking for something along the lines of the Not a Number/Path/Refnum? node. And since IMAQ images are in fact simply a special form of refnum too, which I think isn't obvious to most, I wanted to point out that this might be the solution. He probably wants an easy way to detect whether the input terminal received a valid image refnum. Anything else will require implementation-specific use of one or more IMAQ Vision VIs to check whether specific properties or contents of the valid image reference meet the desired specifications.
  11. It requires a little out-of-the-box thinking, but try the Not a Number/Path/Refnum? primitive.
  12. Which isn't a bad thing if you intend to distribute the VIs to platforms other than Windows!
  13. That function does not do the same as what this VI does. For one, the string part in the lvzip library is always in Unix form, while the other side should be in the native path format. Try to convert a path like /test/test/test.txt on Windows with this function. Of course you can replace all occurrences of \ with / in the resulting string on Windows, but that just complicates the whole thing. I'll probably end up putting that entire function into the actual shared library, since it also needs to do character encoding conversion to allow working with filenames containing extended ASCII (or UTF-8) characters.
     And to make everything interesting, the encoding is VERY different on each platform. Under Windows the strings in the ZIP file are normally stored with the OEM charset, while LabVIEW, as a true GUI application, uses the ANSI codepage everywhere. Both are locale (country) specific and contain more or less the same characters, but of course in different places! That is the reason that filenames containing extended characters currently look wrong when extracted with the lvzip library (a Windows-side conversion sketch follows below). On other platforms there isn't even a true standard among the various ZIP tools as to how to encode the filenames in an archive. They usually just use whatever the current locale on the system is, which at least on modern Unixes is usually UTF-8. The ZIP format allows for UTF-8 filenames too, but since most ZIP tools on Unix are programmed to use the current locale, they store UTF-8 names but do not set the flag that says so! Of course there are also many ZIP tools that still don't really know about UTF-8, so extracting an archive created with UTF-8 names with them causes serious trouble.
     Basically there is no absolutely correct way to make lvzip deal properly with all these things. My plan is to make it work such that for packing it uses plain ASCII when the filenames don't contain any extended characters and otherwise always uses UTF-8. For unpacking it will have to honor the UTF-8 flag and otherwise assume whatever the current locale is, which can and will go wrong if the archive wasn't created with the same locale as it is extracted with. On Unix there is no good way to support extraction of filenames with extended ASCII characters at all, unless I pull in iconv or ICU as a dependency.
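     For the Windows side, the OEM-to-ANSI conversion mentioned above boils down to a round trip through UTF-16 with standard Win32 calls. A minimal sketch (the function name is made up; buffer handling is kept deliberately simple):

        #include <windows.h>

        /* Convert a filename stored in the archive's OEM codepage into the
           ANSI codepage that LabVIEW strings use, via a UTF-16 intermediate.
           Real code must check lengths and conversion failures more carefully. */
        int oem_to_ansi(const char *oem, char *ansi, int ansiLen)
        {
            WCHAR wide[MAX_PATH];
            int n = MultiByteToWideChar(CP_OEMCP, 0, oem, -1, wide, MAX_PATH);
            if (n == 0)
                return 0;   /* conversion failed */
            return WideCharToMultiByte(CP_ACP, 0, wide, -1, ansi, ansiLen,
                                       NULL, NULL);
        }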
  14. I'm trying to look at this. I assume you work on OS X 10.8? Basically all Carbon type file IO functions seem to have been deprecated in 10.8, and one of them probably has a hiccup now. The translation of Mac OS errors to LabVIEW errors is always a tricky thing, and I know I could probably have put more effort into that routine in the C code, yet it's mostly useless information anyhow, other than that it went wrong somehow. My current difficulty is that I do not have a modern Mac available that could run 10.8 in any way, so I have to work on an old (and terriiiiiibly sloooooooow) PPC machine for the moment. I should still be able to compile and test the code to get it at least running on 10.5, and then I will have to get you to run some tests. I just want you to know that I'm working on this, but I can't make any tight promises as to when the new Mac OS X shared library will be ready for you to test. Having a more modern Mac available would help, but I have to work with what I have here.
  15. That change was made to the VI in Subversion on April 10, 2011. I'm not sure when JGCode created the latest release of the ZLIB library; it might have been just before that. On April 11, 2011 an additional change was made to also support Pharlap and VxWorks targets, and on July 17, 2012 another change to support Windows 64-bit (which is still irrelevant, as the DLL has not yet been released for 64-bit). I have a lot of code changes on my local system, mostly to support 64-bit but also some fixes for the string encoding problem, but it is all in an unfinished state and I hesitate to commit it to the repository, as anyone trying to create a new distribution library from it would currently get a likely somewhat broken library. I'm also not exactly sure about the current procedure for creating a new library release, nor about the icon palette changes mentioned in the last release made by JGCode; I didn't see any commits of those changes to the repository. Otherwise I might have created a new release myself with the current code.
  16. While this is simple, it is a brute force approach. Users likely will not appreciate that their user interface suddenly uses the "wrong" decimal point, since this setting changes the decimal sign for everything in the LabVIEW application. The better approach is to think about localization and make the code use explicit formats where necessary, leaving the default where things are presented to users. For the legacy string functions you have the mentioned boolean input; for Scan From String and Format Into String you have the %.; or %,; prefix to the format string, which tells the function to use an explicit decimal sign. Basically, anything that goes to the UI would use the system default (no prefix in the format string), while anything that communicates with a device or similar would use the %.; prefix. This way the user gets to see decimal numbers in whatever way he is used to, and the communication with GPIB, TCP/IP and other devices will work irrespective of the local country settings. (A C analogue of this idea is sketched below.)
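     For comparison, here is a minimal C sketch of what the %.; prefix accomplishes: parsing a device reply with an explicit '.' decimal sign no matter what the user's country settings say. The function name is made up, and note that setlocale() switches a process-wide setting, so this simple form is not thread-safe:

        #include <locale.h>
        #include <stdlib.h>
        #include <string.h>   /* strdup(); on Windows use _strdup() */

        /* Parse a number that always uses '.' as the decimal sign (e.g. an
           instrument reply), independent of the user's locale settings. */
        double parse_instrument_double(const char *reply)
        {
            /* Copy the current numeric locale: setlocale() returns a static
               buffer that the next call overwrites. */
            char *saved = strdup(setlocale(LC_NUMERIC, NULL));
            setlocale(LC_NUMERIC, "C");    /* '.' is the decimal sign in "C" */
            double value = strtod(reply, NULL);
            setlocale(LC_NUMERIC, saved);  /* restore the user's settings */
            free(saved);
            return value;
        }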
  17. Well, imaqCloseToolWindow() definitely is an IMAQ (NI Vision) function and as such could never be located in avcodec.dll. It seems like the linker is messing up the import tables somehow when told to optimize the import and/or export tables. It could be because of some wizardry in the NI Vision DLL, but it certainly looks like a bug in the link stage optimizer of the compiler. Is this Visual C or LabWindows/CVI?
  18. Why would you call HTTP Get.vi by reference and not simply put it inside your Active Object? You don't happen to use the same VI reference in each Active Object? That would basically serialize the calls, since a VI reference is exactly one instance of the VI with one data space. Instead, simply dropping the VI into your Active Object VI will properly allocate a new instance of the reentrant VI each time you drop it on a diagram somewhere. Since the VI is set to be reentrant, I would strongly assume that whatever Call Library Node is inside is set to execute in any thread too. There might be some arbitration underneath in the DLL that could serialize the calls somewhat, but I would first try to get rid of any VI Server invocation before speculating too deeply about non-reentrancy of otherwise properly configured reentrant VIs. Or go with websockets like Shaun suggested.
  19. You should only use _WIN64 if you intend to use the resulting VI library in LabVIEW for Windows 64-bit. And for the datatypes that are size_t (the HUDAQHANDLE) you should afterwards go through the entire VI library and manually change all Call Library Node definitions of them to pointer-sized integer. Since the developer of the DLL decided to change the function declarations between the 32-bit and 64-bit DLL, you have to be careful to import the header file for each platform separately (with and without the _WIN64 definition), keep those VI libraries separate, and use whichever one matches your LabVIEW version. It does not matter what bitness your OS has, only the bitness of the LabVIEW installation you use. So if you use LabVIEW 32-bit, even on Windows 64-bit, you have to leave the _WIN64 keyword out of the definitions. Going strictly with size_t=unsigned int will cause the handle to be defined as a 32-bit integer and therefore get truncated on LabVIEW for 64-bit systems (obviously corrupting the handle). Setting it strictly to a 64-bit integer, however, will pass two 32-bit values on the stack in 32-bit LabVIEW and therefore misalign the remaining parameters on the stack; see the sketch below. The two functions you name are NOT declared in the header file and therefore cannot be imported by the Import Library Wizard. As to the errors, the parser apparently gets a hiccup on the nested enum declarations. You will have to create that as a Ring control manually.
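     To illustrate the handle size issue, here is a sketch of how the driver header presumably declares the handle (HUDAQHANDLE is the only name taken from the actual library; the rest is assumption):

        #include <stddef.h>

        /* Presumed declaration in the driver header: the handle is pointer
           sized, so its width follows the bitness of the build. */
        typedef size_t HUDAQHANDLE;

        #ifdef _WIN64
        /* size_t is 8 bytes here: a Call Library Node parameter configured
           as a fixed 32-bit integer truncates and corrupts the handle.     */
        #else
        /* size_t is 4 bytes here: a fixed 64-bit integer pushes 8 bytes and
           misaligns every parameter that follows on the stack. A
           "pointer-sized integer" parameter is correct for both bitnesses. */
        #endif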
  20. LabVIEW's HTTPS support most likely uses OpenSSL for the SSL implementation. OpenSSL comes with its own list of root CAs and AFAIK does not try to access any platform specific CA stores. As such, the only options for self-signed server certificates are to either skip the verification of the server certificate or to try to import the self-signed certificate into the session. I think the SSL.vi or Config SSL.vi should allow you to do that.
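     For reference, importing a self-signed certificate looks roughly like this in plain OpenSSL (a sketch only; whether Config SSL.vi maps to exactly these calls is an assumption, and the PEM path is whatever your certificate file is):

        #include <stdio.h>
        #include <openssl/ssl.h>

        /* Create a client context that trusts one specific self-signed server
           certificate instead of the bundled root CA list. TLS_client_method()
           needs OpenSSL 1.1+; older versions use SSLv23_client_method(). */
        SSL_CTX *make_ctx(const char *pemPath)
        {
            SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
            if (ctx == NULL)
                return NULL;
            if (SSL_CTX_load_verify_locations(ctx, pemPath, NULL) != 1) {
                fprintf(stderr, "could not load %s\n", pemPath);
                SSL_CTX_free(ctx);
                return NULL;
            }
            SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); /* enforce check */
            return ctx;
        }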
  21. You don't need any of those defines except _WIN32 (and possibly _WIN64); case is important. But you also need to define size_t. This is not a standard C datatype but one typically defined in one of the standard headers (stddef.h) of the compiler. The exact definition is in fact compiler dependent, but for Visual C, size_t=unsigned __int64 for 64-bit and size_t=unsigned int for 32-bit should do it. I'm not sure, though, whether the Import Library Wizard recognizes the Visual C specific type "__int64", nor whether the more official "long long" would be recognized, since I don't normally use the Import Library Wizard at all.
  22. Scripting wasn't taboo, just hidden. While one could argue that adding an undocumented INI key to a configuration file might already be a violation according to the DMCA, breaking a password protection that is not super trivial to guess definitely is, and could be taken to court. Heck, as I understand the DMCA, doing anything the software vendor has simply said "Don't do it!" about is already a violation. And NI didn't make the DMCA, so it is indeed not them who make the law. Nevertheless the law is there for them to use; only, the case of the VI password website does not fall under the DMCA, but Germany also has some laws that go in that direction. I have a feeling that the only reason NI hasn't really done too much about this so far is that they didn't think forcing a lawsuit after the fact would do much good, both in terms of damage control and in terms of being the big bad guy going after a poor user. I'm sure they would have much less hesitation to go after a corporation trying to use this in any way, and most likely have done so in the past already.
  23. I'm not entirely sure either, but I think your VI more or less captures the issue, although it might need some more work. Basically you can close the front panel of a subVI while it is in paused mode, and that can happen easily when you have a number of panels open. I myself also regularly find myself looking for windows that I had just worked on and more or less inadvertently closed. If the VI was paused it will stay paused, usually preventing at least parts of the rest of the program from continuing. So I then have to go and dig for the paused VI to make it resume. This is no different from multithreaded programming in other languages. If a thread is blocked in single step mode, the rest of the program can usually still continue (not all debuggers support that, though, as they lack a good way to get an overview of all current threads), and that can have various effects, such as stalling other threads that wait for events and data from the blocked thread, or just pausing whatever the thread in question should do. Without a central overview of all threads and their paused or active state you end up in a complete nightmare. The only thing about your VI that I'm not sure about is whether setting the Frontmost property alone is always enough. I could imagine situations where it may be necessary to do additional work, like first opening the front panel or unlocking something else.
  24. Actually that is not entirely true. If you have an iterable object in Java or .Net you can write somewhat terser code. Java uses for (<type> obj : objList), and .Net has a special foreach keyword: foreach (<type> obj in objList).