Everything posted by Rolf Kalbermatter

  1. Hmm, I know them, as I wrote that library :-) But the status they return is completely independent of TCP/IP or any other network protocol. They return the link status of the network connection (in fact the detection of the carrier on the network cable when it is connected). However, be aware that this may not always work, although today it may not be as bad anymore. It can depend both on the network card used and its drivers as well as on the remote side connected. For instance, a hub will always report the link status as connected even though that hub may not be connected to anything but the power line. In the past, some hubs with auto detection/negotiation of the speed and/or crossover connection had trouble properly detecting certain network cards, resulting in a carrier on the network link but no real connection being possible. So don't just blindly expect this library to give you everything. This status only tells you that there is a powered network interface attached to your interface and nothing more. Whether network traffic is possible can be, and often is, a completely different issue. Rolf Kalbermatter
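
     As an illustration (a sketch, not the actual library code): on Windows such a link status check can be done with the IP Helper API. The interface index used here is an assumption; in practice you would enumerate the interfaces with GetIfTable first, and link against iphlpapi.lib.

         #include <windows.h>
         #include <iphlpapi.h>
         #include <string.h>
         #include <stdio.h>

         int main(void)
         {
             MIB_IFROW row;
             memset(&row, 0, sizeof(row));
             row.dwIndex = 1;    /* hypothetical interface index */

             if (GetIfEntry(&row) == NO_ERROR)
             {
                 /* dwOperStatus only reflects the carrier/link state,
                    not whether actual network traffic is possible */
                 printf("link %s\n",
                        row.dwOperStatus >= IF_OPER_STATUS_CONNECTED ? "up" : "down");
             }
             return 0;
         }
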
  2. That is not how TCP/IP works. TCP/IP is a state-controlled protocol, and in order for an error 66 to be reported the TCP/IP stack must go through the FIN state, which is initiated with a FIN, FIN ACK handshake. Since the connection simply went away, no such handshake ever took place. For the local stack the connection is still in an active state, although all packet sends and requests time out, and that is what you get: error 56, timeout. You will have to rethink your approach; TCP/IP in itself does not guarantee detection of line breaks, only detection and reporting of successful transmission. I think there is some configurable timeout clearing for the TCP/IP stack, where the connection is put into the FIN state automatically after a certain number of packet requests time out continuously. Rolf Kalbermatter
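
     One option at the socket level (a sketch, assuming a connected Winsock socket s; not from the original post) is to enable TCP keepalive, so the stack itself periodically probes an idle peer and eventually reports the dead connection as an error instead of leaving it in the active state forever:

         #include <winsock2.h>

         /* enable TCP keepalive on an already connected socket */
         static int enable_keepalive(SOCKET s)
         {
             BOOL on = TRUE;
             return setsockopt(s, SOL_SOCKET, SO_KEEPALIVE,
                               (const char *)&on, sizeof(on));
         }

     Note that the default keepalive probe interval is very long (two hours on most systems), so it usually needs tuning before it is useful for this purpose.
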
  3. Sorry Aitor. I remembered from my limited investigations (yes, that kind of thing is legal here, although using the knowledge to circumvent such protection is not) into the 8.0 license manager that there were two explicit areas that seemed to require a license in order to run. One was the scripting feature that we all know about, and the other was something like XNode development. Knowing scripting, I did investigate a bit further into it, but not being familiar with XNodes I never went further on that. Maybe they changed the XNode protection in 8.20, or there are two different aspects of XNodes that are protected differently. I do not know and won't have time to investigate in the near future. Rolf Kalbermatter
  4. I also have to warn that my opinion is not completely unbiased, as I started my LabVIEW career as an application engineer at NI and then went on to work for an alliance member. When I started at NI I was shown this software and some manuals (which were admittedly less complete and much smaller in content than nowadays, but they were printed paper) and then I got a chance to attend a LabVIEW course or two. And those courses really helped a lot. However, I have to say that I had previous programming experience in Pascal and a little C, so programming in itself wasn't a strange matter to me. With my electrical engineering background I was delighted to see how much the LabVIEW diagrams resembled the electrical schematics I had learned to think in earlier, so I adopted it quite fast, but nevertheless felt that the course really gave me an advantage. It wasn't so much about the programming itself but about discovering all the little features, editor shortcuts, and tips and tricks that the course offered, and also the interaction with the teacher and other students. Later I taught LabVIEW courses myself, as an application engineer and also at the alliance member, and I have to say that I still learned a bit during each of those courses. My experience during these courses was that there were two types of people. The ones that already knew programming usually profited a lot more from the course than the ones that had to be taught the basic principles of programming first. Three days is simply not enough to teach someone a whole bunch of programming constructs and something about datatypes and at the same time also have them get familiar with a new software environment such as LabVIEW. But I think that is the same with any software course. I doubt there is a Matlab course that would be useful to anyone who has to be taught the basic principles of mathematics first, for instance. The only problem I always felt was that NI likes to market LabVIEW as the tool for non-programmers. In my view that is not entirely correct. Without some basic understanding of loops, conditionals, arrays and scalars you simply can't create a good working computer application. The advantage of LabVIEW is that these things are easier to understand and use in LabVIEW for most people, since people tend to be more visually oriented than text oriented. Oh yes, I took the courses in Austin and on a Macintosh, since LabVIEW for Windows didn't exist then, and there were a few people (not NI people) in the same course who obviously had it even easier than me. They usually had the examples finished before the instructor even asked us to start on them. They were attending the class to learn LabVIEW, not programming, something which I haven't seen too often over here in Europe later when teaching courses. Rolf Kalbermatter
  5. The idea about FPGA might be interesting here. Earlier versions of LabVIEW did not support (or should I say use) fixed-size arrays, although the type descriptor has explicitly had this feature documented for as long as the document about type descriptors has existed. FPGA was to my knowledge the first environment really needing fixed-size arrays, so they pushed that. The particular problem you see also seems a bit like the pains of the early attempts to get the constant folding optimization into LabVIEW. I'm not sure if the FPGA Toolkit adds this feature to the LabVIEW environment, but I rather think that it is there independent of the existence of the FPGA Toolkit (I can't currently check, as I have the FPGA Toolkit installed too). Rolf Kalbermatter
  6. Well, I didn't say to turn off all optimizations. Certainly not the ones that are already working fine, and in the particular case of 6.0.1 it was not about inplaceness as such. It was about a more aggressive inplaceness optimization that would completely optimize away bundle/unbundle constructs when combined with certain shift register constructions. The same code had worked fine for several years in previous LabVIEW versions without so much as a hint of performance problems and suddenly blew up in my face. The Queue port was also not such a nice thing, but I got off easy there, since I didn't use queues much; I had gotten used to creating my intelligent USR (uninitialized shift register) global buffer VIs for virtually anything that needed queue-like functionality. But I think there is a big difference between bugs introduced through things like constant folding and bugs introduced in new functionality. I can avoid using queues or whatever quite easily, but I can hardly avoid using shift registers, loops and basic data structures such as arrays or clusters, since they are the fundamental building blocks of working in LabVIEW. So if something in that basic functionality suddenly breaks, that LabVIEW version is simply not usable for me. The same goes for fundamental editor functionality. Just imagine that dropping any function node on the diagram suddenly crashed on every fourth installed computer somehow. Other bugs can be very annoying, but you can still keep working in that LabVIEW version and write impressive applications with it if you need to. While we would all like bug-free software, I think almost everyone has accepted that this is something that will never really happen before LabVIEW 77 with its 5th generation AI and environment interfaces with causality influencer. But the basic functionality of LabVIEW 2 should not suddenly break. Rolf Kalbermatter
  7. Well, nothing against German :beer: but the Belgians really have a few of the best ones that I know of. And no, I don't say that because he is a colleague. Rolf Kalbermatter
  8. Ah, I see. Well, I myself have yet to do my first RT/FPGA project in 8.x. 7.1.1, while having some quirks, actually still works great for that. Rolf Kalbermatter
  9. Most likely you are doing something wrong when calling your DLL. There is no way LabVIEW should be able to access memory controlled by your DLL outside of calls to your DLL. The most likely cause of such crashes is actually that you pass too small a buffer to a DLL function that tries to write to that buffer. In C the caller (i.e. you as LabVIEW programmer) has to allocate the buffer. LabVIEW cannot even guess how big such a buffer should be, so you have to tell it. You do that by creating the LabVIEW array or string with functions such as Initialize Array with the necessary size before passing it to the Call Library Node. Most people think that an empty array constant as input is enough, since that is how it would work in LabVIEW. But the C function cannot dynamically resize the array to the size it would require, so it just assumes that the caller has done that already. LabVIEW, however, cannot resize it automatically before passing it to the C function, since it has no idea if that array should be 10 bytes or maybe 100 MB. Passing an empty array will basically cause LabVIEW to pass a pointer to a zero-length buffer. Now your C function writes into that zero-size buffer and overwrites data it should not even look at. If you are lucky you get an illegal access exception when that memory has not yet been allocated to the process at all. More likely, however, the memory area following that pointer is already used by LabVIEW for other purposes, including its own diagrams, front panels, management structures and whatever else. If you are still a bit lucky you destroy very important information that will soon cause LabVIEW to crash. In the unluckiest case you just overwrite memory that is actually part of your VI definition in memory. Then you do a save, et voila, you have a corrupted VI on disk that might not load into memory anymore! In your case, data gets overwritten that seems not very important but renders some pointers invalid. At the end, when LabVIEW tries to properly deallocate everything it has allocated before, it stumbles over these invalid pointers and crashes. Killing your app only avoids the symptom but doesn't cure the cause. So if you get strange crashes, check if you use DLLs anywhere. If you do, and those DLLs did not come with LabVIEW itself, you should get very cautious. Stress test them as much as you can with the setup as used in your application. You may have a time bomb in your application! It may seem harmless now, but seemingly small changes to the app might move the corruption into much more sensitive areas, and then your app crashes consistently somewhere seemingly unrelated, because you added a single button to that nice user interface, and you post here that LabVIEW crashed because of adding a simple standard button to your VI. Rolf Kalbermatter
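
     To make the buffer contract concrete (a sketch with a hypothetical DLL function, not from the original post): the C side can only fill whatever memory the caller allocated, so on the LabVIEW side you would wire an Initialize Array of at least the documented size into the Call Library Node parameter before the call.

         #include <string.h>

         /* hypothetical exported DLL function: fills a caller-allocated buffer */
         __declspec(dllexport) int GetDeviceName(char *buffer, int bufferSize)
         {
             const char *name = "MyDevice";  /* made-up payload */
             if (buffer == NULL || bufferSize <= (int)strlen(name))
                 return -1;                  /* refuse a missing or too small buffer */
             strcpy(buffer, name);           /* writes only into caller memory */
             return 0;
         }
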
  10. class is a C++-only thing and therefore will never work with the Call Library Node. With wrapper I meant writing a standard C function for each method you want to call in your class. Probably something like the following, but my C++ is very rusty and not really good:

         #ifdef __cplusplus
         extern "C" {
         #endif

         int FirstMethod(int arg1, int arg2);
         ......

         #ifdef __cplusplus
         }
         #endif

         static My_Class mc;

         int FirstMethod(int arg1, int arg2)
         {
             return mc.FirstMethod(arg1, arg2);
         }

         etc......

      You can do the same for dynamic classes, but then you will have to pass the object pointer as an extra parameter in your wrapper functions, and you also need to create extra functions to create and dispose of the object pointer; see the sketch below. Rolf Kalbermatter
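
     A sketch of the dynamic variant mentioned above (My_Class and all wrapper names are assumptions): an opaque handle that the LabVIEW diagram carries between the create, method and dispose calls.

         #include "My_Class.h"   /* hypothetical header declaring My_Class */

         #ifdef __cplusplus
         extern "C" {
         #endif

         typedef void *MyHandle;

         MyHandle My_Create(void);
         int My_FirstMethod(MyHandle h, int arg1, int arg2);
         void My_Dispose(MyHandle h);

         #ifdef __cplusplus
         }
         #endif

         /* each wrapper casts the handle back to the real C++ object */
         MyHandle My_Create(void)
         {
             return new My_Class();
         }

         int My_FirstMethod(MyHandle h, int arg1, int arg2)
         {
             return static_cast<My_Class *>(h)->FirstMethod(arg1, arg2);
         }

         void My_Dispose(MyHandle h)
         {
             delete static_cast<My_Class *>(h);
         }
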
  11. There is a little problem with this optimization. As long as it works sometimes and NEVER creates wrong results, I don't care. But if I create a VI that does something specific and logical and the result that comes out is simply completely off track, I get very pissed. This happened with certain optimizations in shift register handling in the obnoxious 6.0.1 version and other versions before and after, and this whole constant folding has again caused quite a bit of trouble. The difficulty simply is: you do not expect LabVIEW to calculate 1 + 1 = 3, and when you get such a result you search for hours, questioning your sanity, before you throw in the towel and decide that it really is a stupid LabVIEW bug. I can live with LabVIEW editor bugs or not always correctly working new features, but I certainly don't accept LabVIEW creating completely wrong code from diagrams that have worked for several versions before. As such I do not want constant folding unless I can rely on it not to cause the compiler to create wrong results. If I need optimization I can think about the algorithm myself and find a variant that is quite likely just as fast or even better than what LabVIEW could possibly come up with from a different, suboptimal algorithm. My stance here has been and always will be: I would rather have suboptimal and possibly even slow code generated that produces correct calculations than hyper-fast code that calculates into the mist. The only exception to this might be if the miscalculation were to the advantage of my bank account. But in this case the bad programmer is not the one USING LabVIEW. I know how hard optimization is, but I would still rather have a choice in this than having to start doubting LabVIEW itself every time a result does not match my expectations. And to be honest, I have this choice by still using LabVIEW 7.1.1 for basically all of my real work. Rolf Kalbermatter
  12. It is definitely a bug. Since the flattened data output string is not the same, this will cause problems. There are many cases where Flatten/Unflatten are just used to get the data into a stream format and back, and the context alone is enough to determine what data has been flattened, so that parsing the type descriptor (which until a few LabVIEW versions ago was documented, although no official VIs for this parsing were available) was absolutely unnecessary. And someone at NI obviously thought that the type descriptor was superfluous too (I definitely don't agree, but have no influence on that), otherwise they wouldn't have removed it in LabVIEW 8, would they. Rolf Kalbermatter
  13. While I agree that this is not really the way to deal with passwords, I wonder if there is a real problem. Are you going to distribute the lvproj file together with your VIs? Not sure why you would want to. Rolf Kalbermatter
  14. My solution was to install them in a specific LabVIEW version and then copy all the files from this version to all other LabVIEW installations manually. It is a bit of work, but much less than this stupid install, rename, uninstall carousel. Rolf Kalbermatter
  15. Actually, not necessarily. This is behaviour that also occurs in C, at least with the compilers I know of, and it has its uses when you read in data from a stream in a certain format but later want to reinterpret some of that data. I know for sure of a few cases where I have relied on this fact, and changing that now would certainly break lots of people's VIs. Rolf Kalbermatter
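
     As an illustration of the kind of stream reinterpretation meant here (a sketch, not taken from the original discussion): the same raw bytes are read once and later viewed as a different type.

         #include <stdio.h>
         #include <string.h>
         #include <stdint.h>

         /* read 8 raw bytes, first view them as two int32 values, then
            reinterpret the very same bytes as a double */
         void reinterpret_example(FILE *stream)
         {
             unsigned char raw[8];
             int32_t ints[2];
             double d;

             if (fread(raw, 1, sizeof(raw), stream) == sizeof(raw))
             {
                 memcpy(ints, raw, sizeof(ints));   /* first interpretation */
                 memcpy(&d, raw, sizeof(d));        /* later reinterpretation */
                 printf("%d %d / %g\n", (int)ints[0], (int)ints[1], d);
             }
         }
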
  16. Well, it's not the use people here are interested in ;-). It's how you made them! As far as I know there are only two possibilities:
      - You got a license from NI somehow (and have signed an NDA, or someone has on your behalf), and then posting this here could get you in trouble.
      - You hacked LabVIEW to not do a license check anymore, or something like that, and then you are also in trouble, at least in certain countries on this globe that think that even thinking about circumventing anti-copy protection is a major crime.
      Rolf Kalbermatter
  17. What do you mean by a static class? Standard C has, to the best of my knowledge, nothing with that name; it rather sounds like a static C++ class you are talking about. And the Call Library Node cannot deal with anything that is C++ related. What you probably could do, however, is write standard C function wrappers for all the methods of your static class and export those from your DLL. Rolf Kalbermatter
  18. That is not a theory but a fact. Rolf Kalbermatter
  19. The __export keyword is the Borland way of hinting to the linker that the function needs to be exported from the DLL. MS Visual C uses __declspec(dllexport) for that. Another way of doing this is to create a .def file. But all of these things are highly compiler specific. The

         #ifdef __cplusplus
         extern "C" {
         #endif

         #ifdef __cplusplus
         }
         #endif

      lines tell the compiler to use standard C function name handling (they disable C++ name mangling). You need to put these around the function prototype declarations of all functions that get exported from the DLL. Typically this is done at the beginning and end of the header file(s) that define the exported function prototypes, but for simple DLLs where you only have the C source file you can also put them around the function implementation itself. Please note that C++ name mangling is also compiler specific, so if you want to create a library or DLL to be used by other compilers than the one used to create it, disabling name mangling for exported symbols is usually a good idea. I'm sure 5 minutes of smart googling would have given you these answers too. Rolf Kalbermatter
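
     Put together, a minimal export header could look like this (a sketch; the macro and function names are made up):

         /* myexports.h */
         #ifdef _MSC_VER
         #define MY_EXPORT __declspec(dllexport)   /* MS Visual C way */
         #else
         #define MY_EXPORT                         /* or use a .def file */
         #endif

         #ifdef __cplusplus
         extern "C" {    /* plain C names, no C++ mangling */
         #endif

         MY_EXPORT int MyFunction(int value);

         #ifdef __cplusplus
         }
         #endif
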
  20. While reentrancy is used in both cases here, it is not exactly the same thing. Reentrancy for functions just means that they are basically protected from modifying state that could collide if the same function is called again before the first call has finished, including recursive calls. Reentrancy for VIs is a bit more complicated. For a start it means the same as for functions, but in order to achieve that, LabVIEW does quite a bit of extra resource handling, and this resource handling is quite expensive. Making all your VIs reentrant without need can make the difference between an application that uses a few MB of memory and one that uses hundreds of MB, without being any faster, and possibly even getting slower due to the vast memory usage. A VI should only be made reentrant if it can contain some lengthy operation that actually waits (blocks on some external event or such) and needs to be invocable multiple times in parallel. A VI that executes fast, or does not really idle for some time, will almost always lower the performance of the application when it is set to reentrant execution. Another use of reentrancy for VIs is the special case of uninitialized shift registers maintaining instance-specific data storage, instead of the application-global data storage of normal VIs. Rolf Kalbermatter
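
     To illustrate the function side of this in C (a sketch, nothing to do with LabVIEW internals): the non-reentrant variant keeps its result in shared static storage, while the reentrant variant makes the caller provide it.

         #include <stdio.h>

         /* NOT reentrant: all callers share one static buffer, so
            overlapping or recursive calls overwrite each other's result */
         const char *format_value_bad(int value)
         {
             static char buffer[32];
             sprintf(buffer, "value=%d", value);
             return buffer;
         }

         /* reentrant: no shared state, the caller owns the buffer */
         const char *format_value_good(int value, char *buffer, size_t size)
         {
             snprintf(buffer, size, "value=%d", value);
             return buffer;
         }
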
  21. This is really all about TCP/IP routing tables and how they are set up. In your case, using two exclusive subnet ranges, the router has little problem deciding where a packet should go. You can easily influence the most simple routing by setting up appropriate default gateway addresses for each interface, so that addresses that do not match one of your subnets are all forwarded to a specific gateway such as a network router (or DSL modem). However, once you have two different subnets that can both be the default for certain address ranges outside the actual subnets themselves, you won't be able to get things working right without explicit routing tables. Setting up routing tables in corporate network routers is a common task for network administrators, but doing so on workstations is seldom done, and in the case of Windows it is usually something to do on the command line. http://support.microsoft.com/kb/140859 might be a good starting point for that. Rolf Kalbermatter
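
     For example (all addresses made up), a static route telling Windows to reach one remote subnet through a specific gateway looks like this on the command line:

         route ADD 192.168.2.0 MASK 255.255.255.0 192.168.1.1

     Adding the -p option makes such a route persist across reboots.
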
  22. Subversion stores all file differences using a binary diff algorithm. And no, most diff utilities would not work when only getting the diff part from the repository; they usually need the full file for both the original and the modified version. Your guess that it only seems to return the binary diff part is quite likely correct. Why this is, I'm not sure. It might indeed be a difficulty in setting up PushOK correctly to request the correct diffing result, but I'm absolutely sure that this is configurable somewhere. Not using PushOK myself, however, I can't be of much help here. Rolf Kalbermatter
  23. Sounds like data corruption of your VIs, and something I certainly haven't seen myself so far, and most other LabVIEW users haven't either. So I really suspect something that is particular to your setup. Possible causes might be bad RAM or other hardware components such as a slowly failing hard disk. Another possibility, which some people seem to underestimate, is the fact that external code you are calling is completely free to overwrite memory that might be vital to LabVIEW, and when you save your VIs back you might happen to save corrupted data. Unless you use self-made external code (Call Library Nodes or CINs) or such things from less than official sources, I would strongly suspect the hardware to be at fault. Rolf Kalbermatter
  24. With LabVIEW 8, and 7.x too, things are not as easy anymore. There is a lot that gets installed, and a lot of it might sometimes be redundant, but at other times it can be absolutely crucial to your application. I guess NI had the choice of making this fully configurable, with a 20-page dependency chart that tells you which VI functionality requires which external module and which external module again depends on other external modules, or of creating a more catch-all installer system. The first choice would have caused a lot of documentation work, additional testing and such, and prompted 90% of LabVIEW users to complain that creating a LabVIEW executable installer is a pain in the a**. The other solution is simpler to deal with, needs less documentation, is MUCH easier for technical support to handle, and costs a bit of media space. But with 500GB hard disks and 4.5GB DVD media, that is currently a concern a lot of users don't even think about. As to VISA: no, VISA32.dll is only a wrapper interface. The actual interfaces for the different VISA IO resources are in various other DLLs that need to be registered correctly for VISA32.dll to find them, and there is at least one more VISA runtime support DLL that needs to be installed too. This does not include the lower-level drivers such as NI-488, NI-Serial, NI-PXI etc., which also need to be installed if you happen to use them and are usually part of a default NI-VISA installation as well. And in order to configure the actual VISA resources at all, you will want to have Measurement & Automation Explorer installed too, which is a rather big beast with quite a few more dependencies. Rolf Kalbermatter
  25. Hmm, I know I have seen VISA return interface-specific information. If I only knew where! I guess if you really want to go the Windows API route you will have to look into SetupAPI (no, this is not pretty, and an external DLL will most likely help facilitate that significantly), or you could try to look in the registry. A starting point here might be http://msdn2.microsoft.com/en-us/library/aa909922.aspx. Rolf Kalbermatter
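
     As one registry-based starting point (a sketch; this only lists the serial devices Windows knows about, which is just a subset of what VISA can report; link with advapi32.lib):

         #include <windows.h>
         #include <stdio.h>

         /* list serial devices from HKLM\HARDWARE\DEVICEMAP\SERIALCOMM */
         int main(void)
         {
             HKEY key;
             if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "HARDWARE\\DEVICEMAP\\SERIALCOMM",
                               0, KEY_READ, &key) == ERROR_SUCCESS)
             {
                 char name[256], value[256];
                 DWORD i = 0, nameLen, valueLen, type;
                 for (;;)
                 {
                     nameLen = sizeof(name);
                     valueLen = sizeof(value);
                     if (RegEnumValueA(key, i++, name, &nameLen, NULL, &type,
                                       (LPBYTE)value, &valueLen) != ERROR_SUCCESS)
                         break;
                     if (type == REG_SZ)
                         printf("%s -> %s\n", name, value);
                 }
                 RegCloseKey(key);
             }
             return 0;
         }
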