
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. QUOTE (ragu @ Feb 3 2009, 12:10 AM) This is not a simple topic. The right answers for you will depend on how much you already know about LabVIEW, C programming, OS details, etc. Also, the information is out there. There are several good Knowledge Base or Application Note articles on www.ni.com that have lots of details in these respects. Some of them go into details you will only want to know if you have a really sound understanding of how memory management is done in C programs. The WIKI here on LAVA also has some good information which is not as in-depth but should be good for an average LabVIEW programmer. Go to events like LabVIEW Days or User Group Meetings. There are often presentations about advanced topics with LabVIEW, such as how to make a well-performing application versus a bad one, which is also dependent on some basic understanding of LabVIEW memory management. Go and search for this info and come back when you have more specific questions. We cannot give you a 10 hour lecture about this topic here, and it would not give you all the information that is possible anyhow. Rolf Kalbermatter
  2. QUOTE (Mark Yedinak @ Jan 21 2009, 11:36 AM) If you are talking about the MS RPC protocol you may want to think again about that approach. This is not just a simple protocol but the full blown DCE RPC specification with MS proprietary extensions. The various network protocol layers this can all be embedded in, and the various security providers the different layers can use, including several encryption layers, will likely make this a never-ending project to implement in LabVIEW. As Mark has explained, RPC is just a term used for anything that can allow remote execution of code through some kind of network connection. In that sense even VI Server is an RPC protocol. So you really need to find out the dialect of RPC you are talking about. Rolf Kalbermatter
  3. QUOTE (bsvingen @ Jan 18 2009, 04:55 PM) The real reason why you need to compile the wrapper in C++ is that only C++ can resolve the C++ typical things about how to call object methods and access public object variables. A class is in fact a sort of structure, but how these things are laid out in memory and organized is something very C++ specific that a C compiler can't easily deal with. There are exceptions to this such as COM, where even the binary layout of the C++ object is strictly defined, so that given proper C accessor macros you can actually access a COM object both from C and C++, but that is a specific implementation of object classes in an attempt to make them compiler independent. The problem of mutations of the C++ object and the change in possible methods is solved there with the introduction of interface identifiers one has to query with the IUnknown interface; once you have released an interface you can not ever make changes to it but instead need to define a new interface and make that available too through the IUnknown QueryInterface method. For normally defined C++ object classes there is no real standard at all for how the objects have to be organized in memory and how various things like virtual methods, inheritance and such are implemented on a binary level. This also makes it very difficult to call C++ link libs from a different compiler than the one that created them. I usually just define an opaque pointer in the wrapper, either void * or something like: struct <name>_t; typedef struct <name>_t *<name>; The second does make it a bit clearer in the corresponding function prototype what is meant to happen, and requires an explicit cast in the call of any object method or public variable access, but I wouldn't see why you would have to do any modifications on the original C++ source code at all.
As far as LabVIEW's Call Library Node is concerned, those object pointers are just a uInt32 for me, and in LabVIEW 8.6 you would even use the Unsigned Pointer-sized Integer, which will take care to keep your Call Library Node correct if you ever happen to go to LabVIEW on a 64-bit OS and port your C++ library and wrapper to 64 bit too. Rolf Kalbermatter
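The opaque-pointer wrapper pattern described in the post above can be sketched as follows. This is a minimal illustration, not Rolf's actual code: the `Counter` class and all function names are invented for the example.

```cpp
#include <cassert>

// Hypothetical C++ class we want to expose through a C-callable wrapper.
class Counter {
public:
    void add(int n) { total_ += n; }
    int total() const { return total_; }
private:
    int total_ = 0;
};

// Opaque handle: C callers (and LabVIEW's Call Library Node) only ever see
// this incomplete type, never the C++ class layout, so the wrapper isolates
// them from compiler-specific object representation.
struct counter_t;
typedef struct counter_t *CounterRef;

extern "C" {
CounterRef counter_create(void)       { return reinterpret_cast<CounterRef>(new Counter); }
void counter_add(CounterRef c, int n) { reinterpret_cast<Counter *>(c)->add(n); }
int  counter_total(CounterRef c)      { return reinterpret_cast<Counter *>(c)->total(); }
void counter_destroy(CounterRef c)    { delete reinterpret_cast<Counter *>(c); }
}
```

On the LabVIEW side, `CounterRef` would then be configured in the Call Library Node as a pointer-sized integer, as the post suggests for 8.6 and later.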
  4. QUOTE (Dave Graybeal @ Jan 16 2009, 03:39 PM) Someone did bork up LabVIEW 8.6 mouse click position detection in various cases very badly. It seems a bit similar to the problem when you have a tab control and try to place a free label on it. The location where it will be dropped is usually very different from where you clicked. I found the actual offset is in fact dependent on the distance of the top left corner of the tab control to the 0/0 point of the front panel/pane. And I recently read about another similar mouse click offset problem somewhere. Rolf Kalbermatter
  5. QUOTE (Poom-Thailand @ Jan 17 2009, 05:05 AM) If you have installed the Vision Development Toolkit from the LabVIEW DVD go into Add/Remove Programs Control Panel and do a repair of your install. If you haven't yet done so then install that package. Rolf Kalbermatter
  6. QUOTE (sachsm @ Jan 14 2009, 07:44 PM) I did look at the VI Server remote protocol some time ago (around LabVIEW 7), to the point where I had an almost working LabVIEW VI library communicating with a VI server, and this is what I found: 1) All input terminal data is packed into a single packet. The same goes for the output terminal data that is sent back. 2) Some header, with data length information about the packet and a packet type identifier, and then the data, mostly packed using the LabVIEW flattened stream format. 3) No real idea. I would guess that the VI server was, at least back then, running in the UI thread. It may not be anymore, but that has all kinds of implications with data handling towards the rest of a VI diagram. 4) The client requests some action using a single packet where all necessary data is included, and the server answers with another packet with all relevant data in it. So it is basically one packet per VI server transaction in each direction, with VI server transactions being Open VI Reference, Read/Write Attributes, Execute Method, Call By Reference, and Close Reference. So I think the VI Server protocol is about as efficient as it can be. The only problem I had with using VI Server on embedded systems was with an old FieldPoint network module recently. I had hoped to take a shortcut by simply enabling the VI Server protocol on it and just executing VIs remotely to interact with the controller from the host program. However, that killed the performance of the FP controller so much that even when I did not actively interact with the VI server, my main control program on there was starting to lag badly. The solution was to implement some extra methods in the already existing TCP/IP server that I had programmed in that application for reading and writing data points from and to the tag engine on it. After that, and disabling VI Server, the application on the controller behaved fine. And before you ask: no, the VI library I have worked on is not available for review.
Rolf Kalbermatter
  8. QUOTE (jdunham @ Jan 14 2009, 11:21 AM) Actually you can limit both the VIs that will be accessible over remote VI Server connections as well as the IP addresses that can connect to the server. The syntax for the keys is not documented nor easily described in full, but configuring everything in the LabVIEW options and copying the keys over to a built application's ini file is best. QUOTE I don't think LV scripting will work through this connection. I don't think you could use this connection to build new VIs on the target system which could do very bad things like list or read or delete the contents of your hard drive without specific advance knowledge of VIs existing on the remote system. (That's an interesting discussion on its own, especially if I am wrong). There are a lot of VI Server operations (including almost all scripting) that are not even present in a built application. Many other methods and properties are also not accessible at all. And with the above settings you can tighten it up so much that there is really not much an attacker can do other than sending it malformed packets to try to make it crash. Rolf Kalbermatter
  10. QUOTE (Dan DeFriese @ Jan 14 2009, 03:02 PM) In LabVIEW 8.5 you can configure the Call Library Node to get a path that receives the DLL to call at runtime. Before LabVIEW 8.5 you indeed have to encapsulate all calls to DLLs that might not be available into VIs that get called dynamically through VI Server. You can load the VI with an Open Instrument Node and either check its execution state or directly try to call it through the Call By Reference Node. Call By Reference will return an error if the VI was not executable (which it will not be if it could not resolve a DLL call). Rolf Kalbermatter
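The defensive pattern the post describes — resolve the library at run time and fail gracefully instead of refusing to load — can be sketched in C++ terms. This uses POSIX `dlopen`/`dlsym` as a stand-in for Windows `LoadLibrary`/`GetProcAddress`; the library name `libm.so.6` and the whole setup are illustrative assumptions, not anything from the original post.

```cpp
#include <dlfcn.h>   // POSIX dynamic loading; LoadLibrary/GetProcAddress on Windows
#include <cassert>

typedef double (*unary_fn)(double);

// Try to resolve a function from a shared library at run time.
// Returns true and stores the function pointer only if both the library
// and the symbol are present, so the caller can skip the feature otherwise
// instead of crashing or failing to start.
bool resolve(const char *lib, const char *sym, unary_fn *out) {
    void *handle = dlopen(lib, RTLD_NOW);
    if (!handle)
        return false;                 // library not installed: report, don't crash
    *out = reinterpret_cast<unary_fn>(dlsym(handle, sym));
    return *out != nullptr;           // symbol missing is also a soft failure
}
```

This mirrors what the Call By Reference approach achieves in LabVIEW: the check happens at the moment of use, so the rest of the application stays runnable when the DLL is absent.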
  12. QUOTE (shoneill @ Jan 14 2009, 04:55 AM) I think he is either amusing himself very much here or completely beyond any help. Rolf Kalbermatter
  13. QUOTE (Antoine Châlons @ Jan 14 2009, 03:31 AM) Hmm, didn't encounter that so far. I would expect the upgrade process to take care of that. It does with just about anything else with the immensely reworked File IO functions. Writing new code is of course another beast if you are used to the old default value. Rolf Kalbermatter
  14. QUOTE (OlivierL @ Jan 13 2009, 12:16 PM) Build Array cannot cause a memory leak in the true sense of the word. A memory leak really means that memory got allocated and the reference got lost somehow without the memory being freed. The unlimited Build Array function is really just a memory hog, meaning it will accumulate memory over and over without ever releasing it. It is not a memory leak in the sense that LabVIEW knows about it very well and will eventually release it if you unload the VI containing that huge accumulated array. QUOTE (jdunham @ Jan 8 2009, 05:52 PM) No, there are plenty of things the profiler won't report. I don't believe that shows global memory like queues and notifiers or stuff allocated by ActiveX nodes that you've invoked. Not sure about queues, really. It might not track them. For memory allocated in external components like ActiveX servers or even DLLs, the Profiler has no way of tracking that down, as it is not allocated under its control. Rolf Kalbermatter
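The leak-versus-hog distinction drawn above can be made concrete in C++ terms. This is an illustrative sketch only; neither function comes from LabVIEW itself.

```cpp
#include <vector>
#include <cstdlib>
#include <cassert>

// A true leak: the only reference to the allocation is lost, so no one can
// ever free it. The memory manager has no idea the block is now garbage.
void leak() {
    int *p = static_cast<int *>(std::malloc(1024));
    p = nullptr;      // reference gone; those 1024 bytes are unrecoverable
    (void)p;
}

// A memory hog (what an unbounded Build Array amounts to): the program still
// owns the buffer and will release it eventually (for LabVIEW, when the VI is
// unloaded), but it only ever grows and is never trimmed while running.
std::vector<double> samples;   // stands in for the ever-growing array

void append_sample(double v) {
    samples.push_back(v);      // grows without bound, but never "lost"
}
```

The practical fix in LabVIEW is the same as here: bound the buffer (e.g. use a fixed-size circular buffer with Replace Array Subset) instead of appending forever.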
  15. QUOTE (Variant @ Jan 13 2009, 06:38 AM) Can you be a bit more clear with your question? I for one still wonder what you meant by this sentence. Rolf Kalbermatter
  16. QUOTE (Maca @ Jan 13 2009, 01:43 AM) They would be violating copyright and their license if they did. And if he has an older PID Toolkit it should still work in 8.5. It used to be simply a collection of VIs in older versions with diagrams intact so 8.5 can read it without problem. Rolf Kalbermatter
  17. QUOTE (jgcode @ Jan 9 2009, 03:02 AM) It's a Beta after all! File a bug report at MS QUOTE (Jim Kring @ Jan 12 2009, 09:37 PM) I wonder if Vista will just be like Windows ME and we can all just agree to forget about it, eventually. I think that is what will happen eventually. However, I don't think they can avoid things like UAC in Windows 7, but they will probably implement it a bit more sanely so that you won't have to click several times through a password dialog simply to install software. Windows 7 seems to be the rigorously redesigned OS that Vista should have been, before they found out that doing so would take so long that they wouldn't have a successor to XP for maybe 8 years, but they needed to get something out sooner as they were already way behind the 3-year release schedule for a new Windows version. Rolf Kalbermatter
  18. QUOTE (dmpizzle @ Jan 9 2009, 04:28 PM) The LabVIEW Write Text function node has a mode selection (right-click pop-up menu) called "Convert EOL". When activated, the function will convert each instance of LF, CR, or CR/LF into the end-of-line identifier for the current platform. Each platform has its own native EOL, with Mac using CR, Unix LF, and Windows CR/LF. And I think that LabVIEW for Mac OS X still uses CR, as it is really a Carbon application despite the underlying Unix kernel. When you want to write text with a specific EOL you have to format it accordingly and disable the above-mentioned option in the Write Text function. Rolf Kalbermatter
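The conversion that "Convert EOL" performs can be sketched as a small function. This is a from-scratch illustration of the behavior described above, not LabVIEW's actual implementation.

```cpp
#include <string>
#include <cassert>

// Normalize any mix of CR, LF and CR/LF line endings to a chosen EOL string,
// mimicking what the "Convert EOL" option does for the native platform EOL.
std::string convert_eol(const std::string &in, const std::string &eol) {
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '\r') {
            out += eol;
            // A CR immediately followed by LF is one Windows line ending,
            // not two, so swallow the LF half of the pair.
            if (i + 1 < in.size() && in[i + 1] == '\n')
                ++i;
        } else if (in[i] == '\n') {
            out += eol;                 // bare LF (Unix)
        } else {
            out += in[i];
        }
    }
    return out;
}
```

Disabling the option, as the post recommends for writing a specific EOL, corresponds to skipping this normalization entirely and writing the bytes as formatted.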
  19. QUOTE (Antoine Châlons @ Jan 10 2009, 06:12 AM) Option 3 was documented somewhere in the old LabVIEW days (LabVIEW 4 or 5) but it seems it got lost somehow since, although it still works. Rolf Kalbermatter
  20. QUOTE (mattdl68 @ Jan 8 2009, 05:39 PM) This has been in the installed online help of LabVIEW for quite some time, at least since 8.0, although with some small modifications in each version to adapt to changes and new features of the Call Library Node configuration. LabVIEW Help Menu->Search the LabVIEW Help: Index Tab: enter call library This will eventually get you to a topic under: Fundamentals->Calling Code Written in Text Based Languages->Concepts->Calling Shared Libraries (DLLs) It is about the meaning of the options in the configuration of the Call Library Node. But it won't help you with your previous question: how to find out in which order and with what parameters to call a particular API. Of course, it is actually about calling an API that is typically written in C, and the whole shared library concept is based on C too, so understanding at least the basic principles of C datatypes and their handling is really a prerequisite to even understand what all those options and things mean. But that is not LabVIEW's fault, other than that it does not require you to know much about those things for programming some simple LabVIEW-only VIs. For more complex programming you are more or less doomed, even in LabVIEW, if you do not have some good basic programming knowledge, although the more C-specific things you don't really need to know unless you want to venture into the Call Library Node or, God forbid, the Code Interface Node. Rolf Kalbermatter
  21. QUOTE (Irene_he @ Jan 8 2009, 12:07 AM) No, I think you are right. Except the person knowing 20% does not know that percentage. That person likely thinks they know almost everything they would need to do the task, only to find out later that there was about 4 to 5 times as much to learn before they could actually succeed. The problem is that some people get discouraged halfway through it and then abandon the idea. I have some of those projects too. Another quite common thing is that the last 10% of the work requires 90% of the time. That is where most people stop. Rolf Kalbermatter QUOTE (mattdl68 @ Jan 7 2009, 08:44 PM) Thanks rolfk, the link above will give you some history to my nightmare.......lol. What is confusing to me is how to find the order of the DLLs and functions to get what I need. There has to be some documentation out there that would give you some idea of the order in which to call functions.........one would think....lol. I would think when trying to find USB devices using C++ they would need the order as well. Of course! And apart from examples in the SDKs (and DDKs), and sometimes open source projects where you can investigate how certain things are done, that is where the creative art of programming starts. With trial and error, combinatorial logic, experience with certain types of APIs (if you have programmed a WINAPI application, writing a MacOS X application can seem a really illogical way of doing things, and vice versa) and some sort of magic (I have often tinkered for one or more days on how to get something to work, only to wake up one morning suddenly having an idea that turned out to be the start of a solution), you go step by step about programming something. Programming in LabVIEW itself is not that much different, but it's on a much higher and more abstract level than tinkering with system APIs.
And there is really no magic LabVIEW could employ to make working on the system API level as easy as working on the LabVIEW diagram level, besides having people like the NI elves and other third party developers write impressive intermediate-level software such as DAQmx to make the work for the LabVIEW programmer more bearable. It's about the way your brain cells work and have been trained. On the same note, I've seen several brilliant system programmers cringe and falter at an attempt to produce a more than trivial LabVIEW application. Rolf Kalbermatter
  22. QUOTE (jdunham @ Jan 7 2009, 12:45 PM) They never can be conflicting. TCP/IP mandates that at least one of these (TCP source address, TCP target address, TCP source port or TCP target port) must be different for any two connections; otherwise the routing engine would get utterly confused. Since the listen socket (the device) uses a single address and port, and the connections come from the same computer, the connections must be opened from different ports. Yes, LabVIEW's TCP Open Connection uses automatic port selection and always uses a currently unused port. You can specify a port to open from, but if that port is already in use TCP Open Connection will fail. And if the device allows multiple connections it will receive an individual socket for each incoming connection, so it can easily determine from which source a specific command comes. If it can't handle multiple connections it will normally simply not accept new connections, and the second and any further connection attempt will fail with a timeout. It could also be that the device accepts multiple connections in principle, but its execution engine is buggy in the sense that it cannot handle multiple connections without creating a mess. That would be a design problem of the device. Rolf Kalbermatter
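The automatic source-port selection described above can be observed directly with BSD sockets. The sketch below (a portable illustration, not LabVIEW code) opens a loopback listener on an OS-assigned port and then makes two connections to it; each connection gets its own ephemeral source port, which is exactly how the two connections remain distinguishable.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cassert>

// Create a listening TCP socket on 127.0.0.1 with an OS-assigned port
// (port 0 asks the OS to pick one, like LabVIEW's automatic selection).
int make_listener(uint16_t *port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in a{};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = 0;
    bind(s, reinterpret_cast<sockaddr *>(&a), sizeof a);
    listen(s, 5);
    socklen_t len = sizeof a;
    getsockname(s, reinterpret_cast<sockaddr *>(&a), &len);
    *port = ntohs(a.sin_port);
    return s;
}

// Connect to the listener and return the client's auto-selected local port.
uint16_t connect_and_get_port(uint16_t server_port) {
    int c = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in a{};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = htons(server_port);
    connect(c, reinterpret_cast<sockaddr *>(&a), sizeof a);
    socklen_t len = sizeof a;
    getsockname(c, reinterpret_cast<sockaddr *>(&a), &len);
    return ntohs(a.sin_port);   // the ephemeral source port the OS chose
}
```

Since both connections share the same source address, destination address, and destination port, the kernel must give them distinct source ports, which is the uniqueness guarantee the post describes.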
  23. QUOTE (Val Brown @ Jan 3 2009, 05:05 PM) Well, theoretically. In practice the Windows API is sometimes very complex, and the most difficult part is that it usually pulls in a very large part of the Windows header files, and some of them contain C preprocessor defines and C language constructs the DLL Wizard parser simply chokes on. I've been writing a C header file parser in the past using LabVIEW, and it was a major pain and still does not do as much as the DLL Wizard parser can handle. So even if the actual function you try to import does not use complicated parameters, it is quite possible to fail due to the pulling in of other Windows API headers that are indigestible for the parser. And if it does use complicated parameters, it will fail a lot of the time anyhow. QUOTE (mattdl68 @ Jan 3 2009, 03:38 PM) Hi Irene. Thanks for the info. I looked into doing a wrapper, but I don't know C that well. I was also trying to use the VISA Development Wizard. The wizard had errors trying to set up the setupapi.dll. The VISA wizard is not about helping you call DLLs but about creating inf files for accessing VISA devices, including USB devices. However, such USB devices should not be handled by an already installed or standard Windows driver, as then VISA cannot gain access to them. You probably meant the Shared Library Import Wizard, and here you obviously need to point it to the right header file. In the case of the SetupDi... functions this would be SetupAPI.h. Functions not defined in there but exported by SetupAPI.DLL (all the CMP_ and CM_ functions in your other screen shot) will of course cause errors, but should not affect the SetupDi... functions. However, it all could still fail due to unparsable constructs in the SetupAPI.h file or in other dependent header files. QUOTE Correct me if I'm wrong: even using C++ you need to know the order in which to call the functions. For example, say I want to find all devices currently connected to the USB?
What is the order of doing that? I'm assuming there are some outputs of some functions that feed the inputs of other functions. Is there some documentation for this? Yes, MSDN and the Windows Platform SDK. MSDN is mostly about the function prototypes and documentation, and the Platform SDK (PSDK) contains both the headers and C code samples showing how these functions can be used. QUOTE The SetupDiGetClassDevs asked for a flag, DWORD. DIGCF_ALLCLASSES is one of the flags. Is this a string? The ClassGuid parameter, is it a key which looks in the inf file supplied by the manufacturer to get the device id? But since for now I want all connected devices, this should be set to null. So would I input a string constant (null) or a 0 (U32)? Since the flag is a DWORD it must be an integer number or bit flag. What numeric value corresponds to the preprocessor symbol DIGCF_ALLCLASSES you can sometimes look up in MSDN, and otherwise you have to search in the headers of the PSDK for it. A GUID is a structure of 16 bytes length, as you seem to have been told already. This, and anything else about datatypes, can also be found on MSDN or in the headers of the PSDK. The function expects a pointer to that. If you need to pass a NULL value here, you would configure it as uInt32 (or (Un)Signed Pointer-sized Integer in LabVIEW >= 8.6, which will take care of adjusting for the different pointer size if you ever move to LabVIEW for Windows 64-bit) and then pass in a 0 integer. QUOTE (Val Brown @ Jan 3 2009, 08:14 PM) It is impressive to me how frequently that Wizard simply fails. Welcome to DLL Hell. Ever written a C compiler? Try, and you will understand! I admire the courage of the person who not only attempted the task of creating the DLL Wizard (I've written a somewhat less exhaustive C header parser in LabVIEW for another project) but also added it to the released version of LabVIEW.
How it got past the release manager I'm not sure, but that person probably had no idea how difficult a good C parser is, so he or she believed it to work fine after being shown some successful imports from not too complicated APIs. Since it will be almost impossible to remove that tool now, it will stay there, but I'm not sure there will be enough support for it once the father of this project moves on to other tasks. Rolf Kalbermatter
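The DWORD-flags and 16-byte GUID points made above can be illustrated in C++. The struct layout follows the Platform SDK definition of GUID, and the DIGCF_* values are quoted from memory of setupapi.h; verify them against the actual SDK headers before relying on them.

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// The Windows GUID is a plain 16-byte structure: 4 + 2 + 2 + 8 bytes,
// with no padding at default alignment. LabVIEW can pass it as a cluster
// of matching integers, or pass 0 in a pointer-sized integer for NULL.
struct GUID {
    uint32_t Data1;
    uint16_t Data2;
    uint16_t Data3;
    uint8_t  Data4[8];
};

// DIGCF_* values as believed to appear in setupapi.h (assumption; check the
// SDK). Because they are single-bit flags, they combine with bitwise OR.
const uint32_t DIGCF_DEFAULT         = 0x00000001;
const uint32_t DIGCF_PRESENT         = 0x00000002;
const uint32_t DIGCF_ALLCLASSES      = 0x00000004;
const uint32_t DIGCF_PROFILE         = 0x00000008;
const uint32_t DIGCF_DEVICEINTERFACE = 0x00000010;
```

So a hypothetical call like SetupDiGetClassDevs(NULL, NULL, NULL, DIGCF_PRESENT | DIGCF_ALLCLASSES) would receive a plain integer in its Flags parameter, and a LabVIEW caller would wire a numeric constant, not a string, into the Call Library Node.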
  24. QUOTE (Cool-LV @ Jan 5 2009, 07:54 PM) You can get that lib from the aforementioned Code Capture Tool. The clipboard library is only a utility to the bitmap library functionality, so said example should actually have been put in the bitmap lib instead of the clipboard lib. Ah well, if only everything were perfect! But then, it comes for free, doesn't it? Rolf Kalbermatter
  25. QUOTE (Cool-LV @ Jan 5 2009, 03:51 AM) Here is the clipbrd.llb file from the Code Capture Tool, enhanced with three functions to handle CF_HDROP. LabVIEW 6.0 format. Download File: post-349-1231192068.llb (http://lavag.org/old_files/post-349-1231192068.llb) Rolf Kalbermatter