
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. QUOTE (Fubu @ Apr 9 2008, 11:46 PM) There is probably no way around some external code interfacing through the Call Library Node, and possibly even wrapping something up in an external C wrapper DLL. Possible pointers are libusb, an open-source C library originally from Unix for communicating with USB devices, and usbhidioc, C source code showing how to access HID devices under Windows. Searching for these two terms in Google should bring up some good pages, although not many of them come with ready-made LabVIEW solutions. See the sketch below for the libusb flavor. Rolf Kalbermatter
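     As a rough illustration of the libusb route, here is a minimal sketch using the libusb-1.0 API (a later revision than what was current in 2008); the vendor/product IDs, endpoint and buffer size are placeholders you would replace with your device's values:

        #include <stdio.h>
        #include <libusb-1.0/libusb.h>

        int main(void)
        {
            unsigned char buf[64];
            int transferred;

            libusb_init(NULL);
            /* placeholder VID/PID; use the IDs of your actual device */
            libusb_device_handle *dev =
                libusb_open_device_with_vid_pid(NULL, 0x1234, 0x5678);
            if (dev == NULL) {
                fprintf(stderr, "device not found\n");
                return 1;
            }
            libusb_claim_interface(dev, 0);
            /* HID devices typically report data on an interrupt IN endpoint (0x81 here) */
            if (libusb_interrupt_transfer(dev, 0x81, buf, sizeof(buf),
                                          &transferred, 1000) == 0)
                printf("read %d bytes\n", transferred);
            libusb_release_interface(dev, 0);
            libusb_close(dev);
            libusb_exit(NULL);
            return 0;
        }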
  2. QUOTE (tmot @ Apr 7 2008, 10:01 AM) If it has a DirectX (DirectShow) compatible driver you could try to download the IMAQ for USB Webcam driver from the NI site. It is free but unsupported, and although it is meant for USB webcams, the DirectX API can also be used for video frame grabber cards. I'm not sure if NI might filter the available acquisition filters to USB devices specifically, but it is at least worth a try. Failing that, I do think going with an NI card would definitely be the fastest solution in terms of time to get this working. Rolf Kalbermatter
  3. QUOTE (rolfk @ Apr 7 2008, 03:30 AM) There is actually one other aspect here that is important. While C, and I do believe C++, will use the smallest integer that can hold the biggest enum value, there is also something called padding. This means scalar elements inside a struct will be aligned to a multiple of the element data size, or the data alignment specified through a #pragma statement or passed to the C compiler as a parameter, whichever is smaller. So in the case of the above enum type, which would result in an int8, and the following structure struct { enum one_three elm; float something; } "something" will be aligned to a 32-bit boundary by all modern C compilers when using the default alignment (usually 8 bytes). So the C compiler will in fact create a struct containing an 8-bit integer, 3 padding filler bytes and then a 32-bit float. Treating the enum as int32 in that case will only be correct if the memory was first initialized to all 0 before the (external code) filled in the values, and also only on little endian machines (Intel x86). Rolf Kalbermatter
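     A small check of this layout (a sketch; whether the enum actually shrinks to 8 bits depends on the compiler and its settings, e.g. gcc only does so with -fshort-enums):

        #include <stdio.h>
        #include <stddef.h>

        enum one_three { one = 1, two, three };  /* values all fit in 8 bits */

        struct elem {
            enum one_three elm;   /* possibly 1 byte */
            float something;      /* aligned to a 4-byte boundary */
        };

        int main(void)
        {
            printf("sizeof(enum one_three) = %zu\n", sizeof(enum one_three));
            /* with an 8-bit enum this prints 4: 1 data byte + 3 padding bytes */
            printf("offsetof(something)    = %zu\n", offsetof(struct elem, something));
            printf("sizeof(struct elem)    = %zu\n", sizeof(struct elem));
            return 0;
        }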
  4. QUOTE (george seifert @ Apr 7 2008, 07:53 AM) Yes, treating it as an array of int32 of double the size should work quite well. You can then typecast that back into an array of your cluster type, although you may have to byte swap and word swap the whole array first to correct for endianness issues. Or maybe just swap the bytes and words of the integer part. That is always something best determined by trial and error. Why it seemed to work for smaller arrays is probably because the DLL was in fact writing the first enum value into the int32 that tells LabVIEW how many elements are in the array. As such you should have seen a swapping of the float and enum in comparison to what the VB code would indicate. With smaller arrays the overwriting did not cause too bad problems, but with longer arrays it somehow set off a trap. Rolf Kalbermatter
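     To see why the DLL could clobber the element count: in memory a LabVIEW 1D array handle points to an int32 count directly followed by the element data, roughly like this C picture (a sketch; the field names are made up):

        #include <stdint.h>

        /* approximate memory layout behind a LabVIEW 1D array handle */
        typedef struct {
            int32_t dimSize;     /* element count; a DLL treating the handle as the
                                    array data itself writes its first value here */
            struct {
                int32_t elm;     /* the enum, widened to int32 */
                float   something;
            } data[1];           /* actually dimSize elements */
        } LVArrayRec, *LVArrayPtr, **LVArrayHdl;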
  5. QUOTE (PaulG. @ Apr 3 2008, 10:50 AM) No no! But it helps to unload everything you learned for C programming when starting with LabVIEW. The only ones worse off learning LabVIEW are Basic programmers. I for one started with Pascal, then learned LabVIEW and found it a godsend, and only after that learned C. And there are simply areas where C is more appropriate than LabVIEW. But I would never code a UI in anything but LabVIEW. Rolf Kalbermatter
  6. QUOTE (orko @ Apr 4 2008, 03:48 PM) Don't know that brand, but yes, come on with it :beer: Rolf Kalbermatter
  7. QUOTE (Aristos Queue @ Apr 4 2008, 02:38 PM) Actually, standard C normally uses the smallest integer that can contain the highest-valued enum. Maybe C++ changed that in favor of the int datatype. So typedef enum { zero, one, two, three }; will usually be an int8. To force a specific int size one often defines a dummy value: typedef enum { zero, one, two, three, maxsize = 66000 }; will make sure it is an int32. Rolf Kalbermatter
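     A quick way to verify the forcing trick (a sketch using the C11 _Static_assert, which postdates this post; since 66000 does not fit in 16 bits, the compiler must pick at least a 32-bit type):

        /* the dummy maxsize value forces the enum up to (at least) 32 bits */
        typedef enum { zero, one, two, three, maxsize = 66000 } forced_enum;
        _Static_assert(sizeof(forced_enum) >= 4, "enum must be at least 32 bits");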
  8. QUOTE (george seifert @ Apr 4 2008, 11:08 AM) I doubt very highly that your DLL understands LabVIEW datatypes. That is however what it is going to see if you use Adapt to Type. With that you tell LabVIEW to pass its array just as it is in memory, which will be a LabVIEW data handle and not an array data pointer. Since it is an array of structs there is no trivial way to make LabVIEW pass it as a pointer. You will have to typecast the cluster array to a byte array (selecting little endian), then pass it as an array data pointer and on return decode the byte stream. There is really no other way, other than writing a wrapper DLL in C that does that translation for you. Rolf Kalbermatter
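     For the wrapper DLL route, the idea is to export a function that takes a plain data pointer and element count, which LabVIEW can pass natively through the Call Library Node; a minimal sketch with made-up names and an assumed signature for the wrapped function:

        #include <stdint.h>

        typedef struct {
            int32_t state;   /* the enum, as a fixed-size int32 */
            float   value;
        } Element;

        /* the original DLL function being wrapped (assumed signature) */
        extern int ProcessElements(Element *data, int32_t count);

        /* exported for LabVIEW: configure the Call Library Node with an
           array data pointer (U8 array) plus an int32 element count */
        __declspec(dllexport) int32_t LV_ProcessElements(uint8_t *bytes, int32_t count)
        {
            return ProcessElements((Element *)bytes, count);
        }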
  9. QUOTE (orko @ Apr 4 2008, 03:34 PM) No, no!!! There are so many delicious cookies! Rolf Kalbermatter
  10. QUOTE (TobyD @ Apr 4 2008, 02:26 PM) It is a long shot :-). I think the problem might be more related to the fact that he is using LV 7.1 or lower according to his list, and that DS had some issues with closing sessions properly in earlier days. My memory is all fuzzy about this, and it could also have been something in the DS connection of front panel controls; I'm not sure if it was LabVIEW 6.x, 7.0 or 7.1, but there were definitely some issues. However, that's so long ago I can't remember the details anymore, especially since I never used DS myself. Also check the error cluster. It could be that DS Read returns an error despite returning data and that DS Close then does not close, which would be a bug; but it has happened in the past that some Close functions didn't execute if the error input indicated an error. Rolf Kalbermatter
  11. QUOTE (tcplomp @ Apr 2 2008, 02:22 PM) Yes, but it's a crutch, and the symptoms clearly point to a connection refnum not explicitly closed. Unloading the VI or LabVIEW altogether will close that connection, but closing it yourself explicitly is definitely the right course of action. Rolf Kalbermatter
  12. QUOTE (Justin Goeres @ Apr 2 2008, 05:47 PM) You are able to configure LabVIEW to use a different user.lib path, and upcoming versions of LabVIEW, while not going to do away with the standard LabVIEW internal user.lib, will likely add another user.lib in your user profile directory; those two will be merged on startup. Rolf Kalbermatter
  13. QUOTE (Jim Kring @ Apr 2 2008, 02:57 PM) I think your reasoning is way too general. There are functions that might benefit reasonably well from subroutine priority, but many others that will see little benefit in typical applications. The first could be some of those little helper functions such as Valid Path; the latter would be for instance things like Delete Recursive and such. It's all about whether it is a function that always takes little time to execute and is likely to be called inside loops many, many times. If not, the advantage of a little faster execution is IMHO not on par with the disadvantage of causing possible problems that might also be hard to debug, since debugging of subroutine VIs is not possible at all. In general, speed of an application is not lost in the calling overhead of VIs at all but in the type of algorithm used, and even where calling overhead can add significant performance loss, probably much less than 5% of the VIs can significantly add to the performance by reducing their calling overhead. Penalizing the other 95 to 99% of the VIs for that is not a good option for me. Rolf Kalbermatter
  14. QUOTE (pallen @ Apr 2 2008, 10:39 AM) Yes, if you run the application for hours and hours they will eventually be called millions of times, but with very, very often I meant millions of times in a short time (seconds). Anything in the context of UI should not be optimized in terms of microseconds but rather in the way it is done (a more optimal algorithm to prepare data, avoiding huge memory copies, deferring panel updates during bulk UI updates, etc.). Rolf Kalbermatter
  15. QUOTE (pallen @ Apr 2 2008, 08:25 AM) I don't think they qualify for the "very, very often called" criterion. At least not as I design them. If there is an operation that would require that functional global to be called millions of times inside a loop, I usually create a new method that does that particular operation inside the functional global instead. That should take care of that. Rolf Kalbermatter
  16. QUOTE (Jim Kring @ Apr 1 2008, 04:29 PM) Well, it's all relative. Now with LabVIEW always being multithreaded, even inside a single execution system, the negative effect of subroutine VIs is not as dramatic as it used to be in the old single-threaded LabVIEW days. At that time subroutine priority was specifically reserved for small VIs that could be executed relatively quickly. LabVIEW optimized the calling context in such a way that there was very little overhead in calling such a subVI, similar to if the VI's diagram had been directly embedded in the caller. This could lead to rather huge speed improvements, because the calling overhead and the chance for memory copies could be greatly reduced. At the same time, while a subroutine was busy NOTHING else in LabVIEW could be going on, since that subroutine exclusively blocked the one single thread LabVIEW had at that time. So if you did that to a lengthy function, LabVIEW could seemingly freeze entirely. With post-LabVIEW 5 multithreading this setting has become both less important and less harmful even for lengthy functions. Since LabVIEW uses many threads, even a blocking subroutine will not block the entire program (unless you happen to run the subroutine in the UI execution system). At the same time LabVIEW has made many memory optimization improvements, so the advantage of a VI being sort of inlined is not likely to yield a big effect there anymore. What remains is the reduced caller overhead for a subVI. So the rule of thumb would be: use subroutine priority only for very small subVIs whose execution time is rather short and that get called very, very often. For a VI that takes 1 s to execute, shaving off a microsecond of calling overhead is simply useless; but if that subVI itself only consists of a few LabVIEW primitives taking in the order of 1 microsecond to execute, adding another microsecond of calling overhead is significant. But even then, if you do not call that VI millions of times inside a loop, it is not likely to buy you much. Rolf Kalbermatter
  17. QUOTE (mattdl68 @ Apr 1 2008, 07:40 PM) This description does not sound completely right. I don't remember having had to parse the string for VID and PID. Unfortunately my sources won't help you too much since they are for a specific device. There is however enough code out there to show you how it needs to be done. Search for usbhidioc on the net. I've used a Visual C 6 version as inspiration, specifically http://www.lvr.com/hidpage.htm and there, halfway down, the Visual C++ 6 section. You won't get around installing the WinDDK from MS I'm afraid, unless the newest PSDKs come with the necessary definitions too. In that example you find the actual code to search for a HID device in usbhidiocdlg.cpp/CUsbhidiocDlg::FindTheHID(). The code in that function is in itself just standard C, but the project is in C++; see the sketch below for the essence of it. And no, I will not even consider looking into the possibility of calling these APIs directly from LabVIEW with the Call Library Node. It is very, very maybe possible, but it would be such a pain that even having to install and configure an entire C environment and learn C on the way will be less painful and time consuming than getting to the point where such a direct LabVIEW interface works reliably. Rolf Kalbermatter
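     For reference, the essence of such a FindTheHID() routine looks roughly like this (a condensed sketch, assuming the setupapi.h/hidsdi.h headers from the WinDDK and linking against setupapi.lib and hid.lib; vid/pid are your device's IDs):

        #include <windows.h>
        #include <setupapi.h>
        #include <hidsdi.h>
        #include <stdlib.h>

        HANDLE FindTheHID(USHORT vid, USHORT pid)
        {
            GUID hidGuid;
            SP_DEVICE_INTERFACE_DATA ifData = { sizeof(ifData) };
            HANDLE dev = INVALID_HANDLE_VALUE;
            HDEVINFO devInfo;
            DWORD i;

            HidD_GetHidGuid(&hidGuid);
            devInfo = SetupDiGetClassDevs(&hidGuid, NULL, NULL,
                                          DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);

            /* walk all present HID interfaces until the VID/PID matches */
            for (i = 0; SetupDiEnumDeviceInterfaces(devInfo, NULL, &hidGuid, i, &ifData); i++) {
                DWORD needed = 0;
                PSP_DEVICE_INTERFACE_DETAIL_DATA detail;

                SetupDiGetDeviceInterfaceDetail(devInfo, &ifData, NULL, 0, &needed, NULL);
                detail = (PSP_DEVICE_INTERFACE_DETAIL_DATA)malloc(needed);
                if (detail == NULL)
                    break;
                detail->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
                if (SetupDiGetDeviceInterfaceDetail(devInfo, &ifData, detail, needed, NULL, NULL)) {
                    HANDLE h = CreateFile(detail->DevicePath, GENERIC_READ | GENERIC_WRITE,
                                          FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                          OPEN_EXISTING, 0, NULL);
                    if (h != INVALID_HANDLE_VALUE) {
                        HIDD_ATTRIBUTES attrib = { sizeof(attrib) };
                        if (HidD_GetAttributes(h, &attrib)
                            && attrib.VendorID == vid && attrib.ProductID == pid)
                            dev = h;         /* found our device, keep it open */
                        else
                            CloseHandle(h);
                    }
                }
                free(detail);
                if (dev != INVALID_HANDLE_VALUE)
                    break;
            }
            SetupDiDestroyDeviceInfoList(devInfo);
            return dev;
        }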
  18. QUOTE (Götz Becker @ Apr 1 2008, 04:26 AM) That doesn't load as a project. And just looking at the subVIs themselves won't show any leaks for sure. Rolf Kalbermatter
  19. QUOTE (mattdl68 @ Mar 31 2008, 12:47 AM) For HID devices I do not think you can use MAX at all, and it doesn't make too much sense either, as you would have to implement the HID class protocol again in LabVIEW using VISA nodes. HID devices are well known to Windows and it will claim them, so VISA won't really be able to hook them if I'm not mistaken. Instead you will need to go the Windows API route as you have started out, but that is not for the faint of heart without some fair C programming knowledge. So what device is it you want to access? Because I do not think VISA USB Raw is gonna help, and the Windows API is likely at least one league too complicated for you. Even if you manage to access the Windows API for the HID device, this will not be the end of your troubles. HID itself is also very basic, with just a byte stream for read and write. How this byte stream needs to be formatted (usually binary) will be another problem to tackle, and without proper documentation from the manufacturer likely not possible. Doesn't the manufacturer have a DLL to communicate with that device already? That would reduce the problem to interfacing that DLL and getting its documentation. Rolf Kalbermatter
  20. QUOTE (neB @ Mar 31 2008, 08:45 AM) God, am I lucky to have disabled that. I have one VPN adapter, several VMware virtual networks, a wireless network and a built-in 10/100/1000 Mbit network adapter on my computer. That would probably cause nilm.exe to go completely nuts :thumbdown: Rolf Kalbermatter
  21. QUOTE (Michael_Aivaliotis @ Mar 30 2008, 05:49 AM) OK, I just checked. lmgrd.exe is the actual service that you have just disabled. nilm.exe is also part of the license manager, but I'm not exactly sure what it does. All I know is that LabVIEW has its own copy of the FlexLM license manager integrated and apparently does all the license checking directly itself through that. I'm not sure why there would ever be an nilm.exe, nor what the lmgrd.exe service would be good for other than volume license situations. I also haven't found any official information from NI about this, and as long as everything works without it, I just won't start it up. Rolf Kalbermatter
  22. QUOTE (orko @ Mar 29 2008, 07:54 PM) I'm not really sure, but I think for a single dev workstation it is not needed at all. That is of course a different story if you use a volume license or distributed license. There I would assume the license manager service is required to allow connecting to the license manager running on some central system. But yes, NILM does not really seem to do anything on my system. I have had it on manual startup since LabVIEW 7.1 came with it and never had problems with activation or whatever. All I noticed is that it sometimes creates temporary license files in the license folder when LabVIEW starts up. But it seems to work, and that is the only thing that counts for me. In your typical Windows system there are countless services that do virtually nothing. It would be nice to clean them all out, but researching that matter is work without end, and at the end of the day I need to do some work to justify my salary too. Rolf Kalbermatter
  23. QUOTE (Michael_Aivaliotis @ Mar 29 2008, 04:30 PM) Control Panel->Administrative Tools->Services. Then in there look for the NI License Manager service, select it and in the dialog select Stop. And set its startup type to Manual to prevent it from restarting at the next reboot. Rolf Kalbermatter
  24. QUOTE (Michael_Aivaliotis @ Mar 29 2008, 04:12 AM) It's the NI License Manager process! Are you sure you have no illegal LabVIEW VIs on your computer and the license manager is phoning home?? :ninja: I've had it completely shut down since LabVIEW 7.x and have never noticed that my LabVIEW licenses wouldn't work. Apparently the built-in LabVIEW license check has a fallback to read the license files directly from disk. But to know that about 1 MB of the labview.exe file is actually this built-in license manager kernel feels a bit, well... Rolf Kalbermatter
  25. QUOTE (Tomi Maila @ Mar 28 2008, 07:56 AM) OK, I'm assuming that you use the Call Library Node and call a DLL with exported standard C functions here. ActiveX and .Net DLLs are an entirely different beast and not my real specialty. Although there are two ways to locate a function in a DLL (static linking through an import library vs. dynamic linking through LoadLibrary/GetProcAddress), the rest of this story is the same for both, except that dynamic linking allows the linking application (or DLL) to specify an absolute path for loading a DLL, and that will fail the load if the DLL is not located there, even if it could be found in other default search locations (except, I believe, if a DLL with the same name is already loaded into the current process image). So when Windows gets a request to load a library (either through LoadLibrary() without an absolute path, or implicitly through the import table of a statically linked executable or DLL) it will look for that DLL in the following places and search order:
     1) a DLL already mapped into the current process with the same name
     2) the directory the current process was started from (aka where LabVIEW.exe or YourApp.exe is located)
     3) the System directory (often called System32)
     4) the Windows directory (sometimes also called WinNT)
     5) any directory present in the PATH environment variable for the current user
     Now, LabVIEW does a little more than just pass the DLL name itself: it remembers the actual (relative) path of a DLL inside a VI and first tries to load that DLL with the resulting absolute path. If that fails because the DLL is not there, it tries again with only the DLL name. That is why it can work for the application builder to put support files, including directly called DLLs, into a data subdirectory inside your application directory: LabVIEW remembers that relative location and will try to load the DLL explicitly from there first. However, there is no way short of scanning the import table of a DLL to find out if it has other dependencies, and I'm not aware of any application or programming environment that would ever go to these lengths. So while you could put your directly dependent DLL into a data subfolder inside your application, you cannot really put the dependencies this DLL has into the same folder, since for those only the Windows standard search order will be used, and that means they can't be found at all. However, by putting all those DLLs (and most probably your directly called DLLs too, for the simplicity and clarity of it) into the same folder as your executable, you can quite simply ensure that your application will always use the DLLs it was designed for without clashing with other versions of the same DLL. You could try to trick LabVIEW into explicitly loading those DLLs from a subdirectory too, by adding a dummy VI to your startup VI that references those DLLs directly without ever really calling them. That way, when your own DLL that depends on these DLLs is loaded, Windows will find them already mapped into memory and not look any further. But that can be a bit tricky, because you do need to know the exact dependency tree and start to load the VIs referencing those DLLs in the reverse order of their dependencies. Not a big problem if the dependency is as you draw it, but it can get tricky if those sub-DLLs also have dependencies on each other. Personally I have never felt the need to do such tricky stuff, but instead just throw all the DLLs into the same directory as the executable.
     Yes, it clutters that directory a little, but hey, it's quite common nowadays to have a few DLLs in the executable directory of an application. Of course, for your development machine, or when you want to distribute the VI library for development to others, things can get a bit more tricky. Since you do have a custom DLL already, you could try to make those sub-DLLs load explicitly from the same location your DLL itself is loaded from, but that would mean that you have to reference your A.DLL from inside LV_A.DLL completely dynamically, and also wrap every direct A.DLL import in your VI library through LV_A.DLL, making it call A.DLL dynamically too, to ensure that the DLLs are loaded properly before A.DLL gets loaded and Windows tries to satisfy its direct imports. Quite a hassle indeed. So here again the most simple approach would be to just drop everything into the LabVIEW root folder (since the LabVIEW.exe that starts your LabVIEW development process is located there). Alternatively you could consider telling people to put it in System32 if they want to develop in LabVIEW with your library, but you will have to document to them how to adjust the build process to make sure those DLLs get added as support files to the build and are put into the same directory as the executable itself. PS: you are not by any chance incorporating something that is using the Apache Runtime Library support? Although I didn't think that was LGPL. PPS: And no, technically speaking it is not true that there can be only one DLL of a given name in memory at once. The limitation is that only one DLL with the same name can be mapped into a specific process space. A different executable, and therefore a different process space, references its own DLL version, and if the paths do not match, two different DLLs will be loaded into memory, one for each process. Rolf Kalbermatter
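     The preloading trick described above boils down to this pattern in C terms (a sketch; the directory and the names A.dll/B.dll/SomeFunc are placeholders, with A.dll assumed to import from B.dll):

        #include <windows.h>
        #include <stdio.h>

        typedef int (__cdecl *SomeFuncPtr)(int);

        int main(void)
        {
            /* load the deepest dependency first; once it is mapped into the
               process, Windows will not search the disk for that name again */
            HMODULE hB = LoadLibraryA("C:\\MyApp\\libs\\B.dll");
            HMODULE hA = LoadLibraryA("C:\\MyApp\\libs\\A.dll");  /* imports from B.dll */
            if (hA == NULL || hB == NULL) {
                fprintf(stderr, "failed to load DLLs\n");
                return 1;
            }

            /* dynamic linking: resolve the function by name at runtime
               instead of linking against an import library */
            SomeFuncPtr func = (SomeFuncPtr)GetProcAddress(hA, "SomeFunc");
            if (func != NULL)
                printf("SomeFunc(42) = %d\n", func(42));

            FreeLibrary(hA);
            FreeLibrary(hB);
            return 0;
        }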