Everything posted by Rolf Kalbermatter
-
how to send sms using this card?
Rolf Kalbermatter replied to reemon's topic in Calling External Code
QUOTE (reemon @ Apr 13 2008, 07:01 AM) VISA is an API, not an installer, so you do not use VISA to install a modem. Your modem should come with an INF file, or better yet a real installer, that tells Windows how to control it as a COM port. Most GSM cards will install as a COM port in one way or another. Once you can see a COM port for your modem in the Windows device manager, you can start fiddling around with it. A good start is trying to connect to it through HyperTerminal. A simple AT command should then give you an OK response. If you get that far in HyperTerminal, your next step is getting to know VISA and starting to send commands to your modem through it. Most serial GSM modems will more or less adhere to the ETSI command standard, which is based on the Hayes AT command set and extended with specific commands for SMS message handling, transparent voice mode and many more things. The crux here is the "more or less", since each modem manufacturer likes to invent a few modem-specific commands that will not work, or at least not in the same way, on other modems. Rolf Kalbermatter
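For reference, a minimal C sketch (untested, Win32) of the text-mode SMS sequence from ETSI GSM 07.05 that such a modem would expect. The COM port name and phone number are placeholders, responses are not actually read back, and real code would configure the port with SetCommState first:

#include <windows.h>
#include <stdio.h>
#include <string.h>

static void sendCmd(HANDLE h, const char *cmd)
{
    DWORD written;
    WriteFile(h, cmd, (DWORD)strlen(cmd), &written, NULL);
    Sleep(500);   /* crude; real code reads until "OK" or the "> " prompt */
}

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    sendCmd(h, "AT\r");                       /* modem should answer OK   */
    sendCmd(h, "AT+CMGF=1\r");                /* select SMS text mode     */
    sendCmd(h, "AT+CMGS=\"+31612345678\"\r"); /* modem answers "> "       */
    sendCmd(h, "Hello world\x1A");            /* Ctrl-Z sends the message */
    CloseHandle(h);
    return 0;
}

In LabVIEW the same strings would simply go through VISA Write and VISA Read.
-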
Generic IP over Ethernet generator
Rolf Kalbermatter replied to billyt's topic in Remote Control, Monitoring and the Internet
QUOTE (billyt @ Apr 12 2008, 11:12 PM) Well, they are probably the kind who think anything but C programming is too high a level :-) and assembly programming is the real thing. And they might be right in certain areas. After all, you don't use a normal comfort car for off-road driving, but on the other hand an old jeep with an unsynchronized gear shift really sucks in normal street traffic, and let's be fair, how much off-road driving do most of us here do? Rolf Kalbermatter -
Optimizing loop performance
Rolf Kalbermatter replied to rpodsim's topic in Application Design & Architecture
QUOTE (JFM @ Apr 10 2008, 03:33 AM) Yes, but what else will be going on in this loop? As written in the example, without anything else, this loop will really put a heavy load on the CPU for nothing. If there is something else going on in that loop that needs to execute as often as possible, then yes, I would agree that the 0 timeout is a good idea, but not because it offloads the CPU in any way by not using events internally, but simply because you do not want the loop iteration interval to be limited by the Dequeue FIFO node. Rolf Kalbermatter -
Optimizing loop performance
Rolf Kalbermatter replied to rpodsim's topic in Application Design & Architecture
QUOTE (JFM @ Apr 10 2008, 01:00 AM) But then it does so very fast, without any other means of throttling the loop iteration. I'm sure that waiting on an event signaling that data is available would perform a lot better than checking the contents of the queue many hundreds or even thousands of times a second. But if your system doesn't have anything else to do, this would be a moot point. Rolf Kalbermatter
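The difference between the two approaches, sketched in C with pthreads (the bare-bones queue stand-in and all names are made up for illustration; in LabVIEW the queue primitives give you the blocking variant for free):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t dataAvail = PTHREAD_COND_INITIALIZER;
static int queueCount = 0;   /* stand-in for the real queue */

/* polling consumer: the equivalent of a dequeue with 0 timeout in a tight loop */
void pollLoop(void)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        bool have = queueCount > 0;
        if (have) queueCount--;
        pthread_mutex_unlock(&lock);
        if (!have) continue;   /* spins thousands of times per second */
        /* ... process one element ... */
    }
}

/* blocking consumer: sleeps until the producer signals new data */
void eventLoop(void)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (queueCount == 0)
            pthread_cond_wait(&dataAvail, &lock); /* uses no CPU while waiting */
        queueCount--;
        pthread_mutex_unlock(&lock);
        /* ... process one element ... */
    }
}
-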
This VI does not want to unlock!
Rolf Kalbermatter replied to Giseli Ramos's topic in LabVIEW General
QUOTE (Giseli Ramos @ Apr 9 2008, 06:52 AM) Well, NI could theoretically help, but for various reasons will nowadays only do so in very rare circumstances, IF and ONLY IF the VI is merely corrupted. If the VI was saved without its diagram resource, however, they can't do anything about it anymore. Something that is gone can't be recreated. It's like compiling a program and then throwing away your source code, with the difference that there are readily available tools to disassemble a compiled program and at least see the assembly code, while for LabVIEW disassembly is basically a no-no. The only people who might still be able to do something here are outfits like Seagate's, where you send in your harddrive to recover lost data. Chances would be slim though, since rewriting the file has a high chance of destroying the actual data of the previous VI version on the harddisk. Rolf Kalbermatter -
QUOTE (Fubu @ Apr 9 2008, 11:46 PM) There is probably no way around some external code interfacing through the Call Library Node, and possibly even wrapping something up in an external C wrapper DLL. Possible pointers could be libusb, an open source C library originally from Unix for communicating with USB devices, and usbhidioc, a C source code example of how to access HID devices in Windows. Searching for these two terms in Google should bring you some good pages, although not many of them with ready-made LabVIEW solutions. Rolf Kalbermatter
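As a taste of the libusb route, a rough sketch against the libusb-1.0 API (the VID/PID and the endpoint address are placeholders; note that for HID devices the operating system's own HID driver usually claims the device first, which is exactly the complication mentioned above):

#include <libusb-1.0/libusb.h>
#include <stdio.h>

int main(void)
{
    libusb_context *ctx = NULL;
    if (libusb_init(&ctx) < 0)
        return 1;
    libusb_device_handle *dev =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (!dev) { libusb_exit(ctx); return 1; }

    libusb_claim_interface(dev, 0);   /* may require detaching the kernel driver */

    unsigned char buf[64];
    int transferred = 0;
    /* 0x81 = first IN endpoint, 1000 ms timeout */
    if (libusb_interrupt_transfer(dev, 0x81, buf, sizeof buf,
                                  &transferred, 1000) == 0)
        printf("read %d bytes\n", transferred);

    libusb_release_interface(dev, 0);
    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}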
-
Pinnacle Movieboard and LabVIEW
Rolf Kalbermatter replied to tmot's topic in Machine Vision and Imaging
QUOTE (tmot @ Apr 7 2008, 10:01 AM) If it has a DirectX (DirectShow) compatible driver, you could try to download the IMAQ for USB Webcam driver from the NI site. It is free but unsupported, and although it is meant for USB webcams, the DirectX API can also be used for video frame grabber cards. I'm not sure whether NI filters the available acquisition filters to USB devices specifically, but it is at least worth a try. Failing that, I do think going with an NI card would definitely be the fastest solution in terms of time to get this working. Rolf Kalbermatter -
How to call a dll that has an ENUM definition
Rolf Kalbermatter replied to george seifert's topic in Calling External Code
QUOTE (rolfk @ Apr 7 2008, 03:30 AM) There is actually one other aspect here that is important. While C, and I do believe C++ too, will use the smallest integer that can hold the biggest enum value, there is also something called padding. This means scalar elements inside a struct will be aligned to a multiple of the element data size, or the data alignment specified through a #pragma statement or passed to the C compiler as a parameter, whichever is smaller. So in the case of the above enum type, which would result in an int8, and the following structure:

struct {
    enum one_three elm;
    float something;
};

"something" will be aligned to a 32-bit boundary by all modern C compilers when using the default alignment (usually 8 bytes). So the C compiler will in fact create a struct containing an 8-bit integer, 3 padding filler bytes, and then a 32-bit float. Treating the enum as an int32 in that case is only correct if the memory was first initialized to all 0 before the (external) code filled in the values, and also only on little endian machines (Intel x86). Rolf Kalbermatter
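A quick way to check that layout yourself; the printed numbers are compiler dependent, which is the whole point (GCC, for one, only shrinks enums when given -fshort-enums):

#include <stdio.h>
#include <stddef.h>

enum one_three { zero, one, two, three };

struct mix {
    enum one_three elm;   /* 1 byte on compilers that shrink enums */
    float something;      /* aligned to a 4-byte boundary */
};

int main(void)
{
    printf("sizeof(enum one_three) = %zu\n", sizeof(enum one_three));
    printf("offsetof(mix, something) = %zu\n", offsetof(struct mix, something));
    printf("sizeof(struct mix) = %zu\n", sizeof(struct mix));
    return 0;
}
-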
Calling a dll that returns a buffer full of data
Rolf Kalbermatter replied to george seifert's topic in Calling External Code
QUOTE (george seifert @ Apr 7 2008, 07:53 AM) Yes, treating it as an array of int32 of double the size should work quite well. You can then typecast that back into an array of your cluster type, although you may have to byte-swap and word-swap the whole array first to correct for endianess issues, or maybe just swap the bytes and words of the integer part. That is always best determined by trial and error. Why it seemed to work for smaller arrays is probably because the DLL was in fact writing the first enum value into the int32 that tells LabVIEW how many elements are in the array. As such you should have seen the float and enum swapped in comparison to what the VB code would indicate. With smaller arrays the overwriting did not cause too many problems, but with longer arrays it apparently set something off. Rolf Kalbermatter
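For one 32-bit element, the byte- and word-swapping mentioned above comes down to the following C sketch; applied together, the two swaps reverse the byte order completely (little endian <-> big endian):

#include <stdint.h>
#include <stdio.h>

static uint32_t swapBytes(uint32_t v)   /* swap the bytes in each 16-bit half */
{
    return ((v & 0x00FF00FFu) << 8) | ((v & 0xFF00FF00u) >> 8);
}

static uint32_t swapWords(uint32_t v)   /* swap the two 16-bit halves */
{
    return (v << 16) | (v >> 16);
}

int main(void)
{
    uint32_t x = 0x11223344u;
    printf("%08X -> %08X\n", x, swapWords(swapBytes(x)));   /* 44332211 */
    return 0;
}
-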
QUOTE (PaulG. @ Apr 3 2008, 10:50 AM) No, no! But it helps to unlearn everything you learned for C programming when starting with LabVIEW. The only ones who have a harder time learning LabVIEW are Basic programmers. I for one started with Pascal, then learned LabVIEW and found it a godsend, and only after that learned C. And there are simply areas where C is more appropriate than LabVIEW. But I would never code a UI in anything but LabVIEW. Rolf Kalbermatter
-
QUOTE (orko @ Apr 4 2008, 03:48 PM) Don't know that brand, but sure, bring it on :beer: Rolf Kalbermatter
-
How to call a dll that has an ENUM definition
Rolf Kalbermatter replied to george seifert's topic in Calling External Code
QUOTE (Aristos Queue @ Apr 4 2008, 02:38 PM) Actually, standard C normally uses the smallest integer that can contain the highest valued enum. Maybe C++ changed that in favor of the int datatype. So

typedef enum { zero, one, two, three } one_three;

will usually be an int8. To force a specific int size, one often defines a dummy value:

typedef enum { zero, one, two, three, maxsize = 66000 } one_three;

will make sure it is an int32. Rolf Kalbermatter
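Whether the first form really shrinks below an int depends on the compiler (GCC, for example, only does so with -fshort-enums); a quick check, with names invented here so the two variants can coexist in one file:

#include <stdio.h>

typedef enum { zero, one, two, three } small_enum;
typedef enum { zero2, one2, two2, three2, maxsize = 66000 } forced_enum;

int main(void)
{
    /* small_enum may be 1 byte or a full int, depending on the compiler;
       forced_enum must be able to hold 66000 and so needs 32 bits */
    printf("sizeof(small_enum)  = %zu\n", sizeof(small_enum));
    printf("sizeof(forced_enum) = %zu\n", sizeof(forced_enum));
    return 0;
}
-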
Calling a dll that returns a buffer full of data
Rolf Kalbermatter replied to george seifert's topic in Calling External Code
QUOTE (george seifert @ Apr 4 2008, 11:08 AM) I doubt very highly that your DLL understands LabVIEW datatypes. That is however what it is going to see if you use Adapt to Type. With that you tell LabVIEW to pass its array just as it is in memory, which will be a LabVIEW data handle and not an array data pointer. Since it is an array of structs, there is no trivial way to make LabVIEW pass it as a pointer. You will have to typecast the cluster array to a byte array (selecting little endian), then pass it as an array data pointer, and on return decode the bytestream. There is really no way around that, other than writing a C wrapper DLL that does the translation for you. Rolf Kalbermatter
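To see why a handle is not what a plain C API expects, roughly the memory layout involved (a sketch only; the cluster fields are assumptions based on this thread):

#include <stdint.h>

typedef struct {
    int32_t enumVal;   /* the cluster's enum, assuming 32 bit */
    float value;       /* the cluster's float */
} elt;

/* A LabVIEW 1-D array is a handle: a pointer to a pointer to a block that
   starts with the element count, followed by the data. */
typedef struct {
    int32_t dimSize;   /* element count, not part of a plain C array */
    elt data[1];       /* really dimSize elements */
} LVArray, **LVArrayHdl;

/* A DLL declared as taking "elt *buffer" but handed an LVArrayHdl will
   treat the handle itself as the data address -- hence the corruption. */
-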
QUOTE (orko @ Apr 4 2008, 03:34 PM) No, no!!! There are so many delicious cookies! Rolf Kalbermatter
-
QUOTE (TobyD @ Apr 4 2008, 02:26 PM) It is a long shot :-). I think the problem is more likely related to the fact that he is using LV 7.1 or lower according to his list, and that DS had some issues with closing sessions properly in earlier days. My memory is all fuzzy about this; it could also have been something in the DS connection of front panel controls, and I'm not sure if it was LabVIEW 6.x, 7.0 or 7.1, but there were definitely some issues. However, that's so long ago that I can't remember the details anymore, especially since I never used DS myself. Also check the error cluster. It could be that DS Read returns an error despite returning data and that DS Close then does not close, which would be a bug, but it has happened in the past that some Close functions didn't execute if the error input indicated an error. Rolf Kalbermatter
-
QUOTE (tcplomp @ Apr 2 2008, 02:22 PM) Yes, but it's a crutch, and the symptoms clearly point to a connection refnum not explicitly closed. Unloading the VI or LabVIEW altogether will close that connection, but closing it yourself explicitly is definitely the right course of action. Rolf Kalbermatter
-
QUOTE (Justin Goeres @ Apr 2 2008, 05:47 PM) You are able to configure LabVIEW to use a different user.lib path, and upcoming versions of LabVIEW, while not doing away with the standard user.lib inside the LabVIEW directory, will likely add another user.lib in your user profile directory; those two will be merged on startup. Rolf Kalbermatter
-
When do you use Subroutine priority?
Rolf Kalbermatter replied to Jim Kring's topic in Application Design & Architecture
QUOTE (Jim Kring @ Apr 2 2008, 02:57 PM) I think your reasoning is way too general. There are functions that might benefit reasonably well from subroutine priority, but many others that will see little benefit in typical applications. The first could be some of those little helper functions such as Valid Path; the latter would be things like Delete Recursive and such. It's all about whether it is a function that always takes little time to execute and is likely to be called inside loops many, many times. If not, the advantage of slightly faster execution is IMHO not on par with the disadvantage of causing possible problems that might also be hard to debug, since debugging of subroutine VIs is not possible at all. In general, the speed of an application is not lost in the calling overhead of VIs at all, but in the type of algorithm used, and even though the calling overhead of VIs can add significant performance loss, it is probably much less than 5% of the VIs that could gain significant performance by reducing their calling overhead. Penalizing the other 95 to 99% of VIs for that is not a good option for me. Rolf Kalbermatter -
When do you use Subroutine priority?
Rolf Kalbermatter replied to Jim Kring's topic in Application Design & Architecture
QUOTE (pallen @ Apr 2 2008, 10:39 AM) Yes, if you run the application for hours and hours they will eventually be called millions of times, but with "very, very often" I meant millions of times in a short time (seconds). Anything in the context of the UI should not be optimized in terms of microseconds, but rather in the way it is done (a more optimal algorithm to prepare data, avoiding huge memory copies, deferring panel updates during bulk UI updates, etc.). Rolf Kalbermatter -
When do you use Subroutine priority?
Rolf Kalbermatter replied to Jim Kring's topic in Application Design & Architecture
QUOTE (pallen @ Apr 2 2008, 08:25 AM) I don't think they qualify for the "very, very often called" category, at least not as I design them. If there is an operation that would require that functional global to be called millions of times inside a loop, I usually create a new method that does that particular operation inside the functional global instead. That takes care of that. Rolf Kalbermatter -
When do you use Subroutine priority?
Rolf Kalbermatter replied to Jim Kring's topic in Application Design & Architecture
QUOTE (Jim Kring @ Apr 1 2008, 04:29 PM) Well, it's all relative. Now that LabVIEW is always multithreading, even inside a single execution system, the negative effect of subroutine VIs is not as dramatic as it used to be in the old single-threaded LabVIEW days. At that time, subroutine priority was specifically reserved for small VIs that could be executed relatively quickly. LabVIEW optimized the calling context in such a way that there was very little overhead in calling such a subVI, similar to if the VI's diagram had been directly embedded in the caller. This could lead to rather huge speed improvements, because the calling overhead and the chance for memory copies could be greatly reduced. At the same time, while a subroutine was busy NOTHING else in LabVIEW could be going on, since that subroutine exclusively blocked the one single thread LabVIEW had at the time. So if you applied this setting to a lengthy function, LabVIEW could seemingly freeze entirely. With post-LabVIEW 5 multithreading, this setting is both less important and has a less severe impact even for lengthy functions. Since LabVIEW uses many threads, even a blocking subroutine will not block the entire program (unless you happen to run the subroutine in the UI execution system). At the same time, LabVIEW has made many memory optimization improvements, so the advantage of a VI being sort of inlined is not likely to yield a big effect there anymore. What remains is the reduced calling overhead for a subVI. So the rule of thumb would be: use subroutine priority only for very small subVIs whose execution time is rather short and that get called very, very often. For a VI that takes 1 s to execute, shaving off a microsecond of calling overhead is simply useless; but if that subVI itself consists of only a few LabVIEW primitives taking in the order of 1 microsecond to execute, adding another microsecond of calling overhead is significant. Even then, if you do not call that VI millions of times inside a loop, it is not likely to buy you much. Rolf Kalbermatter -
QUOTE (mattdl68 @ Apr 1 2008, 07:40 PM) This description does not sound completely right. I don't remember having had to parse the string for the VID and PID. Unfortunately my sources won't help you too much since they are for a specific device, but there is enough code there to show you how it needs to be done. Search for usbhidioc on the net; I've used a Visual C 6 version as inspiration, specifically http://www.lvr.com/hidpage.htm, and there, halfway down, the Visual C++ 6 section. You won't get around installing the WinDDK from MS, I'm afraid, unless the newest PSDKs come with the necessary definitions too. In that example you will find the actual code to search for a HID device in usbhidiocdlg.cpp/CUsbhidiocDlg::FindTheHID(). The code in that function is in itself just standard C, but the project is in C++. And no, I will not even consider looking into the possibility of calling these APIs directly from LabVIEW with the Call Library Node. It is very, very maybe possible, but it would be such a pain that even installing and configuring an entire C environment and learning C would not be less painful and time consuming than getting to the point where such a direct LabVIEW interface would work reliably. Rolf Kalbermatter
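For orientation, the FindTheHID() flow condensed into plain C (Win32, headers from the WinDDK, link against setupapi.lib and hid.lib; error handling trimmed). Note that HidD_GetAttributes delivers the VID and PID directly, with no string parsing:

#include <windows.h>
#include <setupapi.h>
#include <hidsdi.h>

HANDLE findHid(USHORT vid, USHORT pid)
{
    GUID hidGuid;
    HidD_GetHidGuid(&hidGuid);
    HDEVINFO devs = SetupDiGetClassDevs(&hidGuid, NULL, NULL,
                                        DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
    SP_DEVICE_INTERFACE_DATA ifData = { sizeof ifData };
    for (DWORD i = 0;
         SetupDiEnumDeviceInterfaces(devs, NULL, &hidGuid, i, &ifData); i++) {
        BYTE buf[1024];
        PSP_DEVICE_INTERFACE_DETAIL_DATA detail = (void *)buf;
        detail->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
        if (!SetupDiGetDeviceInterfaceDetail(devs, &ifData, detail,
                                             sizeof buf, NULL, NULL))
            continue;
        HANDLE h = CreateFile(detail->DevicePath, GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            continue;
        HIDD_ATTRIBUTES attr = { sizeof attr };
        if (HidD_GetAttributes(h, &attr) &&
            attr.VendorID == vid && attr.ProductID == pid) {
            SetupDiDestroyDeviceInfoList(devs);
            return h;   /* read/write HID reports with ReadFile/WriteFile */
        }
        CloseHandle(h);
    }
    SetupDiDestroyDeviceInfoList(devs);
    return INVALID_HANDLE_VALUE;
}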
-
When does the MemoryManager release memory?
Rolf Kalbermatter replied to Götz Becker's topic in LabVIEW General
QUOTE (Götz Becker @ Apr 1 2008, 04:26 AM) That doesn't load as a project. And just looking at the subVIs by themselves won't show any leaks for sure. Rolf Kalbermatter -
QUOTE (mattdl68 @ Mar 31 2008, 12:47 AM) For HID devices I do not think you can use MAX at all, and it wouldn't make too much sense either, as you would have to implement the HID class protocol again in LabVIEW using VISA nodes. HID devices are well known to Windows and it will claim them, and VISA won't really be able to hook them if I'm not mistaken. Instead you will need to go the Windows API route as you have started out, but that is not for the faint of heart without some fair C programming knowledge. So what device is it you want to access? Because I do not think VISA USB Raw is going to help, and the Windows API is likely at least one league too complicated for you. Even if you manage to access the Windows API for the HID device, this will not be the end of your troubles. HID itself is also very basic, with just a bytestream for read and write. How this bytestream needs to be formatted (usually binary) will be another problem to tackle, and without proper documentation from the manufacturer it is likely not solvable. Doesn't the manufacturer have a DLL to communicate with that device already? That would reduce the problem to interfacing that DLL and getting its documentation. Rolf Kalbermatter
-
QUOTE (neB @ Mar 31 2008, 08:45 AM) God, am I lucky to have disabled that. I have one VPN adapter, several VMware virtual networks, a wireless network and a built-in 10/100/1000 Mbit network port on my computer. That would probably cause nilm.exe to go completely nuts :thumbdown: Rolf Kalbermatter