
Rolf Kalbermatter

Members
  • Posts

    3,918
  • Joined

  • Last visited

  • Days Won

    271

Everything posted by Rolf Kalbermatter

  1. Modems usually don't have any tone detection built in. They don't need it, as tone detection is done by the exchange system of the network provider, which routes the connection based on that information to the other side (in this case also a modem). By the time the modem picks up the line, the dialing from the remote side has already finished. NI had an example of a touch tone detector using their DAQ cards and a LabVIEW program. As for detecting the ring signal, that is an option you have to enable in the modem by sending it a command. Another command enables auto pickup by the modem on the ring signal, and you can usually set the number of ring tones the modem should wait before automatically picking up the line. Which commands to use you can find in the documentation for your modem. The modem will also activate the RI line on the serial port, and using VISA you can regularly poll that signal to see if the ring signal has been detected too (see the sketch below). When set to auto pickup, the modem will attempt to establish a connection with the remote side (if that is also a modem) and inform you of this by sending strings over the serial line. Rolf Kalbermatter
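     A minimal C sketch of the approach described above, using the VISA C API to enable auto-answer with the standard Hayes ATS0 command and to poll the Ring Indicator line. The serial resource name, the ring count, the poll loop and the error handling are assumptions for illustration only; check your modem's manual for the exact command set.

     /* Hedged sketch: enable modem auto-answer and poll the RI line via NI-VISA. */
     #include <stdio.h>
     #include "visa.h"

     int main(void)
     {
         ViSession rm, port;
         ViUInt32 written;

         if (viOpenDefaultRM(&rm) < VI_SUCCESS)
             return 1;
         if (viOpen(rm, (ViRsrc)"ASRL1::INSTR", VI_NULL, VI_NULL, &port) < VI_SUCCESS) {
             viClose(rm);
             return 1;
         }

         /* Hayes command: answer automatically after 2 rings (consult the modem manual). */
         viWrite(port, (ViBuf)"ATS0=2\r", 7, &written);

         /* Poll the Ring Indicator line; VI_ATTR_ASRL_RI_STATE reads its current state. */
         for (int i = 0; i < 100; i++) {
             ViInt16 state = 0;
             viGetAttribute(port, VI_ATTR_ASRL_RI_STATE, &state);
             if (state == VI_STATE_ASSERTED) {
                 printf("Ring detected\n");
                 break;
             }
             /* a short delay between polls is omitted for brevity */
         }

         viClose(port);
         viClose(rm);
         return 0;
     }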
  2. I'm afraid you don't. And thinking about it, I can see why. How would you define the different possible events you could set? Of course you could do it with strings, but that is very error prone and hard to implement in the scripting engine too, as it would need a state-dependent parser (the names of the controls change depending on the actual VI) and then some more. It is likely that there will be something of that sort in a future version, but at the moment it would seem you can't do that. Rolf Kalbermatter
  3. As far as I can tell, these functions do absolutely nothing in current LabVIEW versions. I'm not sure if they are remnants of the old-style memory manager in Windows 3.1 or if they were plans to add functionality which never made it into the code. Basically, many of the memory management ideas in LabVIEW are not so interesting with modern OSes, but they were absolutely mandatory when LabVIEW was running on old MacOS and on the 32-bit DOS extender under 16-bit Windows 3.1. While some of this got removed later on as support for those platforms was dropped, the fundamental architecture couldn't be changed without breaking lots and lots of already existing applications. Rolf Kalbermatter
  4. This won't work. Some of the functionality needed for this is not available in LabVIEW 6.1. More precisely, it is there but returns an "unimplemented" error code. Rolf Kalbermatter
  5. 1) I don't think LabVIEW has a hard 1GB limit. But it has its own memory manager layer above the OS memory functions and works with two so-called memory zones from which it allocates memory. The DS (data space) zone is the memory LabVIEW uses for all diagram data, and the AZ (application zone) is used for internal structures and variables. What is most probably the problem here is that the available memory is split into these two zones at startup, and when you try to create your array, LabVIEW's memory manager can't find a big enough block of free contiguous memory in the DS heap for it. 2) There is probably not much you can do to influence the way LabVIEW manages its internal memory allocation. 3) This is a possibility, but you can't allocate memory in a DLL using standard OS memory allocators and hand that memory to LabVIEW to work on as if it were its own data. LabVIEW can only work with memory allocated through its DSNewPtr/Handle functions. What you could do, though, is to not only implement the allocation and deallocation in your external DLL but also some accessor functions (see the sketch below). Still, what you want to do is basically at the outermost limit of what Windows can allow any application to do, and that assumes a machine which is not doing anything else, does not have hundreds of services running, no background tasks, a lean OS as much as possible (no unnecessary drivers and such) and, last but not least, very detailed control of your memory, something LabVIEW is not really giving you easily. Even if your application were entirely written in C and you were an expert in memory handling, you would basically scratch the absolute limits of 32-bit Windows. Under 32-bit Linux you would have a little more leeway, as there you can configure the kernel to allow an application control over up to 3GB of memory, but not every application can handle that (those that assume signed 32-bit integers for their address offsets instead of unsigned integers, and I have reasons to believe that LabVIEW might trip over this too). I do think that the 1.8 GB limit you see in your C application is actually not imposed by the OS itself. The OS is mapped into the upper 2GB of the address space of an application, if I remember correctly, but your C application also needs some space for management of itself. Rolf Kalbermatter
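     A minimal C sketch of the accessor-DLL idea mentioned under 3): the DLL owns a large buffer allocated with the OS allocator, and LabVIEW only ever sees scalar values through small exported functions called via the Call Library Function Node. All function and variable names here are made-up placeholders.

     #include <stdlib.h>

     static double *g_buffer = NULL;
     static size_t  g_count  = 0;

     /* Release the buffer (safe to call when nothing is allocated). */
     __declspec(dllexport) void BufFree(void)
     {
         free(g_buffer);
         g_buffer = NULL;
         g_count  = 0;
     }

     /* Allocate a buffer of 'count' doubles; returns 0 on success, -1 on failure. */
     __declspec(dllexport) int BufAllocate(size_t count)
     {
         BufFree();
         g_buffer = (double *)calloc(count, sizeof(double));
         if (g_buffer == NULL)
             return -1;
         g_count = count;
         return 0;
     }

     /* Element accessors: LabVIEW never touches the buffer memory directly. */
     __declspec(dllexport) int BufWrite(size_t index, double value)
     {
         if (index >= g_count)
             return -1;
         g_buffer[index] = value;
         return 0;
     }

     __declspec(dllexport) int BufRead(size_t index, double *value)
     {
         if (index >= g_count)
             return -1;
         *value = g_buffer[index];
         return 0;
     }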
  6. This will create trouble no matter what. A USB device without vendor ID and product ID can't be enumerated by the USB subsystem and consequently won't be visible at all. There is really no way you can trick LabVIEW or other software into seeing such a device without completely writing a kernel device driver to replace the OS-provided USB handling. And writing kernel device drivers is a task you for sure don't want to get into. Read up on the USB spec and what is necessary on your embedded controller to properly implement basic USB handling. Most embedded controllers with a built-in USB port come with example source code showing how to implement some kind of proper USB device class. For not-too-fast communication, emulating a HID (Human Interface Device) interface is usually the simplest method, since you won't need to implement any device driver on the OS side. For faster communication you may need to resort to a raw USB data stream device, and in that case you either need to write a device driver on the computer side or use VISA to create a corresponding VISA device interface. For raw devices it won't be trivial, as there is much less example source code for embedded controller firmware to do this, and you also need to fiddle on the computer side with the device interface programming. For VISA there are a few interesting application and technical notes on www.ni.com about how to go about this. Rolf Kalbermatter
  7. The newest LVZIP package on OpenG, which is to be released shortly, does contain the CRC32 algorithm and makes it available as a user-accessible function. Its implementation is in the underlying shared library and is used to calculate the CRC for the generated ZIP files. Currently it already works for Windows and Linux x86, and we are waiting for a compilation of the shared library for Mac OS X, which should be done shortly, at which time the package will be officially released. LabVIEW for Windows and Linux users familiar with SourceForge can get the current CVS version from LVZIP @ SourceForge in the "source" directory to check it out. Rolf Kalbermatter
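     For reference, this is the standard reflected CRC-32 that the ZIP format uses (polynomial 0xEDB88320, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF); a small bitwise C sketch, not the actual OpenG/zlib source.

     #include <stdint.h>
     #include <stddef.h>

     /* Bitwise CRC-32 as used by ZIP/zlib: reflected polynomial 0xEDB88320. */
     uint32_t crc32_zip(const uint8_t *data, size_t len)
     {
         uint32_t crc = 0xFFFFFFFFu;
         for (size_t i = 0; i < len; i++) {
             crc ^= data[i];
             for (int bit = 0; bit < 8; bit++) {
                 if (crc & 1)
                     crc = (crc >> 1) ^ 0xEDB88320u;
                 else
                     crc >>= 1;
             }
         }
         return crc ^ 0xFFFFFFFFu;
     }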
  8. Just as Michael said, if it is a USB memory stick, Windows will already install a default driver for it to make it appear as an additional drive. VISA can only access USB devices for which no other driver has been installed yet. If any other driver has already claimed a device, VISA backs off, and rightly so, as accessing hardware from two different drivers at the same time is asking for big trouble. Rolf Kalbermatter
  9. Oops, this was meant to be a reply to the previous message. Rolf Kalbermatter
  10. That seems a little strange. Also, you should consider that HyperTerminal actually adds a carriage return/line feed automatically to every line (after all, you pressed the return key to tell it to send the string, and the return key, at least under DOS/Windows, is equivalent to carriage return + line feed). Rolf Kalbermatter
  11. Basically, LabVIEW can handle hundreds of loops in parallel, and does this with amazing grace. The only thing you have to watch out for is making sure that they are not free-running, meaning that in each of them there is at some point an asynchronous function to limit its speed to what is necessary for the task. Asynchronous nodes can be a number of different ones, the obvious ones being "Wait ms", "Wait Until Next ms Multiple", and "Wait on Occurrence", but also the event structure itself. VISA or TCP functions and other ones with a timeout input can be seen as asynchronous too in most cases; sometimes it is an option you need to enable on them (VISA). The only reason not to use too many loops is that you need to somehow manage them in your application. They need to be started at some point, maybe you need some synchronization at certain points even though they run completely asynchronously for the rest, and, last but not least, they all need to be told to stop gracefully somehow when the user decides to close the application. This adds overhead to your programming and also makes the application usually more difficult to understand, and with that in most cases also somewhat (and sometimes a lot) more difficult to debug. An architecture I have found to be very powerful for multi-loop applications is to have each loop use its own queue as command input. This queue is polled inside the loop and decides the next step to execute in its case structure, really resembling a normal state machine (see the sketch below for the pattern in textual form). With some utility VIs you write, you can then send specific commands to a specific loop/state machine from anywhere in your application. You need to be careful, however, to design the loops and their functionality in advance and remember to adhere to this design at all times. Once you start to mix functionality between loops in the heat of your development work, you can really end up with an application even you can't understand yourself anymore, not to mention debugging and maintaining it later on, and even worse, having someone else have to debug it! Rolf Kalbermatter
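     The queued command loop described above, transcribed into a C analogy (LabVIEW diagrams can't be shown as text). The command names and the one-slot queue are invented purely to illustrate the pattern: one loop blocks on its own command queue and dispatches in a switch, exactly like a case structure driven as a state machine.

     #include <pthread.h>
     #include <stdio.h>

     /* Hypothetical commands one loop/state machine understands. */
     typedef enum { CMD_IDLE, CMD_ACQUIRE, CMD_STOP } Command;

     /* A minimal thread-safe one-slot command queue (a real one would hold many items). */
     typedef struct {
         pthread_mutex_t lock;
         pthread_cond_t  ready;   /* signalled when a command is waiting */
         pthread_cond_t  taken;   /* signalled when the command was consumed */
         Command         cmd;
         int             pending;
     } CmdQueue;

     static void queue_send(CmdQueue *q, Command c)
     {
         pthread_mutex_lock(&q->lock);
         while (q->pending)
             pthread_cond_wait(&q->taken, &q->lock);
         q->cmd = c;
         q->pending = 1;
         pthread_cond_signal(&q->ready);
         pthread_mutex_unlock(&q->lock);
     }

     static Command queue_receive(CmdQueue *q)
     {
         pthread_mutex_lock(&q->lock);
         while (!q->pending)
             pthread_cond_wait(&q->ready, &q->lock);   /* blocks without burning CPU */
         Command c = q->cmd;
         q->pending = 0;
         pthread_cond_signal(&q->taken);
         pthread_mutex_unlock(&q->lock);
         return c;
     }

     /* One "loop": receive a command and dispatch on it, like a case structure. */
     static void *worker_loop(void *arg)
     {
         CmdQueue *q = (CmdQueue *)arg;
         for (;;) {
             switch (queue_receive(q)) {
             case CMD_ACQUIRE: printf("acquiring...\n"); break;
             case CMD_IDLE:    printf("idle\n");         break;
             case CMD_STOP:    printf("stopping\n");     return NULL;
             }
         }
     }

     int main(void)
     {
         CmdQueue q = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                        PTHREAD_COND_INITIALIZER, CMD_IDLE, 0 };
         pthread_t t;
         pthread_create(&t, NULL, worker_loop, &q);
         queue_send(&q, CMD_ACQUIRE);   /* any part of the application can send commands */
         queue_send(&q, CMD_STOP);      /* tell the loop to stop gracefully */
         pthread_join(&t, NULL);
         return 0;
     }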
  12. And it clutters those popup menus with so many items that working normally with them in LabVIEW is almost impossible. Rolf Kalbermatter
  13. You seem to think we have millions of years on our hands ;-). Honestly, just do an ASCII string search on the LabVIEW executable. Unix has nice tools for that, such as grep! Rolf Kalbermatter
  14. It is nice to look at what you get by this if you have a lot of time on your hands! I haven't really found many reasons to actually use it yet, especially because using it in production apps might not be such a good idea. As it is all about undocumented stuff, NI is free to change this functionality at any time, by changing data types or behaviour, or by removing the functionality for whatever reason, and it won't be mentioned in the upgrade notes at all, so you can end up with some bad surprises when you need to upgrade your app to the next LabVIEW version. Rolf Kalbermatter
  15. No, it isn't from a VI without password. I created it myself. Is that legit? Me thinks so!
  16. If the C code has endianness problems in itself, I wouldn't trust it at all. It would indicate that the code was developed by trial and error rather than by clearly understanding what the algorithm should do. Rolf Kalbermatter
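     For illustration, a byte-order-independent way to read a 32-bit big-endian value in C. Code written this way gives the same result on any host, whereas casting a byte pointer to a wider integer type is where endianness bugs typically creep in.

     #include <stdint.h>

     /* Assemble a big-endian 32-bit value from individual bytes.
        This works identically on little- and big-endian hosts. */
     uint32_t read_be32(const uint8_t *p)
     {
         return ((uint32_t)p[0] << 24) |
                ((uint32_t)p[1] << 16) |
                ((uint32_t)p[2] <<  8) |
                 (uint32_t)p[3];
     }

     /* The endianness-dependent (and therefore suspect) alternative:
        uint32_t v = *(const uint32_t *)p;   -- the result depends on the host CPU */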
  17. Possibly one more CRC algorithm is in the lvzip package in the OpenG Toolkit. It is used to calculate a 16-bit CCITT CRC for the implementation of the MacBinary format. I'm not sure about its correctness in terms of CRC theory, but it seems to do what the MacBinary format requires, whatever that is. Other CRC algorithms might be found in the Info-LabVIEW archives: http://www.info-labview.org/the-archives Rolf Kalbermatter
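     For comparison, the usual bitwise form of the 16-bit CCITT CRC (polynomial 0x1021, processed MSB first). MacBinary is commonly described as using this with an initial value of 0x0000, but treat that detail as an assumption and check it against the format description.

     #include <stdint.h>
     #include <stddef.h>

     /* CRC-16/CCITT, polynomial 0x1021, MSB first.
        Pass 0x0000 as the initial crc here; other CCITT variants start at 0xFFFF. */
     uint16_t crc16_ccitt(const uint8_t *data, size_t len, uint16_t crc)
     {
         for (size_t i = 0; i < len; i++) {
             crc ^= (uint16_t)data[i] << 8;
             for (int bit = 0; bit < 8; bit++) {
                 if (crc & 0x8000)
                     crc = (uint16_t)((crc << 1) ^ 0x1021);
                 else
                     crc <<= 1;
             }
         }
         return crc;
     }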
  18. Well, display of a number in a myriad of formats is just that, display only. The number itself does not change in memory at all. So what can you do? 1) Instead of displaying the byte array as a string configured to show hex format, you could display the byte array directly, click on the numeric in the array control and select "Visible Items->Radix" from the pop-up menu. Then click on the d that appears and select the numeric format you want to see. This changes how you see the number in the control but does nothing to the numeric value itself. 2) Wire the byte array into a for loop with autoindexing enabled and use the appropriate formatting function, either Format Into String with %d or %x as format specifier, or one of the String Conversion functions such as Number To Decimal/Hexadecimal/Octal String. Rolf Kalbermatter
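     The same point in C terms, purely as an analogy (the LabVIEW steps above can't be shown as text): the value in memory stays identical, only the textual representation chosen at display time differs.

     #include <stdio.h>

     int main(void)
     {
         unsigned int value = 0xA5;   /* one byte of the array */

         /* Decimal, hexadecimal and octal are just different renderings
            of the same stored value; the value itself never changes. */
         printf("decimal:     %u\n", value);   /* 165 */
         printf("hexadecimal: %X\n", value);   /* A5  */
         printf("octal:       %o\n", value);   /* 245 */
         return 0;
     }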
  19. I think they need a little more fine tuning, at least if NI doesn't drop a few platforms before 8.0. For instance, Unix alone is a bit of a broad selector: not everything that works on Linux will be portable to Solaris, to give one example. And the attempt to load the DLL on a Mac and similar issues should also be eliminated. Rolf Kalbermatter
  20. Your question is very unclear. LabVIEW itself is written in standard C, and most new functionality since LabVIEW 6.0 has been written in C++. Other than certain paradigms being similar to how they are in C, there is no direct relation between the C programming in which LabVIEW is developed and the LabVIEW programming language you are using as a LabVIEW user. If you refer to the scripting features, which are not yet officially released but discussed quite a lot here, that is not a language in itself, and the term scripting is IMO rather misleading here. It is an interface exposed through VI Server which gives the user access to the internal LabVIEW object hierarchy. As such it gives a user quite some possibilities, but the LabVIEW object hierarchy is very involved and nested, and programming through this "scripting" interface gets messy and involved very fast. This is probably one of the main reasons the scripting feature hasn't been released to the public (and one of the first complaints of most people trying to get into that scripting). Rolf Kalbermatter
  21. This should help. Rolf Kalbermatter Download File: post-349-1109367188.vi
  22. With LabVIEW 7.0 this is basically no problem. The functions to deal with .ico files have been available in LabVIEW since about 6.0. Check out vi.lib/platform/icon.llb. Those are the same functions used by the application builder to read .ico files as well as to replace icon resources in the built executable. In LabVIEW 7.0 you also have a VI Server method to retrieve the icon of a VI. Together these two things are all that is needed. There are, however, a few fundamental problems. The function to replace icon resource data works directly on the executable image (well, really on the lvapp.lib file, which is an executable stub that is prepended to the runtime VI library, locates the correct runtime system and hands the top-level VI in that library to the runtime system). As such it can only replace already existing icon resources, as doing otherwise would require relocating the resource table and its pointers, an operation which is very involved and error prone. Windows itself doesn't have documented API functions to store resources into an executable image, as this is functionality not considered necessary for normal applications. lvapp.lib contains only 16-color and 2-color icons in the sizes 16*16 and 32*32. Wanting to have other icons would mean first adding those resolutions and sizes to lvapp.lib and improving the icon functions in icon.llb to properly deal with those extra resolutions. This is not really difficult to do. A different problem is that LabVIEW icons are always 32*32 pixels, whereas Windows really needs 16*16 pixel icons too, for displaying in the top left corner of each application window as well as in detail views. Rolf Kalbermatter
  23. Excluding very old LabVIEW versions, you can assume that the first 16 bytes of a VI are always the same. In fact, any LabVIEW resource file has the same 16-byte structure, with 4 out of those 16 bytes identifying the type of file:

      52 53 52 43   RSRC
      0D 0A 00 03   <version number> ; this value since about LabVIEW 3
      4C 56 49 4E   LVIN (for a VI; other file types use LVCC or LVAR here)
      4C 42 56 57   LBVW

      Anybody recognizing some resemblance to the Macintosh file type resource here? ;-)
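     A small C sketch of checking that header, assuming the layout quoted above (the "RSRC" magic, four version bytes, a 4-byte file type, and the 4-byte "LBVW" marker); this reflects the observation in the post, not official documentation.

     #include <stdio.h>
     #include <string.h>

     /* Returns 1 if the first 16 bytes look like a LabVIEW VI, 0 otherwise. */
     int looks_like_vi(const char *path)
     {
         unsigned char hdr[16];
         FILE *f = fopen(path, "rb");
         if (f == NULL)
             return 0;
         size_t n = fread(hdr, 1, sizeof hdr, f);
         fclose(f);
         if (n != sizeof hdr)
             return 0;

         /* "RSRC" magic, file type "LVIN" (a VI), followed by "LBVW" */
         return memcmp(hdr,      "RSRC", 4) == 0 &&
                memcmp(hdr + 8,  "LVIN", 4) == 0 &&
                memcmp(hdr + 12, "LBVW", 4) == 0;
     }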
  24. Or you could simply specify a range. Works for strings too! "HELLO ".."HELLO!"
  25. Callbacks in LabVIEW itself are, although possible since 7.0, indeed an anachronism. But there are situations where it would probably make sense to use them. Callbacks in low-level drivers are an entirely different issue. They are one way to allow an application to use asynchronous operations without having to rely on interrupts or such things, which in modern OSes are out of reach of user applications anyhow. For cooperative multitasking systems this is basically the only way to do asynchronous operations without directly using interrupts or loading the CPU with lots of polling. Another possibility to handle asynchronous operations on multitasking/multithreading systems is to use events; LabVIEW occurrences are in fact just that. Even though LabVIEW wasn't a real multithreading system from the beginning, for the purpose of its internal diagram scheduling it came as close as it could get to real multithreading. Asynchronous operations are indeed inherently more difficult to understand and handle correctly in most cases. Especially in LabVIEW's dataflow world they sometimes seem to mess up the clear and proper architecture of a dataflow-driven system. But they can make the difference between a slow and sluggish execution, where each operation has to wait for the previous one to finish, and a fast system where multiple things seem to happen simultaneously while a driver waits for data to arrive. With more and more real multithreading inherent in LabVIEW this has become less important, but in my view it is a very good and efficient idea to use asynchronous operations of low-level drivers if they are available. The way I usually ended up doing that in the past is translating the low-level callback or system event into a LabVIEW occurrence in the intermediate CIN or shared library (see the sketch below). Of course such VI drivers are not always very simple to use, and synchronous operations should be provided for the not-so-demanding average user. They can even be based on the low-level asynchronous interface functions if done right. But as long as you stay on the LabVIEW diagram level only, callback approaches seem to me in most cases an unnecessary complication of the design. As you have properly pointed out, having a separate loop in a LabVIEW diagram handling such long-lasting operations is almost always enough. That is not to say that Jim's solution is bad. He is in fact using this feature not strictly as a callback but more like a startup of separate daemons for multiple instances of the same task, a technique very common in the Unix world. In that respect it is a very neat solution to a problem not easily solvable in other ways in LabVIEW.
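     A minimal C sketch of the shared-library technique mentioned above: a driver callback is translated into a LabVIEW occurrence that a diagram can wait on with Wait on Occurrence. Occur() and the Occurrence type come from LabVIEW's extcode.h manager interface; the driver-side callback and registration function are made-up placeholders.

     #include "extcode.h"

     static Occurrence gDataReady;   /* set from the diagram via SetOccurrence() below */

     /* Called from LabVIEW (Call Library Function Node) with an occurrence refnum
        created by the Generate Occurrence node. */
     MgErr SetOccurrence(Occurrence occ)
     {
         gDataReady = occ;
         return mgNoErr;
     }

     /* Hypothetical callback the low-level driver invokes when data has arrived.
        All it does is fire the occurrence; the diagram waits on it with
        Wait on Occurrence and then fetches the data through a normal synchronous call. */
     static void DataReadyCallback(void *userData)
     {
         (void)userData;
         Occur(gDataReady);
     }

     /* Hypothetical registration with the driver, also called from the diagram. */
     MgErr StartAcquisition(void)
     {
         /* driver_register_callback(DataReadyCallback, NULL);  placeholder for the real driver API */
         return mgNoErr;
     }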