aledain

Members
  • Posts

    113
  • Joined

  • Last visited

    Never

Everything posted by aledain

  1. The first thing to do is to check that the cable connections are correct. The instrument may require a null modem cable (where the transmit and receive pins (2,3) are reversed) or it might (rarely) do the crossover for you and require a straight-through cable instead. Next check that the pinout for the cable matches that for the instrument, especially if the instrument has a D25-type connector, say, and your PC com port is a D9-type connector. The next things to try are changing the baud rate, parity, stop bits, etc to match those specified in the instrument documentation. Then start playing with the comms handshaking. Again, this should be specified in the instrument's documentation. If your instrument still doesn't communicate, perhaps you need to enable communications from within the instrument's setup menu. Note that most LV examples use simple defaults for the serial comms setup (ie 9600, 8, N, 1, with no handshaking [9600=baud rate, 8=data bits, N=no parity, 1=stop bits]) and so need tweaking to match the instrument. It is sometimes easier to play with these settings in a program such as HyperTerminal until you work out the correct parameters. Alternatively try the Serial Driver from ICON Technologies.
Serial communications are broken up into several different types depending on the instrument. From a programmer's point of view, and in most cases, instruments are either a transmit-receive type or a receive-only type. (A third case is a full OSI model of ACK and NACK with error handling. Thankfully most instruments don't use this!)
- Transmit-Receive: These instruments require sending a command, waiting some time and receiving a response. For example you might have to send FREQ?\r\n to the instrument and it would respond with 2.345HZ\r\n.
- Receive Only: At intervals (either timed or in response to some stimulus at the instrument) data is spat down the serial line to the PC. For example, every second 2.345HZ\r\n arrives on the serial line.
The next most important aspect is the termination character. Most instruments use the \r\n pair of characters to indicate that the transmission has ended. So in the serial reader you would specify to read until these characters are matched (and you need to set a timeout long enough for the instrument to respond). Under VISA you can specify \n as the terminating character and this will work for the majority of applications (ie those that transmit in ASCII). By now you should be seeing something in your LV program (or HyperTerminal).
Lastly you need to consider how the data is formatted. Many instruments communicate in ASCII (eg 2.345HZ\r\n) but others send their data as a stream of bytes (ie 00 01 02 03 04), all of which are unreadable and need to be decoded before the number can be returned. Encoding (for transmit-receive type comms) and decoding information will be detailed in the instrument manual. As a last few examples, I could send the number 5 either in ASCII (readable) form "5" or as the byte value x05 (unreadable), and I could send the letter "A" either in ASCII form "A" or as the byte equivalent x41. Confused?
Don't rule out that someone has already written a driver for the instrument you want to talk to. Check out the FAQ for a list of places to look for instrument drivers. cheers, Alex.
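Since LabVIEW is graphical, here is the same idea sketched in Python (the function names are my own, purely for illustration): reading until a terminator, then decoding an ASCII reading versus a raw-byte reading.

```python
# Sketch of the two decoding cases described above. In LabVIEW you would
# configure the VISA termination character instead of scanning a buffer.

def read_until_terminator(buffer: bytes, terminator: bytes = b"\r\n") -> bytes:
    """Return everything up to (but not including) the terminator."""
    end = buffer.find(terminator)
    if end == -1:
        raise TimeoutError("no terminator seen - check termination settings")
    return buffer[:end]

def decode_ascii_reading(raw: bytes) -> float:
    """ASCII instrument: b'2.345HZ' -> 2.345"""
    return float(raw.decode("ascii").rstrip("HZ"))

def decode_binary_reading(raw: bytes) -> int:
    """Binary instrument: the byte x05 is the *value* 5, not the digit '5'."""
    return int.from_bytes(raw, byteorder="big")

# The digit "5" in ASCII is byte x35; the value 5 is byte x05.
assert b"5"[0] == 0x35
assert decode_binary_reading(b"\x05") == 5
```

The same confusion applies to "A" (ASCII byte x41): whether x41 means the letter or the number 65 depends entirely on the instrument manual.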
  2. Doesn't turning off Y scale autoscaling work? You could then programmatically set the top and bottom of the shown plot. N.B. I haven't tried it, but IMHO it should work this way. cheers, Alex
  3. But you could combine all your controls into a single cluster and change the code to a subvi with a cluster reference input. In your application, create several copies of the cluster and wire the cluster references to copies (as required) of the subvi. You can also modify/edit controls (use the spanner) quite extensively, including removing parts or hiding bits of them (and including pictures), so perhaps a normal control could be modified too. For a pie chart, might a gauge work? cheers, Alex.
  4. And a quick google for "software easter eggs" reveals whole sites devoted to them! Didn't realise they were so prevalent. The first one I saw was in October 1995 where under Win3.1 the icon changed on Halloween. You're right about trust. I think I will notify them of my intent to add the egg with their permission. Anyone know of any eggs in LabVIEW? cheers, Alex.
  5. Hardly rambling, but a very instructive reply. I think that you're right about the possibility of damage being minimal, and that the advantage of the "fun" aspect should not be ignored. I agree too with your comment that lack of a backdoor == loss of productivity. FWIW I have placed TRACE=TRUE as a possible configuration option in all (most?) of my applications for just such scenarios. Wading through a trace file can be more instructive than receiving an "it won't work" call ;-) I suppose the backdoor should/can be treated differently than the easter egg, because the backdoor can be used both maliciously and non, while the easter egg tends to be/IS benign. It probably comes down to the client/bosses and their response when they one day locate the egg ... Will they be happy? (maybe) Will they be vengeful? (probably not).
An illustrative example of how eggs(?) can catch you out in a good way: I was working on a long project (several years) as the sole programmer, and one day while programming onsite with the client looking over my shoulder I made a comment that a particular part of the code, if ever executed, would result in a zombie process being created in the system that just couldn't be deleted. I told him I had nailed (or so I thought) every possible check, but just in case I would put a dialog box in the very last case. He completely forgot about it, and the software was commissioned and ran successfully for over a year. One day he rings me up quoting the dialog box: "I am a zombie, you should never see me". Apparently the IT people had beavered away for a few hours checking for viruses etc in response to this message before calling him, and he then called me. I thought it was funny, he found it amusing, their IT department ... well, they're an IT department. But the upshot was that one of their users had fiddled with the system files to try and do something fancy and my egg had finally hatched.
I was able to locate the problem quickly by tracking back the LV code to tell them which file was "wrong". An egg saved the day! cheers, Alex.
  6. No haven't read that, but have read some of his others. I'll now have to go out and buy it just to understand that cryptic comment ;-) cheers, Alex.
  7. I have the desire to place an easter egg (*) in an application I am working on. Should I inform management that the egg will go in? Can/should I do this, and what are the ramifications (legal, moral, ethical, etc)? Does this fall into the category of malicious code (even if the code is benign)? And what about back doors? Has anyone placed one in software they have released? Hmm, much to ponder. cheers, Alex. * An easter egg is a small amount of code added as a bit of fun by a programmer. Generally they are a bit of fun or a bit of a time waster, and they are NEVER malicious, hence the cute name. Examples I have seen are (1) on Halloween, the icon changes to a grinning Jack'o'Lantern, (2) an oscilloscope where if you hold down several keys at once it will start a game of TETRIS.
  8. I have a need to make a VI extremely thin (a special dialog). When I try to resize the VI in the edit environment it does not go thinner than a certain width. Do I need to do this programmatically, or is there a way to remove this limitation? 6.02 code only please :headbang:
  9. Couldn't you create a date string using this instead of needing to replace the ":" ...
  10. Yes ... and no. They can be less efficient, but for the reasons you list they're efficient enough. The real trap (IMHO) is that they disrupt dataflow, and as your program expands globals can (and more often so with inexperienced LV programmers) lead to race conditions. Once you have a complex program, tracking race conditions becomes difficult, tiresome, wearying, frustrating, etc, etc, because when the pressure is on, locating a race condition that is there in the built exe but never occurs under the development environment could send you mental. This is exacerbated when more than one developer is involved in a project.
This really arises from the parallel nature of LV. It is impossible to predict which of two nodes will execute and in what order simply by looking at a LV diagram UNLESS there is dataflow linking the nodes. A nasty (destructive) race condition can occur when a global is written to (by one part of the program) BETWEEN the read and write of a global pair. A more benign race condition might be that a READ occurs before the global contains the correct value as a result of a previous WRITE. IOW global READ-WRITEs are not atomic (you can make them appear so, but then the coding efficiency decreases dramatically).
Unlike some other 4GL tools, LV execution is NOT left to right OR up to down, but many people mistakenly believe this because they observe (in isolation) this ordering. It is my understanding that even turning on the GLOBE does not show you the order of operations that the true running VI will follow (dataflow aside).
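To make the "written to BETWEEN the read and write" case concrete, here is a deterministic Python simulation (not LabVIEW, and the interleaving is hand-scheduled rather than real threading) of two loops sharing a global:

```python
# Two "loops" each try to add 1 to a shared global via separate
# read and write steps. The second interleaving is the destructive
# case described above: B's write lands between A's read and write.

shared_global = 0

def read_global():
    return shared_global

def write_global(value):
    global shared_global
    shared_global = value

# Safe ordering: A reads and writes, then B reads and writes.
shared_global = 0
a = read_global(); write_global(a + 1)
b = read_global(); write_global(b + 1)
safe_result = shared_global      # both increments survive

# Race ordering: both read the old value before either writes.
shared_global = 0
a = read_global()
b = read_global()                # B reads before A has written
write_global(a + 1)
write_global(b + 1)              # B's write clobbers A's update
raced_result = shared_global     # one increment is silently lost

assert safe_result == 2
assert raced_result == 1
```

In a real LV diagram the scheduler picks the interleaving, which is exactly why the bug can appear in the built exe but never in development.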
  11. On linked constants, I think there are certain instances where the constants created from a strict typedef are not linked (might be cut and paste, might be older versions of LV too). I always check by RMB on the constant to see that "Update" is selected.
  12. Create a VI and save it to your user.lib directory. From LV choose to edit the palettes (under >6, pin the palette, choose the right-most button and then select the "EditPalettes" button) and navigate to the User Library palette. RMB and choose Insert to have your VI appear in the palette. While you are there, select "MergeVI" as an option. Save your palettes, and back in LV when you drag your new user.lib VI onto the diagram it only adds the code and front panel from your user.lib VI as a snippet, NOT as a subvi. This is a "merged" VI code snippet. Neat ;-)
  13. It changes the description for that instance of the while loop etc, not the default for all while loops. Note that you can also display a label (just like on a front panel) using the RightMouseButton (RMB) click, for structures and functions (except wires). P.S. You could probably create a default description for a while loop and add this "new" while loop as a code snippet to your user.lib, but that's probably not what you're after. cheers, Alex.
  14. You can set help (RMB, Description and Tip) for anything, including wires and case structures etc, then turn on flyover help (Ctrl-H) to see your comments.
  15. Wow! LabVIEW is not linear at all!! The reason that the buttons do not respond is that they are in the same while loop as your communications (LabVIEW reads the state of the buttons just once per loop iteration; turn the globe on to see what I mean), so the buttons don't get re-read until the next iteration of the while loop. Move your communications into a separate while loop and then your GUI will respond in a more timely fashion. However, this does not really solve your problem, in that once the communication starts, your program needs to be able to interrupt it, so you'll need to do some design work on the comms (e.g. have a flag that is checked routinely at each stage of the communication) to allow the user to push a 'cancel' button so your comms will then abort back to a known state.
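In text form the cancel-flag pattern looks like this (Python threading here purely to illustrate; in LabVIEW it would be a second while loop and a shared flag):

```python
import threading
import time

cancel = threading.Event()   # set by the GUI loop's Cancel button
results = []

def comms_loop():
    """Stands in for the serial communication loop; the cancel flag
    is checked routinely at each stage, not just at the very end."""
    for stage in range(1000):
        if cancel.is_set():
            results.append("aborted")    # abort back to a known state
            return
        time.sleep(0.001)                # one stage of the comms

    results.append("completed")

# The "GUI loop": the user pushes Cancel. (Set before starting the
# worker here, purely to keep this sketch deterministic.)
cancel.set()
worker = threading.Thread(target=comms_loop)
worker.start()
worker.join()
```

The key point is the per-stage check: a comms routine that only tests the flag after finishing everything cannot be interrupted in a timely way.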
  16. Far far far simpler than that: use a separate while loop for your dialog box and a separate loop for your serial port stuff. This is classic LabVIEW and one of the powers of LabVIEW over other languages. Use globals (as a starting point) to transfer data between your loops. More advanced mechanisms for this transfer (queues, notifiers, semaphores, etc) are available but not necessary. BTW, this is not multi-threading as such, since both while loops can be running in the same thread, so I like to refer to this as multi-fibred. LabVIEW is multi-threaded, but for 99% of the world's programs you can get away with just doing the fibre thing.
  17. OK, you need to look hard at what he's doing here. I think he's probably got a wait-until-next-multiple timer timing this loop. You really need to look at buffered image acquisition. IOW, get the card acquiring using its own clock, and every now and again pull all the images back from it, selecting the appropriate one (ie if you acquire a frame every 20 ms, take the 5th one). I'd definitely change the WaitUntilNextMultiple (looks like a metronome, WUNM) to the Wait (looks like a watch). The difference here is subtle: if the WUNM is set to 1000 and the CPU is at full throttle, then you might see a result at 2000 or 3000 or 4000 ms; it can miss iterations!
Too true. However, good programmers (and some of them are scientists) adopt good programming styles and follow standard development methodologies, including OO. There are books out there that you might recommend to your scientists about getting on top of it all. The problem with LV is that it is too easy, and most apps are built from the ground up without any design. Doesn't mean I haven't seen spaghetti in text-based code either.
This will be your problem then. I just don't think this is right. I would instead redesign your app to send (control message) and receive (status message) a datapacket via datasocket or tcpip to get round this. You can programmatically control the datasocket send/receive and use different channels (sockets) to do this very efficiently.
Hmm, wrong platform right there. Look at implementing the critical stuff in LabVIEW-RT. However, make sure that parts of the application (ie VIs) haven't been set time critical. This would also give you the sort of results you are seeing, because of thread blocking.
Should be no problem at all, so I suspect program design. You can always come here for help. It takes a while to "get" LabVIEW, but once you do you'll see that for hardware interfacing, LV is the best language there is (provided you wrestle it away from the scientists, that is).
BTW, if you want to chat online my Messenger address is labview_australia at the usual microsoft moniker. cheers, Alex.
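The WUNM-versus-Wait difference above can be sketched numerically (the helper names are mine; the real behaviour is LabVIEW's Wait Until Next ms Multiple and Wait (ms) primitives):

```python
def wait_until_next_multiple(now_ms: int, period_ms: int) -> int:
    """Next firing time aligned to a multiple of the period, like the
    metronome icon. If the loop body overruns, whole periods are skipped."""
    return (now_ms // period_ms + 1) * period_ms

def plain_wait(now_ms: int, period_ms: int) -> int:
    """Fire period_ms after now, regardless of alignment,
    like the watch-icon Wait (ms)."""
    return now_ms + period_ms

# Loop body finishes at t=1500 ms with a 1000 ms timer:
assert wait_until_next_multiple(1500, 1000) == 2000   # snaps to the grid
assert plain_wait(1500, 1000) == 2500                 # just adds 1000

# If the CPU is flat out and the body runs until t=3100 ms,
# the metronome jumps straight to 4000 - iterations are missed.
assert wait_until_next_multiple(3100, 1000) == 4000
```

That skipping is exactly why a WUNM-timed loop at full CPU shows results at 2000 or 3000 or 4000 ms instead of every 1000 ms.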
  18. Didn't realise remote panels did this. An alternative would be to use something like VNC. However, ... Hmmm, error prone? I think we need some more information about what you're trying to do in LabVIEW itself. The CPU usage should be close to zero (or ~10%, say) with any normal LV application, especially one written using events. So this may mean you're not doing the LV part as efficiently as you can. Are you doing DAQ, some sort of PID, or something? Remember LV was written for clunky 486 machines (my LV irrigation system at home runs on one!). Another first-time mistake is to not put a wait in your main application loop. Even a wait timer with 0 ms is better than a loop with no timer. The wait is important because ... Can you tell me why you need more processing speed?
NI can be a little unhelpful with respect to solving problems, but that's why great groups such as LAVA and other NI Alliance members exist. But native LV could do this as well: open the tcpip connection, wait for the "event" to arrive on the connection ID and then fire a user event. Alternatively, load the tcpip event into a queue and have the queue handled in another parallel process. N.B. you could put KILL into the queue to get it to shut down the parallel process, and with a timeout of -1 ms as input the application would use 0% CPU. When your event arrives, your application would awaken and do its stuff. BTW, if the lan event is sent via UDP, it is even easier, as you can listen for the UDP packet directly without opening a connection to the "server" event sender!
I don't think this is necessarily the case; there is a WAIT_ON_AXEVENT VI that can wait for any ActiveX interrupt event. From memory, you need to register the event first, and then watch for it. Since you can set a timeout on the WAIT_ON_AXEVENT, the while loop is really only needed if you want to catch that event more than once, ie a re-arm. So how fast do the events come in and how quickly must you respond to them? cheers, Alex.
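The queue-with-KILL variant sketched in Python (KILL is just the sentinel convention from the post; in LabVIEW you'd dequeue with a -1 ms timeout):

```python
import queue
import threading

events = queue.Queue()
handled = []

def event_handler():
    """Parallel process: blocks on the queue (the -1 ms timeout case),
    so it uses ~0% CPU until an event actually arrives."""
    while True:
        event = events.get()        # blocks indefinitely
        if event == "KILL":         # sentinel to shut the process down
            return
        handled.append(event)

worker = threading.Thread(target=event_handler)
worker.start()
events.put("lan event arrived")     # e.g. fired when TCP/UDP data shows up
events.put("KILL")                  # clean shutdown of the parallel process
worker.join()
```

The handler spends all its idle time asleep inside the blocking dequeue, which is what keeps the CPU usage near zero.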
  19. There is a ping VI in the NI knowledge base somewhere. I also have one contributed by a generous soul. However, a small app could be written for each of the key machines that sends a message every N seconds. If your monitor PC does not receive that message every N + a little bit seconds, then the PC is dead or off the network. Sort of like a watchdog. You could use TCPIP or datasocket for a quicker implementation. As for the message, I'd get it to send its IP address.
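The watchdog bookkeeping on the monitor PC might look like this (a pure Python sketch with my own function name; timestamps are passed in explicitly so the logic is testable):

```python
def stale_machines(last_seen: dict, now: float,
                   period_s: float, grace_s: float) -> list:
    """Return machines whose heartbeat hasn't arrived within
    N seconds plus a little bit - presumed dead or off the network."""
    deadline = period_s + grace_s
    return sorted(ip for ip, t in last_seen.items() if now - t > deadline)

# Each app sends its IP every 5 s; allow 1 s of slack.
last_seen = {"10.0.0.1": 100.0, "10.0.0.2": 93.0}
assert stale_machines(last_seen, now=104.0,
                      period_s=5.0, grace_s=1.0) == ["10.0.0.2"]
```

Keying the table on the IP address sent in the message means the monitor also knows exactly which machine went quiet.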
  20. To send strings back: on your server just do a write operation after the read of the client message ... Client : WriteMsg, ReadMsg(with timeout 60 sec) Server: ReadMsg(with timeout 60 sec), writemsg cheers, Alex
  21. You can also run both the client and the server on the SAME PC, makes debugging real easy cheers, Alex.
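A minimal loopback sketch of the WriteMsg/ReadMsg ordering (Python sockets here instead of LabVIEW's TCP VIs; the port is whatever the OS hands out):

```python
import socket
import threading

def server(listener: socket.socket):
    """Server side: ReadMsg, then WriteMsg, as in the ordering above."""
    conn, _ = listener.accept()
    with conn:
        msg = conn.recv(1024)              # ReadMsg
        conn.sendall(b"echo: " + msg)      # WriteMsg

# Server and client on the SAME PC via the loopback address.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))            # port 0 = any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,)).start()

client = socket.socket()
client.settimeout(60)                      # the 60 s timeout
client.connect(("127.0.0.1", port))
client.sendall(b"hello")                   # WriteMsg
reply = client.recv(1024)                  # ReadMsg
client.close()
listener.close()
```

Running both ends in one process like this is the same debugging trick: no second machine needed until the protocol itself works.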
  22. Re: TCP/IP. LabVIEW Technical Resource: www.ltrpub.com/ But you'll need to order a back issue from them. cheers, Alex.
  23. Don't know anything about VB's callback, but the way I have done this in the past is to pass a reference to an occurrence into the DLL. The DLL code sets the occurrence and the LV code sits waiting for the occurrence to fire. Using the new custom events I bet you could get the setting of the occurrence to trigger a user event. // Note in pseudocode only, I cannot remember the real details eg. Public Event CallBack(stimer As Integer, occ As UnsignedInt32) ... Do While CSng(Timer) < (Start + Duration) If CSng(Timer) > (Start + Threshold) Then occ=1 ... BTW, all the code in the DLL could be done more easily in LV without the pain. Is there any real reason to do it in VB instead? Depending on what you want to do you could use loop timers, implement a QDSM, implement an event loop or ... cheers, Alex.
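In Python terms the pattern is (threading.Event standing in for the LV occurrence; the names are mine):

```python
import threading

occurrence = threading.Event()
log = []

def lv_side():
    """The LabVIEW side: sits waiting for the occurrence to fire."""
    fired = occurrence.wait(timeout=5.0)
    log.append("occurrence fired" if fired else "timed out")

def dll_callback():
    """The DLL/VB side: when its condition is met, it sets the
    occurrence (the occ=1 step in the pseudocode above)."""
    occurrence.set()

waiter = threading.Thread(target=lv_side)
waiter.start()
dll_callback()
waiter.join()
```

The waiter blocks without burning CPU until the callback sets the event, which is exactly the occurrence/wait-on-occurrence split.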
  24. Don't think you can hide individual tabs, but you can disable them. Look for the appropriate property node. You can also hide all the tabs using a right mouse click and selecting the appropriate drop down menu. As a workaround you could hide all the tabs and display just a string indicator of which is the current TAB above the top left corner. cheers, Alex
  25. Re: TCP/IP. You can send all of these across the one connection. The trick here is to add a header byte to each packet so that your receiver (client) knows what to do with the packet (eg 0=synchro, 1=ttl, 2=audio, etc). Just a note on the audio: you will need to buffer the audio on the client side to avoid playback that has gaps in it. There is an interesting article in one of the recent LTR issues dealing with LabVIEW, sound, and gapless buffered sound playback using the native LV sound VIs. Well worth a read. cheers, Alex.
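A sketch of that header-byte framing in Python (the type codes are the ones suggested above; the length field is my own addition so the receiver knows where each packet ends on the stream):

```python
import struct

PACKET_TYPES = {0: "synchro", 1: "ttl", 2: "audio"}

def pack_packet(ptype: int, payload: bytes) -> bytes:
    """1 header byte for the type, 4 bytes big-endian length, then payload."""
    return struct.pack(">BI", ptype, len(payload)) + payload

def unpack_packet(data: bytes):
    """Return (type name, payload, remaining stream bytes)."""
    ptype, length = struct.unpack_from(">BI", data)
    start = struct.calcsize(">BI")
    return PACKET_TYPES[ptype], data[start:start + length], data[start + length:]

# Two different packet types share the one connection;
# the header byte tells the client what to do with each.
stream = pack_packet(1, b"\x01") + pack_packet(2, b"audio-bytes")
kind, payload, stream = unpack_packet(stream)
assert kind == "ttl" and payload == b"\x01"
kind, payload, stream = unpack_packet(stream)
assert kind == "audio" and payload == b"audio-bytes" and stream == b""
```

For the audio case the client would accumulate several "audio" payloads in a buffer before starting playback, which is what avoids the gaps.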