Everything posted by aledain

  1. Which I am not sure has ever been really denied. What was very innovative was the iteration and conditional structures, and in particular the handling of memory in the data flow (see LabVIEW Graphical Programming, 3rd ed., Chapter 1, pp. 14 & 21-22). I think many of NI's patents probably protect these algorithms (but I could be way wrong ;-)). cheers, Alex.
  2. Actually GOOP has been hidden in LabVIEW since version 6. Furthermore, many of the native LabVIEW VIs or function nodes are objects (such as queues and the lower-level file I/O). However, what we really want is to be able to click on a GOOP wire and have a probe describe the data for that instance of an object. Maybe that will appear in 8! cheers, Alex.
  3. This is the LabWindows runtime DLL. Your program probably uses cvirte.dll directly (even though you didn't realise it), so the version you have on your development machine does not match the one on the target machine. Check the version on each machine (time and date stamp, plus you can query the DLL itself for version information using a WinAPI call). Another way to get this information is via MAX, either under "Software" or the "About" dialog (really old versions). The DLL is usually located in the \windows\system directory. The cvirte.dll does get updated when certain NI components are installed by different applications. A common one that changes cvirte.dll is DataSocket, so having different DataSocket versions on the machines may have led to different cvirte.dll's. Try getting the same version of all your driver software as well. I think I was able to solve this by manually copying the DLL from the development machine to the target machine, but I cannot remember if it fixed the problem. One other thought: if you are using InPort or OutPort, you sometimes get a cvirte.dll problem. Remember that you cannot use these VIs without installing a patch on NT/W2K machines. cheers, Alex.
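Beyond timestamps, an easy way to confirm whether two machines really have the same copy of the DLL is to fingerprint the files. A minimal sketch in Python; the helper name is my own and the path shown is just the directory mentioned above:

```python
import hashlib
import os

def file_fingerprint(path):
    """Return (size in bytes, SHA-256 digest) of a file, so two
    copies of cvirte.dll can be compared byte-for-byte instead of
    relying on timestamps alone."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return os.path.getsize(path), digest

# Run on each machine and compare the output, e.g.:
# print(file_fingerprint(r"C:\windows\system\cvirte.dll"))
```

If the two fingerprints differ, copying or reinstalling the runtime is the next step, as described above.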
  4. Please send credit card details to alex@surely.u.r.joking.com. Alternatively, perhaps describe the problems you are having getting started. The Getting Started tutorial in LabVIEW is a VERY good place to start.
  5. And can we have this in 6.02 form as well please? cheers, Alex.
  6. No. You can only save 7.1 back to 7 and then open the VIs under 7 to save back to 6.1. Some generous person on this group may help you out with this if you ask nicely. Unfortunately I don't have the versions required. cheers, Alex
  7. Try deleting bits of the diagram one at a time (or adding bits of your code one at a time) until the error disappears/appears. Are you calling any DLLs? They can be nasty if you do not pass them the correct data structures.
  8. As an afterthought, "style" in LV can mean many things, but in general the most important are general left-to-right readability, not wiring backwards and, of course, good data flow. cheers, Alex.
  9. This one's a biggy ;-) It does not affect the cross-platform compile of the code AT ALL. It actually depends on the fonts (and therefore the language, I guess) in how the clusters and property nodes are displayed in the block diagram (BD) and front panel (FP). Both the fonts you have on your machine and the fonts set in the LV environment have an impact. For example, if you leave the fonts alone in the VI, then it will use (variously) the SystemFont, DialogFont and ApplicationFont defaults, depending on what you are looking at (i.e. control, property node, etc.). Now if the VI you have opened was "aligned" with a different font, then you can expect that some of the nodes will have moved or changed shape if your font does not match theirs. You can always set your font defaults to be Arial, 14, which is the most cross-platform of fonts, and as long as you instruct your development buddies to do the same, the BD and FP controls won't look too different across platforms. OTOH if you use AWeirdArcana font, and line everything up, then the BD and FP can look a little odd when opened under a standard LV system with its default font selection. N.B. You can force the fonts in a built application by including the same font tags copied from labview.ini in your application's ini file.
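For reference, the font tags mentioned above look roughly like the following in labview.ini (check the exact key names against your own labview.ini; the font and size are just the Arial 14 example from above):

```ini
; Section name: [LabVIEW] in labview.ini; for a built application,
; copy the tags into the .ini file named after your executable.
[LabVIEW]
appFont="Arial" 14
dialogFont="Arial" 14
systemFont="Arial" 14
```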
  10. Perhaps try a simple string control (moved just off screen) and a listbox. Set the string control to update while typing and process each keystroke, passing the "result" you want into the listbox. You can read \n for the enter key. I don't know how you would handle tab, since tabbing usually moves the focus, but I think you can limit focus (by removing all other controls from the tab list via Advanced:Key Navigation:Skip control while tabbing), and perhaps in 7+ you can programmatically control focus. Using String:NumberToFractionalString will return a number with the system decimal point in the string. Parse this to get the current system decimal point, e.g. wire 7.0 as a constant and you should get 7,0 back.
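The decimal-point trick above translates directly to other languages. A minimal sketch in Python, with locale.str standing in for Number To Fractional String:

```python
import locale

# Use the system locale for number formatting, as LabVIEW does.
locale.setlocale(locale.LC_NUMERIC, "")

def system_decimal_point():
    """Format a known fractional number and pull out the separator:
    "7.5" on a period locale, "7,5" on a comma locale."""
    return locale.str(7.5)[1]

print(system_decimal_point())  # "." or "," depending on the system
```

(Python also exposes this directly as locale.localeconv()["decimal_point"], but the format-and-parse version mirrors the LabVIEW approach described above.)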
  11. For most (every?) control there is a property node that you can set (Blinking = True). RMB on the control (Create»Property Node) and select the "Blinking" attribute. Or, if you're asking how to make a boolean toggle its state, create a local variable and write the NOT of its value back to it at some time interval.
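The toggle-via-NOT idea, sketched in Python (the function name and interval are illustrative, not from any LabVIEW API):

```python
import time

def blink(ticks, interval_s=0.1):
    """Flip a state flag `ticks` times, yielding each new value --
    the equivalent of writing NOT(state) back on every loop pass."""
    state = False
    for _ in range(ticks):
        state = not state
        time.sleep(interval_s)
        yield state

print(list(blink(4, interval_s=0.0)))  # [True, False, True, False]
```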
  12. This is pure speculation, but maybe there is an "update" flag on the DSC side (and maybe on the PLC side) which means that it (they) continually send ALL the tags. What you really want is to read/write only those that change. Perhaps there is a configuration on the DSC/PLC that only notifies the other on change? As I said, pure speculation and musing out loud. cheers, Alex.
  13. Doesn't this breach the license agreement? I didn't think you're allowed to "build" executables without the Application Builder, and by definition running the source code using the runtime would bypass the build and be doing just that. cheers, Alex.
  14. Is it in a particular VI, or when you do something in particular? It might be a corrupt VI, and deleting the offending object from the VI can sometimes fix these sorts of problems. cheers, Alex.
  15. I presume that you are using software-timed loops here to adjust the sample rate. If you went to a hardware-timed loop the resources would be freed up considerably. I would probably oversample and decimate the data instead; then your system does not need to ramp up in response to the signal, just the amount of decimation would decrease and the processing increase on the "affected" channel of interest.

I have assumed that the 21 channels are all on the same machine, so you have 6 x 21 channels? So on one machine there are 21 channels @ 5 Hz x 8 bytes (DBL) = 840 bytes/s. That's almost nothing! Well, almost. I reckon sample at 1 kHz x 21 channels x 8 bytes = 168 kB/s (which could be halved if you convert to SGL, which is probably good enough for the real world). However, I cannot recall the processor in the 8175 - if it's the P-200 then it might struggle a bit with this many channels (perhaps ... perhaps not), but anything over a P-500 will cope. Anyway, this is only a small amount of memory.

So DAQ acquire and decimate by 200 to get your 5 Hz basic signal; any interesting signal can be "sent" to the DS server at the full 1 kHz rate for processing (in timestamped chunks, probably with a machine ID so you know which set point to adjust on the DS machine), and adjusted/processed set points can be read back from the DS server. This now reduces 21 channels to the same loop. Your PID loop can probably still be this same loop, i.e. make this loop time critical, but the DS communication needs to be done in a separate loop. Alternatively, I would move the PID loop to a separate time-critical loop. This loop would monitor the set point set by the DS server and the current value (passed by global or whatever) from the DAQ loop, with any user control available. I have assumed that the PID is running at 1 Hz? If the PID is faster, i.e. >10-100 Hz, then this architecture will suffer (but there are ways of beefing it up). (see diagram)

Is it still deterministic? Almost, but not truly, because the value read in the DAQ loop will take time to shunt from the DAQ loop to the PID loop. Is this a problem? That will depend on your application. If it's a slow manufacturing process (seconds to minutes), I would say it's going to be negligible.

Now about the hardware DAQ: probably do a buffered DAQ rather than wait for 1k samples, because a waiting-type DAQ uses the CPU to monitor the arrival of the 1000 samples. A buffered DAQ will not use the CPU and you can poll for your 1k samples. When you are buffering, wait until >=1k is reached, shunt the remainder (N-1k) around the loop (shift register) for the next acquire, and process the 1k samples as needed. You will need to tune this loop to run at better than 4 per second depending on the critical lag it introduces into the PID loop (you might be able to set it as low as 10 ms, thereby getting an updated PID value in the PID loop within ~10-15 ms depending on the processing/decimation overhead). Other comms loops (e.g. RS-232) could be added as separate loops. The DS server could also have a client on it that writes to the DS server, allowing manual adjustment, debugging, etc.
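The buffer-and-decimate pattern described above can be sketched in Python, with a plain list standing in for the shift register (the names CHUNK and DECIMATE are illustrative):

```python
CHUNK = 1000      # process once >= 1000 samples have arrived
DECIMATE = 200    # 1 kHz / 200 -> the 5 Hz base signal

def step(buffer, new_samples):
    """Append newly acquired samples; once a full chunk is ready,
    split it off (the remainder stays in the buffer for the next
    loop pass, like a shift register) and decimate it by keeping
    every DECIMATE-th sample."""
    buffer = buffer + new_samples
    if len(buffer) < CHUNK:
        return buffer, None            # keep accumulating
    chunk, remainder = buffer[:CHUNK], buffer[CHUNK:]
    return remainder, chunk[::DECIMATE]

buf = []
buf, out = step(buf, list(range(600)))   # only 600 samples: no output yet
buf, out = step(buf, list(range(600)))   # 1200 samples: one chunk processed
print(len(buf), out)                     # 200 samples carried over
```

A real implementation would replace `new_samples` with the samples polled from the buffered DAQ read, and pass the decimated output on to the PID/DS loops.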
  16. I might be missing the point here. Could you not read the data from a central store? Either the SQL database or a new machine using DataSocket Server to "pool" the resources. I would use the DS Server because SQL might be a bit sucky if things start to happen everywhere quickly. The PXI-RT machines can then obtain their resources from the central machine (which could have some smarts in it too, like resolving conflicts, etc., but could be as simple as a host for the DS Server). Another computer on the network is a cheap option these days.
  17. You need to provide some more information. The code or a picture of the code will help. Also can you describe why it goes slow - is the acquisition itself slow (ie wrong), is your computer running slow, etc ... cheers, Alex.
  18. Ohh, you tease (fair enough though). cheers, Alex
  19. FP-RLY-420

    You don't say whether you can write to the other 4 channels. If you can't write to them either, I'd check your configuration string in the code. It's very easy to have one too many or too few spaces in the tag string. Other things to do are make sure that the FP Explorer IAK file and the system are in 'sync'. Opening MAX will usually tell you - I presume you meant FP 4+, which runs under MAX 3.01, rather than FP 3.01, which is a very old version (you can tell the difference: if MAX launches, it's FP 4+; if you get a blueish FP Explorer that you have to open from the Start/Programs menu, then it's an old version). I'd upgrade to the latest FP version if this is the case. cheers, Alex.
  20. Check out this link for a robust serial reader/writer ... SerialComms .. it'll give you some pointers if you burrow into the code. cheers, Alex
  21. Re-installing the software might work. If you haven't installed VISA into your 6i installation, then it might be a matter of un-installing the VISA driver (from both 7 and 6i) and installing it again. Is the 6i version also a student edition? I seem to remember that the student versions used to be limited (e.g. cannot do DAQ, cannot do ActiveX, cannot do ..., etc.). I would check with NI whether the student version allows you to do VISA comms at all. cheers, Alex.
  22. Maybe not, but at the very least shouldn't they be writing them?
  23. Sometimes if other applications "grab" the COM port, they don't release it elegantly. As well as Michael's cable connections, I'd check that no other app uses the COM port and fails to release it when it exits, e.g. does the motor controller come with some of its own software? Are your users doing something? I note that there is no timing in the loop above, and sometimes if users enable screensavers or power saving, these can affect control systems. You'll get the "it was working fine for 20 minutes and then stopped - it must be your software." Actually, it's because their Simpsons 3D OpenGL screensaver with cool sound effects consumes 90% of the system resources ;-)
  24. That's because it stops us old farts discriminating against you young bucks :laugh: Couldn't we geekify this a little and instead use our "birthday" as the first day we had a program run on a 'computer' (other than an abacus or calculator). I'll chip in and say mine was 1978. They were punch cards and I had to post them (that's snail mail) to the computing centre where it would be executed and a week later the cards, listing and results would be posted back. Aaah the good old days when desk checking your syntax and your algorithm actually meant something ... cheers, Alex.
  25. The error code is telling you that it is NOT necessarily the cable but much more likely that VISA is not installed correctly:

    Warning -1073807202 occurred at an unidentified location. Possible reasons: VISA: (Hex 0xBFFF009E) A code library required by VISA could not be located or loaded.
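Incidentally, the decimal code and the hex code in that message are the same number: LabVIEW reports the signed 32-bit value, while NI's documentation lists the unsigned hex form. A quick conversion (Python, function name is my own):

```python
def visa_hex(code):
    """Show a LabVIEW/VISA error code as the unsigned 32-bit hex
    value used in NI's error documentation."""
    return hex(code & 0xFFFFFFFF)

print(visa_hex(-1073807202))  # 0xbfff009e
```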