Everything posted by ShaunR

  1. I'm not sure what you were reading on the NI website, but I think you'll find you need a wireless router. If you are using Windows 7 you can turn your laptop into one by using this.
  2. Indeed. So let's say you use MAX. They create 24 "tasks", set up values for scaling and calibrate each channel (probably a good half day's work). Then they want to change something on one of the tasks. Do they modify the original task? Or do they create a new task, set up the new scales and re-calibrate on the premise that they don't want to "lose" the old settings because they might come back to them? So now we may have 48 tasks. Let's say they keep to 24 tasks. Then they come to you and say "right, we want a piece of software that logs tasks 1, 3, 5 and 9, except on Wednesday, when we'll be using 1, 6, 12 and 8". How do you store that information in MAX? That's up to you. You're the only one that knows what tests are run and what's required.

     I think what you will find (personally) is that you start off using MAX, then as things progress you need more and more external control, until you reach a point where you have so much code just to get around MAX that it is no longer useful and, in fact, becomes a hindrance. But by that time you are committed. That's just my personal experience and others may find it different.

     We actually use several files. One for cal data, one for general settings (graph colours, user preferences etc.), one for each camera (there can be up to 5), one for DAQ (basic config), one for drive config and one for test criteria. The operator just selects a part number (or a set of tests if you like) from a drop-down list and can either run full auto, or run a specific test from another drop-down list filtered for that product (well, not filtered, since it is just showing the labels in the test criteria file). Having a directory structure makes that really easy, since all it is doing is selecting a set of files (a rough sketch of that kind of layout is just below).

     I think that type of interface would be a bit further down your life-cycle. But the building blocks started out just as you are currently doing, and all we did was put them all together in a nice fancy user interface (it even uses sub-panels to show some of the original UIs we created when testing the subsystems).
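     Purely as an illustration (these file and directory names are hypothetical, not our actual ones), the kind of layout I mean looks something like this:

         Config\
             PartNo_1234\
                 cal.ini        (cal data)
                 settings.ini   (graph colours, user preferences, etc.)
                 camera1.ini .. camera5.ini
                 daq.ini        (basic DAQ config)
                 drive.ini      (drive config)
                 tests.ini      (test criteria; its labels populate the test drop-down)
             PartNo_5678\
                 (same set of files for the next product)

     Selecting a part number just selects a directory, and the per-test drop-down is nothing more than the labels read from that product's test criteria file.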
  3. I think you are just intimidated by the fact that you have not used it before. Thirty minutes and a few examples (there are lots) with a USB DAQ should be enough. You will quickly discover it's not really that different from using VISA, TCP/IP, IMAQ or any other "open, do something, close" driver. Heck, even use the Express VIs and you will have most of the functionality of MAX.
  4. Couldn't agree more. There's nothing more annoying to me than finding a piece of code I'm interested in, only to discover I have to download VIPM, then the RCF, and also install 5 other OpenG libraries that I neither want nor use. I wonder how many people actually read all the licensing, and actually do distribute the source code, licensing and associated files when they build an application with 3rd-party tools (not necessarily OpenG)? Might be a good poll.
  5. Take a look at Data Client.vi and Data Server.vi in the NI examples.
  6. Well, you never know. It's a bit like mathematicians: there are pure mathematicians and applied mathematicians. Pure mathematicians are more interested in the elegance of arriving at a solution, whereas applied mathematicians are more interested in what the solution can provide. Well, you've got the control and the expertise, but maybe not the tool-kit that comes from programming in those positions. But back to MAX. I (and the engineers that use the file system) just find it much quicker and easier to maintain and modify. Like I said, we have lots of IO (analogue and digital) and find MAX tedious and time-consuming. A single Excel spreadsheet for the whole system is much easier. And when we move to another project we don't have to change any configuration code, just the spreadsheet, which can be done by anyone more or less straight from the design spec (if there is one). But you know your processes. A man of your calibre, I'm sure, will look at the possible alternatives and choose one that not only fixes the problem now, but is scalable and will (with a small hammer) fit tomorrow too.
  7. Yes. Take a look at Data Client.vi and Data Server.vi in the NI examples.
  8. If it was in Oz I probably would have
  9. Weird. Upload fails if I change from quick to full edit. But straight reply is fine.
  10. Take a look at Data Client.vi and Data Server.vi in the NI examples. It uses one channel; the client sends the letter Q back to the server (on the same connection) to stop the server sending data (a rough sketch of that stop protocol is below). Oh, and you can get the IP address by using "IpToStr" and "StrToIP" instead of executing ipconfig and formatting the result. (I'd post a picture, but for some reason image uploading is failing.)
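      To show what I mean by the stop protocol (this is not the NI example itself, just a rough C sketch of the same idea; the port number and the text format of the data are my own assumptions):

          /* Server side: stream readings to one client until it sends 'Q' back
             on the same connection. POSIX sockets; port 6340 is arbitrary. */
          #include <stdio.h>
          #include <unistd.h>
          #include <arpa/inet.h>
          #include <sys/select.h>
          #include <sys/socket.h>

          int main(void)
          {
              int listener = socket(AF_INET, SOCK_STREAM, 0);
              struct sockaddr_in addr = {0};
              addr.sin_family = AF_INET;
              addr.sin_addr.s_addr = htonl(INADDR_ANY);
              addr.sin_port = htons(6340);                 /* assumed port */
              bind(listener, (struct sockaddr *)&addr, sizeof addr);
              listen(listener, 1);

              int client = accept(listener, NULL, NULL);
              double reading = 0.0;
              for (;;) {
                  char line[64];                           /* one reading per text line */
                  int len = snprintf(line, sizeof line, "%.3f\n", reading);
                  if (send(client, line, len, 0) <= 0)
                      break;                               /* client disconnected */
                  reading += 0.1;

                  fd_set rd;                               /* poll for the stop command */
                  FD_ZERO(&rd);
                  FD_SET(client, &rd);
                  struct timeval tv = {0, 100000};         /* 100 ms */
                  if (select(client + 1, &rd, NULL, NULL, &tv) > 0) {
                      char cmd;
                      if (recv(client, &cmd, 1, 0) <= 0 || cmd == 'Q')
                          break;                           /* 'Q' = stop sending data */
                  }
              }
              close(client);
              close(listener);
              return 0;
          }

      The client just connects, reads lines, and writes a single 'Q' when it wants the data to stop.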
  11. I fail to see where gain is used in Absolute Accuracy = ±[(Input Voltage × % of Reading) + Offset + System Noise + Temperature Drift], since it is a sub-component of "Reading". I took your word on the 100 + 5.14 since I didn't have that info (neither could I find the 28.9 + 2.75 in the spec pages you pointed me to (which is where the 70 mV lies), if that is the "system noise" and offset). But it was glaringly obvious that 0.1203 was incorrect. Perhaps I should have said "about 100". But you have an answer you are happy with, so that's good.
  12. You are quite right. It is the synergy between their hardware and the software (sometimes we forget LabWindows) that makes them the obvious choice. And one of the main reasons LabVIEW is as successful as it is is because it turns a software engineer into a systems engineer (much more useful). However, if all you need is a dumb remote analogue or digital device, then the cost of cRIO or FieldPoint ($2,000-$4,000) cannot be justified against a $200 Ethernet device from another well-known manufacturer. But having said that, I think it has more to do with confidence and experience than anything else. I am comfortable interfacing to anything in any language (but I will fight like buggery to use LabVIEW). If someone has only used LabVIEW and only knows LabVIEW products, then it's a low-risk, sure bet.
  13. The most common cause (I've found) of this behaviour is that memory allocated by LabVIEW (i.e. outside the DLL) is freed inside the DLL. When the function returns, the original pointer LabVIEW used for the allocation no longer exists. If the DLL does not return an error exit code, LabVIEW assumes everything was OK and attempts to use it again (I think). A normal app would show you a GPF, but LabVIEW is a bit more robust than that (usually) and normally gives an error. But it depends how catastrophic it was. You probably need exception handling in your DLL, so that any GPFs or nasty C stuff that breaks your DLL still cleanly returns to LabVIEW; this is usually done in the DLL_PROCESS_DETACH of DllMain (a minimal C sketch is below). This will mean that at least LabVIEW will stay around for you to debug the DLL to find the root cause. However, if the error affects the program pointer, then nothing short of fixing the problem will suffice. Rolf is the expert on this kind of stuff.
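      Something along these lines (a minimal sketch with a made-up exported function, using MSVC-style structured exception handling; it is not your DLL, just the shape of the thing):

          /* Wrap the work in SEH so a crash inside the DLL comes back to
             LabVIEW as an error code instead of taking the whole IDE down,
             and never free a buffer that LabVIEW allocated. */
          #include <windows.h>

          __declspec(dllexport) int ProcessBuffer(double *data, int len)
          {
              __try {
                  for (int i = 0; i < len; i++)
                      data[i] *= 2.0;     /* work on the caller's buffer...   */
                  /* ...but do NOT free(data): LabVIEW owns that memory.      */
                  return 0;               /* 0 = success                      */
              }
              __except (EXCEPTION_EXECUTE_HANDLER) {
                  return -1;              /* report the GPF rather than crash */
              }
          }

          BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
          {
              if (reason == DLL_PROCESS_DETACH) {
                  /* release only what the DLL itself allocated */
              }
              return TRUE;
          }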
  14. Well. Doesn't sound too bad. 3 people should be able to support a number of production environments. You have a predictable time-scale for implementation that can be planned for and you use an iterative life cycle. Which one of your team came from production?
  15. I've been in the same boat many times. The real problem (as I saw it) was that I was at the end of the line, so once something was achieved it was then "how do we test it? Let's write some software!" It was reactive rather than proactive development. After all, "it's ONLY software and that takes 2 minutes... right?" Unfortunately that kind of thinking takes a long time to change, and it is really the domain of a "Test Engineer" rather than a "Software Programmer", since a test engineer has detailed knowledge of the products and how to test them from a very early stage, and is privy to spec changes very early on. Sounds like "departmental expansion" is the route. You are the bottleneck, so you need resources to overcome it. Are you the only programmer?
  16. Can't you use an equivalent card that is better supported?
  17. They have an English version (top right of the page), but a quick search didn't reveal anything, and without going through their entire catalogue... You can find the memory location and slot information from Device Manager. But that probably won't help much, since under Windows (you are using Windows?) direct access to memory-mapped IO is not possible without kernel-mode drivers, so it probably isn't even recognised. But I would be very surprised if it was memory mapped. Are you sure it's not an ISA card?
  18. As a generality/stereotype... yes. But not because of what he probably thinks. Is there a full video?
  19. Yes. Welcome to the real world. But OOP makes that easy, right? Sorry, couldn't resist. They probably need 5-minute tools (as I call them): discardable software that doesn't come under formal control, is quick to implement (5-30 mins) and is usually provided by one of the members of the department that "likes" programming. Do you have anyone like that? As an example: one of our machines was playing up. We thought it was temperature related, so we decided to monitor the temperature. I took one of the graph examples, replaced the sig-gen VI with a DAQ one and added a save to file. It took 5 minutes max. I then compiled it, copied it to the machine and pressed the run arrow (no fancy user interface, hard-coded DAQ channel) and we all went home. Next day, we came in and analysed the file, found the fault, ran the logger again to check and, once everything was fine, removed it. It wasn't part of the "real software". It wasn't meant to be re-used. It was just a quick knock-up tool to log data for that specific scenario.
  20. So he is saying the internal processing already accounted for the gain in the reading, which you negated by including it in your calculation. Sounds familiar.
  21. Well, I'm no DSP expert, but that seems a bit simplistic, possibly a rule of thumb? What do they mean by digitised signal accuracy? If you mention accuracy to me, I think in terms of a compound of additive errors (as you can see from your calculation example, which is derived in terms of temperature, reading and offset). I'm aware of aperture, quantization and clock errors for ADCs. Possibly he/she is referring to those in a general way. But those are mainly measured in bits rather than voltage, so it depends on your range rather than gain (a rough example is below). What exactly are you trying to get to? You have the measurement accuracy of your system. You have the codeword size. These, I can understand, would be important to you for measuring temperature. Are you trying to break down the accuracy into every single error contributor in the system? If so, this could be a very, very long thread.
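      For example (numbers picked purely for illustration, not your device): for an ideal 16-bit ADC the quantization step is just the range divided by 2^16, so

          1 LSB on a ±10 V range = 20 V / 65536 ≈ 305 µV   (worst-case quantization error ≈ ±153 µV)
          1 LSB on a ±1 V range  =  2 V / 65536 ≈ 30.5 µV

      Same number of bits, ten times smaller error in volts, which is why I say it depends on the range rather than the gain.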
  22. Your input value is still 0.001203. Gain is not included in this calculation, only the reading, which already has the gain applied by the internal processing of the device. This is a "black-box" calculation. Consequently, your calculated value is in error by a factor of 100 (a short worked illustration is below).
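      Spelled out with the numbers in this thread (taking the gain to be the 100 already mentioned above):

          input value (gain already handled inside the device)  = 0.001203
          input value × 100 (applying the gain a second time)   = 0.1203   <- the value flagged as incorrect

      which is exactly the factor-of-100 discrepancy.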
  23. Nothing unusual there. PayPal has some very questionable, if not downright illegal, policies. Paypal Sucks