Everything posted by ShaunR

  1. If it was in Oz I probably would have
  2. Weird. Upload fails if I change from quick to full edit. But straight reply is fine.
  3. Take a look at Data Client.vi and Data Server.vi in the NI examples. It uses 1 channel. The client sends back the letter Q to the server (on the same connection) to stop the server sending data. Oh. And you can get the IP address by using "IpTostr" and "StrToIP" instead of executing IPconfig and formatting the result. (I'd post a picture, but for some reason image uploading is failing)
  4. Well. They certainly look OK to me
  5. I fail to see where, in Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + System Noise + Temperature Drift], gain is used, since it is a sub-component of "Reading". I took your word on the 100+5.14 since I didn't have that info (neither could I find the 28,9 + 2,75 in the spec pages you pointed me to, which is where the 70 mV lies, if that is the "system noise" and offset). But it was glaringly obvious that 0.1203 was incorrect. Perhaps I should have said "about 100". But you have an answer you are happy with, so that's good.
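     Just to illustrate how the terms stack up (the numbers below are made up for illustration only, not the values from this thread):

     Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + System Noise + Temperature Drift]
                       = +/-[(1 V x 0.03%) + 20 uV + 10 uV + 5 uV]
                       = +/-(300 uV + 35 uV) = +/-335 uV

     The gain never appears explicitly; it is already folded into the reading.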
  6. You are quite right. It is the synergy between their hardware and the software (sometimes we forget LabWindows) that makes them the obvious choice. And one of the main reasons LabVIEW is as successful as it is is that it turns a software engineer into a systems engineer (much more useful). However, if all you need is a dumb remote analogue or digital device then the cost of cRIO or FieldPoint cannot be justified ($2000-$4000) against a $200 Ethernet device from another well known manufacturer. But having said that, I think it has more to do with confidence and experience than anything else. I am comfortable interfacing to anything in any language (but I will fight like buggery to use LabVIEW). If someone has only used LabVIEW and only knows LabVIEW products, then it's a low-risk, sure bet.
  7. The most common cause (I've found) of this behaviour is that memory allocated by LabVIEW (i.e. outside the DLL) is freed inside the DLL. When the function returns, the original pointer LabVIEW used for allocation is non-existent. If the DLL does not return an error exit code, LabVIEW assumes everything was OK and attempts to use it again (I think). A normal app would show you a GPF, but LabVIEW is a bit more robust than that (usually) and normally gives an error. But it depends how catastrophic it was. You probably need exception handling for your DLL, so that any GPFs or nasty C stuff that breaks your DLL still cleanly returns to LabVIEW; this is usually done in the DLL_PROCESS_DETACH of DllMain. This will mean that at least LabVIEW will stay around for you to debug the DLL to find the root cause. However, if the error affects the program pointer, then nothing short of fixing the problem will suffice. Rolf is the expert on this kind of stuff.
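     A per-call variant of that idea, as a minimal sketch (MSVC-style C, hypothetical function name and signature): wrap the body of each exported function in structured exception handling so a GPF inside the DLL comes back to LabVIEW as an error code instead of taking the process down.

         #include <windows.h>

         /* Hypothetical export: the real processing goes inside the __try block. */
         __declspec(dllexport) int DoWork(double *data, int len)
         {
             __try {
                 for (int i = 0; i < len; i++)
                     data[i] *= 2.0;            /* stand-in for the real work */
                 return 0;                      /* success */
             }
             __except (EXCEPTION_EXECUTE_HANDLER) {
                 return -1;                     /* report the failure to LabVIEW */
             }
         }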
  8. Well. Doesn't sound too bad. 3 people should be able to support a number of production environments. You have a predictable time-scale for implementation that can be planned for and you use an iterative life cycle. Which one of your team came from production?
  9. I've been in the same boat many times. The real problem (as I saw it) was that I was end-of-line, so once something was achieved it was then "how do we test it? Let's write some software!" It was reactive rather than proactive development. After all, "it's ONLY software and that takes 2 minutes.... right?" Unfortunately that kind of thinking takes a long time to change and is really the domain of a "Test Engineer" rather than a "Software Programmer", since a test engineer has detailed knowledge of the products and how to test them from a very early stage and is privy to spec changes very early on. Sounds like "departmental expansion" is the route. You are the bottleneck, so you need resources to overcome it. Are you the only programmer?
  10. Can't you use an equivalent card that is better supported?
  11. They have an English version (top right of the page) but a quick search didn't reveal anything, and without going through their entire catalogue....... You can find the memory location and slot information from Device Manager. But that probably won't help much since under Windows (you are using Windows?) direct memory access to memory-mapped IO is not possible without "kernel mode" drivers, so it probably isn't even recognised. But I would be very surprised if it was memory mapped. You sure it's not an ISA card?
  12. As a generality/stereotype.... yes. But not because of what he probably thinks. Is there a full video?
  13. Yes. Welcome to the real world. But OOP makes that easy, right? Sorry, couldn't resist. They probably need 5-minute tools (as I call them): discardable software that doesn't come under formal control, is quick to implement (5-30 mins) and is usually provided by one of the members of the department that "likes" programming. You have anyone like that? As an example: one of our machines was playing up. We thought it was temperature related, so we decided to monitor the temperature. So I took one of the graph examples, replaced the sig gen VI with a DAQ one and added a save to file. It took 5 mins max. I then compiled it, copied it to the machine and pressed the run arrow (no fancy user interfaces, hard-coded DAQ channel) and we all went home. Next day, came in and analysed the file, found the fault, ran the logger again to check and, once everything was fine, removed it. It wasn't part of the "real software". It wasn't meant to be re-used. It was just a quick knock-up tool to log data for that specific scenario.
  14. So. He is saying the internal processing already accounted for the gain in the reading which you negated by including it in your calculation. Sounds familiar
  15. Well. I'm no DSP expert, but that seems a bit simplistic, possibly a rule of thumb? What do they mean by digitised signal accuracy? If you mention accuracy to me I think in terms of a compound of additive errors (as you can see from your calculation example, which is derived in terms of temperature, reading and offset). I'm aware of aperture, quantization and clock errors for ADCs. Possibly he/she is referring to those in a general way. But those are mainly measured in bits rather than voltage, so it depends on your range rather than gain. What exactly are you trying to get to? You have the measurement accuracy of your system. You have the codeword size. These I can understand would be important to you for measuring temperature. Are you trying to break down the accuracy into every single error contributor in the system? If so, this could be a very, very long thread
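     For what it's worth, the quantization contribution really is a range thing rather than a gain thing (standard textbook relations, not from any particular device spec):

     quantization step q = range / 2^N   (e.g. 10 V / 2^16 = 153 uV per code)
     RMS quantization noise = q / sqrt(12), i.e. about 0.29 q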
  16. Your input value is still 0.001203. Gain is not included in this calculation, only the reading, which already has the gain applied by the internal processing of the device. This is a "black-box" calculation. Consequently your calculated value is in error by a factor of 100.
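     In other words (assuming the gain of 100 from earlier in the thread):

     Reading = Gain x Input = 100 x 0.001203 = 0.1203

     and it is 0.1203, not 0.001203, that belongs in the accuracy calculation.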
  17. Nothing unusual there. Paypal have some very questionable, if not downright illegal, policies. Paypal Sucks
  18. Here's fine.
  19. 1/(65536 x 10) = 1.53E-6 V (1.53 uV), so yes, the codeword is correct. Not quite. Thermocouples are non-linear. The K-type is especially wobbly around 0°C. You need to use thermocouple tables (or polynomial approximations) to calculate the temperature for a particular voltage. But for the K type your analysis is correct, though only for that temperature. Don't assume that it will remain at that as you increase and decrease in temperature. Thermocouples produce very small voltages. You can see this from your thermocouple range: (1.4 - 0.2)/30 = 0.04 mV/°C. This is why they use characterisation tables rather than a single linear approximation. It's very important to minimise errors and introduce compensation if possible if you are looking for accuracy. Take a long hard look at your hardware spec (noise, temperature stability etc) and make sure it is capable.
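     If you go the polynomial-approximation route rather than table lookup, the shape is simple enough. A minimal C sketch (the coefficients below are placeholders; the real inverse coefficients for your thermocouple type and voltage range come from the NIST ITS-90 tables, one set per range):

         #include <stdio.h>

         /* Horner evaluation of T = c[0] + c[1]*mv + c[2]*mv^2 + ... */
         static double tc_mv_to_temp(double mv, const double *c, int n)
         {
             double t = 0.0;
             for (int i = n - 1; i >= 0; i--)
                 t = t * mv + c[i];
             return t;
         }

         int main(void)
         {
             /* placeholder coefficients: roughly 25 degC per mV, nothing more */
             const double coeffs[] = { 0.0, 25.0 };
             printf("%.1f degC\n", tc_mv_to_temp(1.0, coeffs, 2));
             return 0;
         }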
  20. Manufacturer? Model? Type?
  21. Tell me what you need and I'll tell you how to get along without it

    2. ShaunR

      ....and change the things I cannot accept

    3. Grampa_of_Oliva_n_Eden

      "That's not an arguement, you are simply contradicting me!"

      "No I'm not"

      "Yes you are!"

      ...

      Monty Python's Flying Circus

    4. ShaunR

      Come and see the violence inherent in the system! Help, help! I'm being repressed!

  22. You already have most of the information to calculate the code width. Look at the spec for your device again and find the resolution. code width = range/(gain x resolution)
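     A worked example (assuming the 1 V range, gain of 10 and 16-bit resolution discussed elsewhere in this thread):

     code width = range/(gain x resolution) = 1/(10 x 65536) = 1.53E-6 V, i.e. about 1.5 uV per code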
  23. Touché. It was just to show branching; what the numbers are is irrelevant. That's why I don't understand your difficulty with reading one value and showing another. I could just as easily have read an int and displayed a double. But anyway.......

      Just saving the ADC won't give you more precision. In fact, the last bit (or more) is probably noise. It's the post-processing that gives a more accurate reading. You usually gain 1/2 a bit of precision, and with post-processing like interpolation and averaging, significant improvements can be made (this is quite a good primer). What's the obsession with saving the ADC?

      Now. From your n and m descriptions, I'm assuming you're thinking n x m configurations (is that right?). But you don't care what the sensor is, only that it has an analogue output which you can measure. You can't log data from n x m devices simultaneously because you only have m channels. So you only have to configure m channels (or the engineers do, at least). If you allow them to make a new task every time they change something, the list of tasks in MAX very quickly becomes unmanageable. We use 192 digital IOs, for example. Can you imagine going through MAX and creating a task for each one?

      What you are describing is a similar problem to one we have with part numbers. It's a management issue rather than a programming one. We (for example) may have 50 different part numbers, all with different test criteria (different voltage/current measurements, excitation voltages, pass/fail criteria etc, etc). But they all use the same hardware of course, otherwise we couldn't measure it. So the issue becomes: how can we manage lots of different settings for the same hardware?

      Well. One way is a directory structure where each directory is named with the part number and contains any files required by the software (camera settings, OCR training files, DAQ settings, ini-files, pass/fail criteria.... maybe 1 file, maybe many). The software only needs to read the directory names and hey presto! Drop-down list of supported devices (sketched below). New device? New directory. You can either copy the files from another directory and modify, or create a fancy UI that basically does the same thing. Need back-ups? Zip the lot. Need change tracking? SVN! Another is a database, which takes a bit more effort to interface to (some think it's worth it), but the back-end for actually applying the settings is identical. And once you've implemented it you can do either just by using a case statement.

      What you will find with the NI products is that there really aren't that many settings to change. Maybe between current loop/voltage and maybe the max/min, and you will be able to measure probably 99% of analogue devices. Do they really need to change from a measurement of 0-1 V when 0-5 V will give near enough the same figures (do they need uV accuracy? Or will mV do! Don't ask them, you know what the answer will be). Do we really need to set a 4-20 mA current loop when we can use 0-20 (it's only an offset start point, after all)? Indeed. And I would much rather spend my programming time making sure they can play with as little as possible, because when they bugger it up, your software will be at fault. You'll then spend the next week defending it before they finally admit that maybe they did select the wrong task
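     A minimal sketch of the "read the directory names" idea (Win32 C, hypothetical path; in LabVIEW terms it is just a List Folder call): each sub-directory under the part-number folder becomes one entry in the drop-down.

         #include <stdio.h>
         #include <windows.h>

         int main(void)
         {
             WIN32_FIND_DATAA fd;
             /* hypothetical layout: one sub-directory per part number */
             HANDLE h = FindFirstFileA("C:\\TestConfig\\PartNumbers\\*", &fd);
             if (h == INVALID_HANDLE_VALUE)
                 return 1;
             do {
                 if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) &&
                     fd.cFileName[0] != '.')
                     printf("%s\n", fd.cFileName);   /* add to the drop-down list */
             } while (FindNextFileA(h, &fd));
             FindClose(h);
             return 0;
         }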
  24. Use the JavaScript "html_entity_decode" function: html_entity_decode(string). Normal chars will remain unaffected, but entities like &amp; and &lt; will be converted. Damn. Now I'm a text heretic
  25. Wrong site. It's rep-points here. That's what I mean. These are mutually exclusive? Yes, of course you can. But it depends if it's the horse driving the cart or the other way round. As soon as you start putting code in that needs to read MAX's config so you know how to interpret the results, you might as well just make it a text file that they can edit in Notepad or a spreadsheet program, and when you load it you already have all the information you need without having to read it all from MAX. Otherwise you have to first find out what tasks there are and, depending on what has been defined (digital? AI? AO?), put switches in your code to handle the properties of the channels. However, if you create the channels on the fly, you don't need to do all that. It also has the beneficial side effect that you can do things like switch from a "read file.vi" to a "read database.vi" (oops, I meant Read Config Class ) with little effort. However, if they are just "playing" then you are better off telling them to use the "Panels" in MAX.