Everything posted by ShaunR
-
I've been in the same boat many times. The real problem (as I saw it) was that I was at the end of the line, so once something was achieved it was then "how do we test it? Let's write some software!" It was reactive rather than proactive development. After all, "it's ONLY software and that takes 2 minutes.... right?" Unfortunately that kind of thinking takes a long time to change and is really the domain of a "Test Engineer" rather than a "Software Programmer", since a test engineer has detailed knowledge of the products and how to test them from a very early stage and is privy to spec changes very early on. Sounds like "departmental expansion" is the route. You are the bottleneck, so you need resources to overcome it. Are you the only programmer?
-
Can't you use an equivalent card that is better supported?
-
They have an English version (top right of the page) but a quick search didn't reveal anything, and without going through their entire catalogue....... You can find the memory location and slot information from Device Manager. But that probably won't help much since under Windows (you are using Windows?) direct memory access to memory-mapped IO is not possible without "Kernel Mode" drivers, so it probably isn't even recognised. But I would be very surprised if it was memory mapped. You sure it's not an ISA card?
-
As a generality/stereotype....yes. But not because of what he probably thinks. Is there a full video?
-
Yes. Welcome to the real world. But OOP makes that easy, right? Sorry, couldn't resist. They probably need 5-minute tools (as I call them). Discardable software that doesn't come under formal control, is quick to implement (5-30 mins) and is usually provided by one of the members of the department that "likes" programming. You have anyone like that? As an example: one of our machines was playing up. We thought it was temperature related, so we decided to monitor the temperature. I took one of the graph examples, replaced the sig-gen VI with a DAQ one and added a save-to-file. It took 5 mins max. I then compiled it, copied it to the machine, pressed the run arrow (no fancy user interface, hard-coded DAQ channel) and we all went home. Next day, we came in and analysed the file, found the fault, ran the logger again to check, and once everything was fine, removed it. It wasn't part of the "real software". It wasn't meant to be re-used. It was just a quick knock-up tool to log data for that specific scenario.
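For flavour, here is roughly what such a throwaway logger looks like in text form. This is a minimal sketch using the nidaqmx Python bindings; the device and channel names ("Dev1/ai0") and the sample rate are placeholders, and a LabVIEW user would of course just wire up the equivalent DAQ VIs:

```python
# quick_log.py - disposable temperature logger; deliberately NOT "real software"
import csv
import time

import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 10.0    # Hz - plenty for watching a slow thermal drift
CHANNEL = "Dev1/ai0"  # hard-coded on purpose: this is a 5-minute tool

with nidaqmx.Task() as task, open("temperature.csv", "w", newline="") as f:
    task.ai_channels.add_ai_voltage_chan(CHANNEL)
    task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    writer = csv.writer(f)
    writer.writerow(["timestamp", "volts"])
    while True:  # run overnight; kill it in the morning
        for sample in task.read(number_of_samples_per_channel=10):
            writer.writerow([time.time(), sample])
        f.flush()  # don't lose data if someone pulls the plug
```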
-
So he is saying the internal processing already accounted for the gain in the reading, which you negated by including it in your calculation. Sounds familiar
-
Well, I'm no DSP expert, but that seems a bit simplistic; possibly a rule of thumb? What do they mean by "digitised signal accuracy"? If you mention accuracy to me, I think in terms of a compound of additive errors (as you can see from your calculation example, which is derived in terms of temperature, reading and offset). I'm aware of aperture, quantisation and clock errors for ADCs. Possibly he/she is referring to those in a general way. But those are mainly measured in bits rather than voltage, so it depends on your range rather than gain. What exactly are you trying to get to? You have the measurement accuracy of your system. You have the codeword size. These I can understand would be important to you for measuring temperature. Are you trying to break down the accuracy into every single error contributor in the system? If so, this could be a very, very long thread
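To show what "a compound of additive errors" means in practice, here is a toy error budget. Every number below is invented for illustration (only the ~1.53 uV codeword comes from elsewhere in this thread); the point is just that the contributions stack, either as a straight worst-case sum or as a root-sum-square:

```python
# Toy error budget: each term is an assumed worst-case contribution in volts.
gain_error   = 50e-6        # illustrative: gain accuracy over the range
offset_error = 20e-6        # illustrative: input offset
noise        = 10e-6        # illustrative: RMS noise referred to input
quantisation = 1.53e-6 / 2  # half a codeword (codeword from this thread)

worst_case = gain_error + offset_error + noise + quantisation  # straight sum
rss = (gain_error**2 + offset_error**2 + noise**2 + quantisation**2) ** 0.5

print(f"worst case: {worst_case * 1e6:.2f} uV, RSS: {rss * 1e6:.2f} uV")
```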
-
Your input value is still 0.001203. Gain is not included in this calculation, only the reading, which already has the gain applied by the internal processing of the device. This is a "black-box" calculation. Consequently your calculated value is in error by a factor of 100.
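A minimal sketch of the point, assuming a gain of 100 (inferred from the factor-of-100 discrepancy; whether the stray gain was multiplied or divided doesn't change the size of the error):

```python
gain = 100          # assumed: consistent with the factor-of-100 error above
reading = 0.001203  # the device reading; internal processing already applied gain

value_black_box = reading         # correct: treat the reading as-is
value_with_gain = reading * gain  # incorrect: gain applied a second time

print(value_with_gain / value_black_box)  # -> 100.0, the size of the discrepancy
```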
-
Nothing unusual there. PayPal have some very questionable, if not downright illegal, policies. Paypal Sucks
-
Here's fine.
-
1/(65536 x 10) = 1.53E-6 V (1.53 uV), so yes, the codeword is correct. Not quite. Thermocouples are non-linear. The K-type is especially wobbly around 0°C. You need to use thermocouple tables (or polynomial approximations) to calculate the temperature for a particular voltage. For the K-type your analysis is correct, but only for that temperature. Don't assume that it will remain at that as you increase and decrease in temperature. Thermocouples produce very small voltages. You can see this from your thermocouple range: (1.4 - 0.2) mV / 30°C = 0.04 mV/°C. This is why they use characterisation tables rather than an approximation. It's very important to minimise errors and introduce compensation if possible if you are looking for accuracy. Take a long, hard look at your hardware spec (noise, temperature stability etc) and make sure it is capable
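To make the "tables or polynomial approximations" point concrete, here is a sketch of the inverse polynomial for a K-type thermocouple using the published NIST ITS-90 coefficients for the 0 °C to 500 °C range. Cold-junction compensation is deliberately ignored for brevity, which a real measurement cannot do:

```python
# NIST ITS-90 inverse coefficients for type K, 0 to 500 degC (voltage in uV).
D = [0.0, 2.508355e-2, 7.860106e-8, -2.503131e-10, 8.315270e-14,
     -1.228034e-17, 9.804036e-22, -4.413030e-26, 1.057734e-30, -1.052755e-35]

def k_type_temperature(volts: float) -> float:
    """Convert a K-type thermocouple voltage (in volts) to degC."""
    uv = volts * 1e6  # the polynomial expects microvolts
    return sum(d * uv**i for i, d in enumerate(D))

# Sanity check: 1 mV on a type K is ~25 degC per the NIST tables.
print(k_type_temperature(0.001))  # -> approximately 24.98
```

Note the answer is not simply 1 mV / 0.04 mV/°C = 25.0 °C; the 0.04 mV/°C slope is only a local approximation, which is exactly why the tables matter.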
-
Manufacturer? Model? Type?
-
Tell me what you need and I'll tell you how to get along without it
-
"That's not an arguement, you are simply contradicting me!"
"No I'm not"
"Yes you are!"
...
Monty Python's Flying Circus
-
You already have most of the information to calculate the code width. Look at the spec for your device again and find the resolution (as a number of codes, i.e. 2^bits: 65536 for a 16-bit converter). code width = range/(gain x resolution)
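Plugging in numbers consistent with the ones quoted in this thread (a 10 V range, a gain of 100 and a 16-bit converter are my assumptions):

```python
range_v = 10.0        # assumed device input range in volts
gain = 100            # assumed amplifier gain
resolution = 2 ** 16  # 16-bit converter -> 65536 codes

code_width = range_v / (gain * resolution)
print(code_width)  # -> 1.5258789e-06 V, i.e. the ~1.53 uV codeword above
```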
-
Touché. It was just to show branching. What the numbers are is irrelevant; that's why I don't understand your difficulty with reading one value and showing another. I could just as easily have read an int and displayed a double. But anyway.......

Just saving the ADC won't give you more precision. In fact, the last bit (or more) is probably noise. It's the post-processing that gives a more accurate reading. You usually gain 1/2 a bit of precision, and with post-processing like interpolation and averaging, significant improvements can be made (this is quite a good primer). What's the obsession with saving the ADC?

Now. From your n and m descriptions, I'm assuming you're thinking n x m configurations (is that right?). But you don't care what the sensor is, only that it has an analogue output which you can measure. You can't log data from n x m devices simultaneously because you only have m channels. So you only have to configure m channels (or the engineers do at least). If you allow them to make a new task every time they change something, the list of tasks in MAX very quickly becomes unmanageable. We use 192 digital IOs, for example. Can you imagine going through MAX and creating a Task for each one?

What you are describing is a similar problem we have with part numbers. It's a management issue rather than a programming one. We (for example) may have 50 different part numbers, all with different test criteria (different voltage/current measurements, excitation voltages, pass-fail criteria etc, etc). But they all use the same hardware, of course, otherwise we couldn't measure it. So the issue becomes: how can we manage lots of different settings for the same hardware?

Well. One way is a directory structure where each directory is named with the part number and contains any files required by the software (camera settings, OCR training files, DAQ settings, ini-files, pass/fail criteria....maybe 1 file, maybe many). The software only needs to read the directory names and hey presto! Drop-down list of supported devices. New device? New directory. You can either copy the files from another directory and modify, or create a fancy UI that basically does the same thing. Need back-ups? Zip the lot. Need change tracking? SVN! Another is a database, which takes a bit more effort to interface to (some think it's worth it), but the back-end for actually applying the settings is identical. And once you've implemented it you can do either just by using a case statement. (See the sketch below.)

What you will find with the NI products is that really there aren't that many settings to change. Maybe between current loop/voltage, and maybe the max/min, and you will be able to measure probably 99% of analogue devices. Do they really need to change from a measurement of 0-1V when 0-5V will give near enough the same figures (do they need uV accuracy? Or will mV do! Don't ask them, you know what the answer will be ). Do we really need to set a 4-20mA current loop when we can use 0-20 (it's only an offset start point after all)?

Indeed. And I would much rather spend my programming time making sure they can play with as little as possible, because when they bugger it up, your software will be at fault. You'll then spend the next week defending it before they finally admit that maybe they did select the wrong task
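As a sketch of the directory-per-part-number idea in Python (the root path and file names are invented for illustration):

```python
from pathlib import Path

# Hypothetical root; one sub-directory per part number.
PARTS_ROOT = Path(r"C:\TestSystem\Parts")

def supported_parts() -> list[str]:
    """The drop-down list is just the directory names: no registry, no MAX."""
    return sorted(p.name for p in PARTS_ROOT.iterdir() if p.is_dir())

def settings_file(part_number: str, name: str) -> Path:
    """New part? New directory. Need back-ups? Zip the lot. Tracking? SVN."""
    return PARTS_ROOT / part_number / name

# e.g. load the DAQ settings for whatever the operator picked:
# daq_cfg = settings_file("PN-12345", "daq_settings.ini")
```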
-
Use the PHP "html_entity_decode" function. html_entity_decode(string) Normal chars will remain unaffected, but entities like &amp; and &lt; etc. will be converted. Damn. Now I'm a text heretic
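(For anyone doing the same thing outside PHP: a one-liner sketch of the equivalent in Python's standard library, purely for illustration.)

```python
from html import unescape  # Python's stdlib equivalent of PHP's html_entity_decode

print(unescape("3 &lt; 4 &amp; 4 &gt; 3"))  # -> "3 < 4 & 4 > 3"
```
-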
Wrong site. It's rep-points here That's what I mean. These are mutually exclusive? Yes, of course you can. But it depends if it's the horse driving the cart or the other way round. As soon as you start putting code in that needs to read MAX's config so you know how to interpret the results, you might as well just make it a text file that they can edit in Notepad or a spreadsheet program, and when you load it you already have all the information you need without having to read it all from MAX. Otherwise you have to first find out what tasks there are and, depending on what has been defined (digital? AI? AO?), put switches in your code to handle the properties of the channels. However, if you create the channels on the fly, you don't need to do all that. It also has the beneficial side effect that you can do things like switch from a "Read File.vi" to a "Read Database" VI (oops, I meant Read Config Class ) with little effort. However, if they are just "playing" then you are better off telling them to use the "Panels" in MAX.
-
Only 'cos you haven't written it......yet
-
All devices have different address maps for the PV. You will need to read the manual.
-
Only in LabVIEW 2010. The format is https:// and "verify" must be set to "TRUE"
-
A lot of the information is stored in the registry. So a quick and dirty way would be to find it there.
-
Or un-check the "show warnings" when viewing this library. Interestingly, if you change something (like mechanical action or the unbundle names), the warnings disappear............until you save it. Think you may have found a feature.
-
OK. Here are some of my FOR (nots) for using MAX:

- "MAX is never installed" - it just bloats the installation, and if it crashes it will take your whole measurement system down, and you will get the telephone call, not NI.
- "MAX already has an interface" - which doesn't fit with either our or our customers' "corporate" style requirements for software (logos etc). And having a GUI is normally easier for a customer to navigate, and that's the last thing we want, since they are neither trained nor qualified to do these operations and we cannot poka-yoke it.
- "MAX includes the ability to handle Scales of different types (linear, non-linear etc...)" - but this cannot be updated from integrated databases and other 3rd-party storage.
- "Communicating with MAX is really easy using the Task-based API (through PNs etc...)" - because MAX sits on top of DAQmx, so what we are really doing is configuring DAQmx.
- "(So far) Clients seem to like using MAX" - do they have an alternative?
- "It's easy to back up your configuration and port it over to another PC etc." - as it is with any other file-based storage, except text-based files you can track in SVN.

And some more....

- Have to support more 3rd-party software for which there is no source, and no opportunity to add defensive code for known issues.
- Requires a MAX installation to do trivial changes, as opposed to software available on most office machines (such as Excel, Notepad etc).
- Does not have the ability to easily switch measurement types, scaling etc to do multiple measurements with the same hardware.
- MAX requires firewall access (I think), and this can be an issue with some anally retentive IT departments that decide to push their policies on your system.
- As mentioned above: cannot integrate 3rd-party storage such as SQL, Access or SQLite databases (mentioned again because it is a biggie). Or indeed automated outputs from other definitions (like specs).
- MAX assumes you have a mouse and keyboard. It's very difficult to use with touch-screens operated by gorillas with hands like feet.

But I think our customers are probably a bit different. They don't want to "play", they just want it to work! And work 7 days a week, 24 hrs a day. We even go to great lengths to replace the "Explorer" shell and start-up logo so operators aren't aware that it's even Windows.

Our system is quite sophisticated now though. It can configure hardware on different platforms using various databases, text files, specification documents etc, and it can be invoked at any time to reconfigure for different tests if there are different batches/parts. It's probably the single most re-used piece of code across projects (apart from perhaps the Force Directory VI)... I tend to view MAX in a similar vein to Express VIs. But that's not to say I never use it.
-
Give it a bash. I think you'll like it (drop it below 10ms or try about 10MB of data and see what happens). Then benchmark it against the Dispatcher
-
Agree with most of that. But especially the above. We usually create the channel associations at run-time in a similar manner to this:
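(The original post attached a LabVIEW snippet at this point. As a rough stand-in, here is the same run-time channel creation idea expressed with the nidaqmx Python bindings; the channel names and limits are invented for illustration:)

```python
import nidaqmx

# Hypothetical per-part configuration, e.g. loaded from a part-number directory.
channels = [
    {"physical": "Dev1/ai0", "min": -10.0, "max": 10.0},
    {"physical": "Dev1/ai1", "min": 0.0, "max": 5.0},
]

task = nidaqmx.Task("runtime_channels")
for ch in channels:
    # Channels are created on the fly from the config -- no pre-defined MAX tasks.
    task.ai_channels.add_ai_voltage_chan(
        ch["physical"], min_val=ch["min"], max_val=ch["max"]
    )

readings = task.read()  # one sample per configured channel
task.close()
```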