Posts posted by ShaunR
-
I have never understood those people who always go for the cheapest DAQ just because it's the cheapest.
IMHO, the most powerful thing about NI is not their hardware and not LabVIEW (don't get me wrong, these are both fantastic), it's their drivers - the connection between the IDE and the hardware.
Again, IMHO, for the extra cost upfront of going with more expensive hardware, you are going to save a bucket load of time (and therefore money) by having a reliable driver set, e.g. standardization/familiarity, support, flexibility, upgrades etc...
So a lot of the time (for what we do) it makes sense to choose NI. (However, we are also an Alliance partner, so we are biased.)
You are quite right. It is the synergy between their hardware and the software (sometimes we forget Labwindows) that makes them the obvious choice. And one of the main reasons Labview is as successful as it is is because it turns a software engineer into a systems engineer (much more useful). However, if all you need is a dumb remote analogue or digital device, then the cost of cRIO or FieldPoint cannot be justified ($2000-$4000) against a $200 ethernet device from another well known manufacturer.
But having said that, I think it has more to do with confidence and experience than anything else. I am comfortable interfacing to anything in any language (but I will fight like buggery to use Labview). If someone has only used Labview and only knows Labview products, then it's a low-risk, sure bet.
-
The most common cause (I've found) of this behaviour is that memory allocated by Labview (i.e. outside the DLL) is freed inside the DLL. When the function returns, the original pointer Labview used for the allocation is no longer valid. If the DLL does not return an error exit code, Labview assumes everything was OK and attempts to use it again (I think). A normal app would show you a GPF, but Labview is a bit more robust than that (usually) and normally gives an error. But it depends how catastrophic it was.
You probably need exception handling in your DLL, so that any GPFs or nasty C stuff that breaks your DLL still cleanly returns to Labview. This is usually done in the DLL_PROCESS_DETACH of DllMain. It will mean that at least Labview will stay around for you to debug the DLL to find the root cause.
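For illustration only, here is one minimal way to do that kind of guarding in C, wrapping the exported call itself in a Windows structured exception handler (the function name and body are made up, not from any real DLL):

```c
#include <windows.h>

/* Exported function as called from a LabVIEW Call Library Function Node.
 * The __try/__except block stops an access violation (GPF) inside the DLL
 * from taking the whole LabVIEW process down; instead the caller gets an
 * error code it can check on the diagram.
 */
__declspec(dllexport) int ProcessBuffer(double *data, int len)
{
    __try {
        for (int i = 0; i < len; i++)
            data[i] *= 2.0;             /* the real work goes here */
        return 0;                       /* success */
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        return (int)GetExceptionCode(); /* e.g. 0xC0000005 = access violation */
    }
}
```

The same idea extends to clean-up in DllMain's DLL_PROCESS_DETACH, as mentioned above; the point is simply that the DLL, not Labview, catches the blow-up.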
However, if the error affects the program pointer, then nothing short of fixing the problem will suffice.
Rolf is the expert on this kind of stuff.
-
There are some engineers who think like that, but most understand that there are layers of complexity to what they're asking for. When someone tells me what they want and then slip in a, "that should only take a couple weeks," I'm very blunt in my response. "*You* define your feature list and priorities. *I* define the delivery date."
There are two LV programmers here now and another one starting on Monday. Our group has been the bottleneck at times in the past, primarily because of poor planning and unrealistic expectations. We're learning how to get better.
These days I put off almost all UI development until the end of the project and focus their attention on the core functionality they need. We've spent too much time in the past rewriting user interfaces to accommodate the latest change requests. I also make the test engineers directly responsible for what features get implemented. How do I do that? I develop in two-week iterations. At the beginning of an iteration I sit down with the test engineers and their feature list and ask them what features they want me to implement over the next two weeks. If I think I can do it, I agree. If not, they have to reduce their request. (Usually I have one or two features that are the primary goals and several items that are secondary goals. Sometimes I get to the secondary goals, sometimes not.)
At the end of the iteration I deliver a usable, stand-alone functional component with a very simple UI that does exactly what was requested, no more. They can use it immediately as part of the chain of individual tools I'm giving them every two weeks. Hopefully at the end we have time to wrap it all up in a larger application and make it easier to use, but if we don't they still have the ability to collect the data they need. If we hit their deadline and something isn't implemented, it's because of their decisions, not mine.
So far this is working out very well, but it's still very new (I'm the only one doing it) so we'll see how it goes over time. (I talked about this a little more in this recent thread on the dark side.)
Well. Doesn't sound too bad. 3 people should be able to support a number of production environments. You have a predictable time-scale for implementation that can be planned for and you use an iterative life cycle. Which one of your team came from production?
-
Thank you. It's good to be wanted.
I knew some manual processing was going to be needed. The question was asking how other people dealt with the problem I was having. Jon suggested using the Configure Logging vi on the task. COsiecki suggested retrieving unscaled data and getting the scaling coefficients from the task. You suggested implementing everything as a configurable ini. All viable solutions. All with different strengths. All appreciated.
Easier, no doubt about it. Regardless of whether the app is implemented using OOP or procedural code, if they have to come to me for any little change it causes delays in the entire product development process.
I call them "throw aways." Unfortunately that isn't acceptable in this situation. Here's a bit more of an explanation of my environment:
I write test tools for a product development group. Early in the product's life cycle the dev team is evaluating all sorts of alternatives to decide which route to take. The alternatives can be entirely different components, different types of components, different combinations, different algorithms, etc. Anything about the details of the design can change. This process can take months. The tool I give them for qualification testing needs to be as flexible as possible while providing solid data that is easily comparable. If I or the test engineer is constantly going in to tweak the tool's source code, config file, etc., then it allows project managers and design engineers to more easily question the validity of results they don't particularly like.
In this particular project, part of component qualification is comparing the ADC signals to signals received from other sources (I2C, wireless capture, vision system, etc.) to cross-check the data from the various components. It also requires coordinating the motion of complex external actuators with the data collection. Using the external actuators and parsing the other data streams is beyond the programming capabilities of the test engineers. (And they don't have time for it anyway.)
As the product's design becomes solidified that kind of flexibility is deprecated. Design focus shifts from component qualification to product performance. Test efficiency and consistency become much more important as multiple samples are run through tests to obtain statistically significant data. Unfortunately the shift between component qualification and product testing is very blurry. There is no "we're done with component qualification, now let's build tools for product testing." The code written for qualification testing becomes the code used for product testing.
I do take shortcuts where I can during the component qualification. UI's are very simple wrappers of the module's* underlying functional code and are often set up as single shots. (Press the run arrow to execute instead of a start and stop button.) There isn't a lot of integration between the various functional modules. Error handling probably isn't robust enough. But the functional code of each module shouldn't change once I've written it.
(*Some of the modules we have finished or have in the pipe are an I2C collection actor object, an analog signal collection state machine, a sequencer for the external actuators, a wireless collection module, a few analysis modules, etc. Currently each of these function independently. The intent is to combine them all into a single app later on in the project.)
I've been in the same boat many times. The real problem (as I saw it) was that I was end-of-line, so once something was achieved it was then "how do we test it? Let's write some software!" It was reactive rather than proactive development. After all, "it's ONLY software and that takes 2 minutes.... right?" Unfortunately that kind of thinking takes a long time to change and is really the domain of a "Test Engineer" rather than a "Software Programmer", since a test engineer has detailed knowledge of the products and how to test them from a very early stage and is privy to spec changes very early on.
Sounds like "departmental expansion" is the route. You are the bottle-neck so you need resources to overcome it. Are you the only programmer?
-
Yes, I know that there is an English version, but no results! =)
Actually I'm not sure if it is ISA or not, but I plug it into the PCI slot!
OK, I'll read the doc and translate it for you!
I know that I have to make a DLL through which LabVIEW can read the slot, but I really have no idea how to make it!
Soon I'll send you all the info for this board!
all my regards!
Can't you use an equivalent card that is better supported?
-
This is the only info about this board and its sensor!
It is all in Russian! And this is my model: "АЦПВТ-14П-О2К"
http://www.rtkt.ru/c...s/datchiki.html
I know it is not easy to read, but can you help me find out how to get the PCI slot address and read all the registers from it? P.S. I have the addresses of the registers! That is why I ask how to make a program that reads data from the PCI slot, how to write code that can get in there, and after that how to access the other addresses from this one (which are on the board)?
all my regards!
They have an English version (top right of the page) but a quick search didn't reveal anything, and without going through their entire catalogue.......
You can find the memory location and slot information from Device Manager. But that probably won't help much, since under Windows (you are using Windows?) direct access to memory-mapped IO is not possible without "kernel mode" drivers, so it probably isn't even recognised. But I would be very surprised if it was memory mapped. You sure it's not an ISA card?
-
As a generality/stereotype.... yes. But not because of what he probably thinks. Is there a full video?
-
Quick responses... gotta run soon.
We're crossing signals somewhere in here. If I do a Read that returns a waveform, the data is post-scaled, which I don't want to save due to size and I don't want to truncate due to lost precision. If I do a Read that returns the unscaled I16 data, then I have to manually apply the arbitrary scaling factors before displaying it.
Yes. Welcome to the real world
Me too. Except that it's their job to play with all those things to try and break the products. I can't restrict what I allow them to do, or they come back every other day and say, "now I want to be able to do..." My tool becomes the blocking issue, and that's bad mojo for me. (My tools are used by test engineers in a product development group. They need to be able to handle whatever screwball changes the design engineers conjured up last night at the bar. These are not manufacturing test tools with well defined sequential test processes.)
But OOP makes that easy, right?
Sorry. couldn't resist.
They probably need 5 minute tools (as I call them). Discardable software that doesn't come under formal control, is quick to implement (5-30 mins) and usually provided by one of the members in the department that "likes" programming. You have anyone like that?
As an example: one of our machines was playing up. We thought it was temperature related, so we decided to monitor the temperature. I took one of the graph examples, replaced the sig gen vi with a DAQ one and added a save to file. It took 5 mins max. I then compiled it, copied it to the machine and pressed the run arrow (no fancy user interfaces, hard-coded DAQ channel) and we all went home. Next day, we came in and analysed the file, found the fault, ran the logger again to check, and once everything was fine, removed it. It wasn't part of the "real software". It wasn't meant to be re-used. It was just a quick knock-up tool to log data for that specific scenario.
-
So. He is saying the internal processing already accounted for the gain in the reading which you negated by including it in your calculation. Sounds familiar
-
Someone told me that in order to relate this digitized signal accuracy to the original signal, they divided it by the gain given by the SCXI-1120B module.
This is why there is a factor of 100 difference between the two.
Is this correct?
Well. I'm no DSP expert. But that seems a bit simplistic, possibly a rule of thumb? What do they mean by digitised signal accuracy? If you mention accuracy to me I think in terms of a compound of additive errors (as you can see from your calculation example, which is derived in terms of temperature, reading and offset). I'm aware of aperture, quantization and clock errors for ADCs. Possibly he/she is referring to those in a general way. But those are mainly measured in bits rather than voltage, so it depends on your range rather than gain.
What exactly are you trying to get to? You have the measurement accuracy of your system. You have the codeword size. These I can understand would be important to you for measuring temperature. Are you trying to break down the accuracy into every single error contributor in the system? If so, this could be a very, very long thread
-
OK! I am trying to calculate the overall uncertainty of the DAQ system. I know about the accuracy calculator on ni.com and I am trying to check my results using this tool.
So the calculation formula is: Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + System Noise + Temperature Drift]
In my measurements there is a temperature drift from 29-31°C, but the above formula is for environments from 15-35°C. So the temperature drift is zero.
At first I set the accuracy calculator to: DAQ device PCI-6034E and SCXI module none. Then I set, for example, 0.001203 V and an average of 100 readings. The result is the following:
According to the E Series User Manual (page 38, gain + resolution, PCI-6034E): Absolute Accuracy = 0.001203*0.0588/100 + (28.9 + 2.75)*10^-6 = 0.03235 mV, which is the same as the accuracy calculator.
When I set the accuracy calculator to: DAQ device PCI-6034E and SCXI module SCXI-1102B, I do not get the same results for the absolute accuracy of the DAQ device.
Accuracy Calculator:
Me: DAQ Absolute Accuracy = 0.1203*0.0588/100 + (100 + 5.04)*10^-6 = 0.175 mV, which is a very large number. I assume that the input value is 0.1203 because the gain of the SCXI-1120B is 100.
What am I doing wrong?
Sorry my posts are a bit tiresome!!
Your input value is still 0.001203. Gain is not included in this calculation, only the reading, which already has the gain applied by the internal processing of the device. This is a "black-box" calculation. Consequently your calculated value is in error by a factor of 100.
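For what it's worth, re-running your own formula with the un-multiplied input (taking the 0.0588 % and 100 + 5.04 µV figures above at face value, without re-checking them against the spec) gives:

$$0.001203 \times \frac{0.0588}{100} + (100 + 5.04)\times 10^{-6} \approx 1.06\times 10^{-4}\ \text{V} \approx 0.106\ \text{mV}$$

i.e. the offset and noise terms dominate once the input is only a millivolt or so.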
-
I haven't read this post and the emails too closely, but basically, the guy who develops TortoiseSVN says that PayPal froze his account due to needing to comply with new regulations. I don't know if LAVA falls under the same category, but it's something to look into.
Nothing unusual there. PayPal have some very questionable, if not downright illegal, policies.
-
Here's fine.
-
Thank you for your answer ShaunR. The resolution of the PCI-6034E is 16 bit. I know the formula you mentioned. What I really want to know is how the gain is applied. For example, assume I take a temperature measurement at 30°C. This temperature equals 1.203 mV. This input signal enters the SCXI. The SCXI uses the range ±100 mV (gain 100), so the signal will be 1.203 mV * 100 = 120.3 mV???
Then the signal is amplified in the DAQ board. If the above is correct then the DAQ board will use the ±500 mV range (gain 10), so again the signal will be amplified 10 times: 120.3 mV * 10 = 1203 mV = 1.203 V? The code width with the above settings will be 1 V / 2^16 = 15.26 µV.
10/(65536 x 10) = 15.26E-6 V, so yes, the code width is correct.
Assuming that I want to calculate the minimum temperature that the DAQ board will detect: for K thermocouples 1°C is 40 µV. So if I have ΔΤ = 1°C, then according to the above amplification the 40 µV will be amplified to 40 mV. So 15.26 µV / 40 mV/°C = 0.3815*10^-3 °C.
Not quite. Thermocouples are non-linear. The K type is especially wobbly around 0°C. You need to use thermocouple tables (or polynomial approximations) to calculate the temperature for a particular voltage. But for the K type your analysis is correct, although only for that temperature. Don't assume that it will remain at that as you increase and decrease the temperature.
Please can you verify that the steps of amplification are correct, because I feel that 0.3815*10^-3 °C is a very small temperature to detect. Thank you very much ShaunR for your time. Giorgos P
Thermocouples produce very small voltages. You can see this from your thermocouple range: (2.4 - 1.2) / 30 = 0.04 mV/°C. This is why they use characterisation tables rather than an approximation. It's very important to minimise errors and introduce compensation if possible if you are looking for accuracy. Take a long hard look at your hardware spec (noise, temperature stability etc.) and make sure it is capable.
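Pulling the numbers above together (nothing new, just the same figures in one place), the sensitivity seen at the ADC and the resulting theoretical temperature resolution at that point on the K-type curve are:

$$40\ \mu\text{V}/^\circ\text{C} \times 100 \times 10 = 40\ \text{mV}/^\circ\text{C}, \qquad \frac{15.26\ \mu\text{V}}{40\ \text{mV}/^\circ\text{C}} \approx 3.8\times 10^{-4}\ ^\circ\text{C}$$

before noise and drift eat into it.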
-
Manufacturer? Model? Type?
-
Hallo,
I am using K thermocouples and I want to measure temperatures between 30°C and 60°C, which corresponds to a 1.203 – 2.436 mV voltage range. I use the following data acquisition system:
- Connector block SCXI-1102B
- Controller PCI-6034E
The specification of the above are:
SCXI-1102B has 2 possible input signal ranges: ±10V (gain=1) and ±0,1V (gain=100)
PCI-6034E has 4 possible input signal ranges: ±10V (equals gain=0,5), ±5V (equals gain=1), ±0,5V (equals gain=10) and ±0,05V (equals gain=100)
I want to know how each device amplifies the signal and what will be the total amplification of the signal, in order to calculate the code width.
Thank you very much,
Giorgos P
You already have most of the information you need to calculate the code width. Look at the spec for your device again and find the resolution.
code width = range/(gain x resolution)
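For example (purely illustrative, using the 16-bit resolution and the 10 V range with a gain of 10, i.e. an effective ±0.5 V span, as discussed later in this thread):

$$\text{code width} = \frac{\text{range}}{\text{gain}\times\text{resolution}} = \frac{10\ \text{V}}{10 \times 2^{16}} \approx 15.26\ \mu\text{V}$$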
-
You're lucky I can't take them back.
Touche
That diagram doesn't do what I need. I need to keep every bit of data from the 16-bit ADC; your diagram is going to lose precision. I could avoid that by having them enter all the scaling information in the ini file... except I'm collecting data from m sensors simultaneously and there are n possible sensors to choose from to connect. On top of that any sensor can potentially be hooked up to any terminal. And they'll need to be able to add new sensors whenever they get one, without me changing the code. Oh yeah, it has to be easy to use.
Can all this be done with an ini? Sure, but the bookkeeping is likely to get a bit messy and editing an ini file directly to control the terminal-channel-scale-sensor mapping is somewhat more error prone than setting them in Max. Implementing a UI that allows them to do that is going to take dev time I don't have right now, and since Max already does it I'm not too keen on reinventing the wheel.
I don't think your technique is bad--heck I'm ALL for making modular and portable code whenever I can. This is one bit of functionality where I need to give up the "right" way in favor of the "right now" way.
It was just to show branching. What the numbers are is irrelevant. That's why I don't understand your difficulty with reading one value and showing another. I could just as easily have read an int and displayed a double.
But anyway.......
Just saving the ADC values won't give you more precision. In fact, the last bit (or more) is probably noise. It's the post-processing that gives a more accurate reading. You usually gain 1/2 a bit of precision, and with post-processing like interpolation and averaging, significant improvements can be made (this is quite a good primer). What's the obsession with saving the ADC data?
Now, from your n and m descriptions, I'm assuming you're thinking of n x m configurations (is that right?). But you don't care what the sensor is, only that it has an analogue output which you can measure. You can't log data from n x m devices simultaneously because you only have m channels, so you only have to configure m channels (or the engineers do, at least). If you allow them to make a new task every time they change something, the list of tasks in MAX very quickly becomes unmanageable. We use 192 digital IOs, for example. Can you imagine going through MAX and creating a task for each one?
What you are describing is a similar problem to one we have with part numbers. It's a management issue rather than a programming one. We (for example) may have 50 different part numbers, all with different test criteria (different voltage/current measurements, excitation voltages, pass-fail criteria etc, etc). But they all use the same hardware, of course, otherwise we couldn't measure it.
So the issue becomes: how can we manage lots of different settings for the same hardware? Well, one way is a directory structure where each directory is named with the part number and contains any files required by the software (camera settings, OCR training files, DAQ settings, ini-files, pass/fail criteria.... maybe 1 file, maybe many). The software only needs to read the directory names and hey presto! Drop-down list of supported devices. New device? New directory. You can either copy the files from another directory and modify, or create a fancy UI that basically does the same thing. Need back-ups? Zip the lot.
Need change tracking? SVN!
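For illustration, the directory-scan part really is trivial. Here it is sketched in C with the Win32 API (the root path and folder names are made up; in Labview it's just a List Folder call feeding a ring control):

```c
/* List sub-directories of a "part numbers" root folder.
 * Each directory name becomes an entry in the drop-down list;
 * the files inside it are the settings for that part number.
 */
#include <stdio.h>
#include <string.h>
#include <windows.h>

int main(void)
{
    const char *root = "C:\\TestConfigs\\*";   /* hypothetical root folder */
    WIN32_FIND_DATAA fd;
    HANDLE h = FindFirstFileA(root, &fd);

    if (h == INVALID_HANDLE_VALUE) {
        printf("No config root found\n");
        return 1;
    }
    do {
        if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) &&
            strcmp(fd.cFileName, ".") != 0 && strcmp(fd.cFileName, "..") != 0)
        {
            /* each directory name is a supported part number */
            printf("Part number: %s\n", fd.cFileName);
        }
    } while (FindNextFileA(h, &fd));
    FindClose(h);
    return 0;
}
```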
Another is a database, which takes a bit more effort to interface to (some think it's worth it), but the back-end for actually applying the settings is identical. And once you've implemented it you can do either just by using a case statement.
What you will find with the NI products is that there really aren't that many settings to change. Maybe switch between current loop/voltage and maybe the max/min, and you will be able to measure probably 99% of analogue devices. Do they really need to change from a measurement of 0-1 V when 0-5 V will give near enough the same figures (do they need uV accuracy? Or will mV do? Don't ask them, you know what the answer will be). Do we really need to set a 4-20 mA current loop when we can use 0-20 (it's only an offset start point, after all)?
Heh... they're test engineers in a product development group. They play with everything you can imagine related to the product. (Read: they want infinitely flexible test apps.) But they don't play with the data. That better be rock solid.
Indeed. And I would much rather spend my programming time making sure they can play with as little as possible, because when they bugger it up, your software will be at fault
You'll then spend the next week defending it before they finally admit that maybe they did select the wrong task
-
I'm using Firefox. I configure the output of the web service as an XML file, and I receive it fine in my browser, but when I add some additional XML tags, the special characters of those tags get changed, so when I try to parse the XML response with JavaScript, it doesn't recognize the &lt; entity as the < character.
Use an html_entity_decode function in your JavaScript (PHP has it built in; there are drop-in JavaScript equivalents):
html_entity_decode(string)
Normal chars will remain unaffected but &lt; etc. will be converted.
Damn. Now I'm a text heretic
-
I appreciate all the responses. Kudos for everyone!
Wrong site. It's rep-points here
Well yeah, but show what on the graph? Users want to see scaled data; I want to store raw data. In a nutshell the question was about manually transforming between raw, unscaled, and scaled data.
That's what I mean. These are mutually exclusive?
Does this mean I can't create separate tasks in Max that all use the same physical channels, even though I'll only be using one task at a time?
Hmm... this could be a problem over the long term. We're going to be using a single data collection computer to measure signals from different sensors, depending on the test being done. I had planned on having the test engineers use Max to set up new tasks, channels, scales, etc. and select the correct task in the test app. But if that's not possible I'll have to create my own interface for that. (Ugh...)
Yes of course you can. But it depends if it's the horse driving the cart or the other way round.
As soon as you start putting code in that needs to read MAX's config so you know how to interpret the results, you might as well just make it a text file that they can edit in Notepad or a spreadsheet program; when you load it you already have all the information you need without having to read it all from MAX. Otherwise you have to first find out what tasks there are and, depending on what has been defined (digital? AI? AO?), put switches in your code to handle the properties of the channels. However, if you create the channels on the fly, you don't need to do all that. It also has the beneficial side effect that you can do things like switch from a "read file.vi" to a "read Database" vi (oops, I meant Read Config Class) with little effort.
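As an illustration only (every name and number here is made up), such a text file doesn't need to hold much per channel:

```
[AI0]
sensor   = "Pressure sensor XYZ"   ; free-text description
terminal = Dev1/ai0                ; physical channel
range    = -5,5                    ; volts
scale    = 0.25,0                  ; gain, offset -> engineering units
units    = bar

[AI1]
sensor   = "K-type thermocouple"
terminal = Dev1/ai1
range    = -0.1,0.1
scale    = table:ktype.txt         ; non-linear sensors point at a lookup table
units    = degC
```

The engineers edit that in Notepad, the code creates the channels on the fly from it, and nothing has to be dug back out of MAX.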
However, if they are just "playing" then you are better off telling them to use the "Panels" in MAX.
-
I definitely don't have access to such a system/IP as you described.
Only 'cos you haven't written it......yet
-
Hi. I'm in the process of doing identification on the process pressure rig 38-714 using Modbus on the process controller 38-300 from Feedback. What I want to know is how to set and address the process control and process variables on the process controller 38-300.
I previously tried to use the NI OPC server and I get a value of 64537 for the process variable, and the value does not change even though I have changed the parameters of the plant. Any suggestions?
All devices have different address maps for the PV. You will need to read the manual.
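The answer doesn't change (the PV register address has to come out of the 38-300 manual), but for illustration, reading a single holding register over Modbus/TCP is only a few lines, sketched here with libmodbus in C (the IP address, unit ID and register address are placeholders, not values for this controller):

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <modbus.h>   /* libmodbus */

int main(void)
{
    uint16_t pv_raw = 0;
    modbus_t *ctx = modbus_new_tcp("192.168.0.10", 502);  /* placeholder IP */
    if (ctx == NULL)
        return 1;

    modbus_set_slave(ctx, 1);           /* placeholder unit/slave ID */
    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "connect failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return 1;
    }

    /* 0x0000 is a placeholder - use the PV register from the 38-300 manual */
    if (modbus_read_registers(ctx, 0x0000, 1, &pv_raw) == 1)
        printf("raw PV = %u\n", pv_raw); /* raw counts; still needs the device's scaling */

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}
```

A fixed reading like 64537 is often just a raw or wrongly-addressed register, which is exactly why the address map and scaling in the manual matter.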
-
Hi all,
Is it possible to read the content of a secure webpage (HTTPS) with a password using LabVIEW? What should the format of the URL be?
Thanks in advance,
Suresh Kumar.G
Only in Labview 2010. The format is https:// and "verify" must be set to "TRUE"
-
A lot of the information is stored in the registry. So a quick and dirty way would be to find it there.
-
Sorry, no helpful advice other than put off that goal for another couple years, or maybe forever if you can. (It is a noble goal--it just doesn't look very achievable right now)
Or un-check "show warnings" when viewing this library.
Interestingly, if you change something (like the mechanical action or the unbundle names), the warnings disappear............ until you save it.
Think you may have found a feature.
Identifying code width in DAQ system (posted in Hardware)
I fail to see where in
Absolute Accuracy = +/-[(Input Voltage x % of Reading)+ Offset +System Noise +Temperature Drift]
gain is used since it is a sub-component of "Reading".
I took your word on the 100 + 5.04 since I didn't have that info (neither could I find the 28.9 + 2.75 in the spec pages you pointed me to (which is where the 70 mV lies), if that is the "system noise" and offset). But it was glaringly obvious that 0.1203 was incorrect. Perhaps I should have said "about 100".
But you have an answer you are happy with so that's good.