Posts posted by ShaunR

  1. Hi ShaunR,

    The program was working perfectly, but when I press the stop button on the client side, the program stops running. I put a while loop around the client diagram so it could play again after stopping, but it doesn't work. (What I'm trying to do is have the client side able to play and display the image from the server side even after it has been stopped, so it can play over and over while the system is running.)

    Do you know how?

    Take a look at Data Client.vi and Data Server.vi in the NI examples.

  2. Need you ask? ;) I've held positions across nearly all the stages of a product's life cycle: Conception, development, transition to manufacturing, manufacturing sustaining, etc. The only stage I haven't been involved in is end-of-life.

    Well, you never know. It's a bit like mathematicians: there are pure mathematicians and applied mathematicians. Pure mathematicians are more interested in the elegance of arriving at a solution, whereas applied mathematicians are more interested in what the solution can provide.

    Well, you've got the control and the expertise, but maybe not the tool-kit that comes from programming in those positions ;)

    But back to MAX. I (and the engineers that use the file system) just find it much quicker and easier to maintain and modify. Like I said, we have lots of IO (analogue and digital) and find MAX tedious and time-consuming. A single Excel spreadsheet for the whole system is much easier, and when we move to another project we don't have to change any configuration code, just the spreadsheet, which can be done by anyone more or less straight from the design spec (if there is one ;) ).

    But you know your processes. A man of your calibre will, I'm sure, look at the possible alternatives and choose one that not only fixes the problem now, but is scalable and will (with a small hammer) fit tomorrow's as well. :D
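    (To illustrate the spreadsheet idea rather than our actual tool-kit: a minimal Python sketch of driving the IO configuration from a spreadsheet export. The file name and column names are invented for the example.)

```python
import csv

def load_channel_config(path):
    """Read the IO list exported from the system spreadsheet.
    Assumed columns: name, type, physical_channel, scale."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

channels = load_channel_config("io_config.csv")          # hypothetical export
analog_inputs = [c for c in channels if c["type"] == "AI"]

# The application builds its tasks from this list, so moving to a new
# project means editing the spreadsheet, not the configuration code.
for ch in analog_inputs:
    print(ch["name"], ch["physical_channel"], ch["scale"])
```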

  3. Hi ShaunR, I have a question regarding these VIs.

    I want to put play and stop button at the client side.

    So when I press play, the camera shows the images, and when I press stop, the camera stops receiving images from the server. And this can loop over and over again.

    Is it possible to implement?

    Yes. Take a look at Data Client.vi and Data Server.vi in the NI examples.

  4. Take a look at Data Client.vi and Data Server.vi in the NI examples.

    It uses 1 channel. The client sends back the letter Q to the server (on the same connection) to stop the server sending data.

    Oh. And you can get the IP address by using "IpTostr" and "StrToIP" instead of executing IPconfig and formatting the result.

    (I'd post a picture, but for some reason image uploading is failing)

    Weird. Upload fails if I change from quick to full edit. But straight reply is fine.
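    For anyone not near the NI examples, here is the same single-connection idea sketched with plain Python sockets rather than LabVIEW (the address, port, packet size and framing are assumptions, not what Data Server.vi actually uses): the client reads the streamed data and sends the letter Q back on the same connection to tell the server to stop.

```python
import socket

HOST, PORT = "127.0.0.1", 2055           # placeholder server address/port

def receive_and_stop(max_packets=100):
    """Read streamed packets from the server, then ask it to stop."""
    with socket.create_connection((HOST, PORT)) as conn:
        for _ in range(max_packets):
            packet = conn.recv(4096)     # one chunk of the image/data stream
            if not packet:               # server closed the connection
                break
            # ...decode and display the packet here...
        conn.sendall(b"Q")               # same connection: tell the server to stop

if __name__ == "__main__":
    receive_and_stop()
```

    The Python equivalent of the IP/string conversion functions mentioned above would be socket.inet_aton()/socket.inet_ntoa(), rather than shelling out to ipconfig.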

  5. Take a look at Data Client.vi and Data Server.vi in the NI examples.

    It uses 1 channel. The client sends back the letter Q to the server (on the same connection) to stop the server sending data.

    Oh. And you can get the IP address by using "IpTostr" and "StrToIP" instead of executing IPconfig and formatting the result.

    (I'd post a picture, but for some reason image uploading is failing)

  6. Sorry my friend but I have to disagree with you. I believe what you are saying is different from what he suggested.

    From what I understand, you suggest not taking the gain itself into account in this calculation, but taking into account the reading that already includes this gain. I believe this is wrong for two reasons:

    1. If you take a closer look at the two images that I have posted, you will see that in the first one the range in the DAQ board is ±0.05 V (gain = 100), while in the second one the range is ±0.5 V (gain = 10), which means that the accuracy calculator does take the gain into account in the calculation.
    2. The calculation 0.001203*0.0588/100 + (100 + 5.04)*10^-6 = 0.1057 mV (which you told me) is different from the result of the accuracy calculator (0.0018 mV) if the input value is as you suggested below.
      So the calculation you suggested would not be in error by a factor of 100, but it would be decreased by 70 mV from what I had calculated.

    Anyway, I have to agree with you that this is a "black box".

    Thank you, ShaunR, for your thorough answers; you helped me a lot.

    I fail to see where in

    Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + System Noise + Temperature Drift]

    gain is used since it is a sub-component of "Reading".

    I took your word on the 100 + 5.14 since I didn't have that info (nor could I find the 28.9 + 2.75 (which is where the 70 mV lies) in the spec pages you pointed me to, if that is the "system noise" and offset). But it was glaringly obvious that 0.1203 was incorrect. Perhaps I should have said "about 100".

    But you have an answer you are happy with so that's good.

  7. I have never understood those people who always go for the cheapest DAQ just because it's the cheapest.

    IMHO, the most powerful thing about NI is not their hardware and not LabVIEW (don't get me wrong, these are both fantastic); it's their drivers - the connection between the IDE and the hardware.

    Again, IMHO, for the extra cost upfront of going with more expensive hardware, you are going to save a bucket load of time (and therefore money) by having a reliable Driver set e.g. standardization/familiarity, support, flexibility, upgrades etc...

    So a lot of the time (for what we do) it makes sense to choose NI. (However, we are also an Alliance partner, so we are biased.)

    :)

    You are quite right. It is the synergy between their hardware and the software (sometimes we forget LabWindows) that makes them the obvious choice. And one of the main reasons LabVIEW is as successful as it is is because it turns a software engineer into a systems engineer (much more useful :D ). However, if all you need is a dumb remote analogue or digital device, then the cost of cRIO or FieldPoint ($2000-$4000) cannot be justified against a $200 Ethernet device from another well-known manufacturer.

    But having said that, I think it has more to do with confidence and experience than anything else. I am comfortable interfacing to anything in any language (but I will fight like buggery to use LabVIEW :D ). If someone has only used LabVIEW and only knows LabVIEW products, then it's a low-risk, sure bet.

  8. The most common cause (I've found) of this behaviour is that memory allocated by LabVIEW (i.e. outside the DLL) is freed inside the DLL. When the function returns, the original pointer LabVIEW used for the allocation no longer exists. If the DLL does not return an error exit code, LabVIEW assumes everything was OK and attempts to use it again (I think). A normal app would show you a GPF, but LabVIEW is a bit more robust than that (usually) and normally gives an error. But it depends how catastrophic it was.

    You probably need exception handling in your DLL, so that any GPFs or other nasty C stuff that breaks your DLL still returns cleanly to LabVIEW. This is usually done in the DLL_PROCESS_DETACH case of DllMain. It will mean that at least LabVIEW stays around for you to debug the DLL and find the root cause.

    However, if the error affects the program pointer, then nothing short of fixing the problem will suffice.

    Rolf is the expert on this kind of stuff.

  9. There are some engineers who think like that, but most understand that there are layers of complexity to what they're asking for. When someone tells me what they want and then slip in a, "that should only take a couple weeks," I'm very blunt in my response. "*You* define your feature list and priorities. *I* define the delivery date."

    There are two LV programmers here now and another one starting on Monday. Our group has been the bottleneck at times in the past, primarily because of poor planning and unrealistic expectations. We're learning how to get better.

    These days I put off almost all UI development until the end of the project and focus their attention on the core functionality they need. We've spent too much time in the past rewriting user interfaces to accommodate the latest change requests. I also make the test engineers directly responsible for what features get implemented. How do I do that? I develop in two-week iterations. At the beginning of an iteration I sit down with the test engineers and their feature list and ask them what features they want me to implement over the next two weeks. If I think I can do it, I agree. If not, they have to reduce their request. (Usually I have one or two features that are the primary goals and several items that are secondary goals. Sometimes I get to the secondary goals, sometimes not.)

    At the end of the iteration I deliver a usable, stand-alone functional component with a very simple UI that does exactly what was requested, no more. They can use it immediately as part of the chain of individual tools I'm giving them every two weeks. Hopefully at the end we have time to wrap it all up in a larger application and make it easier to use, but if we don't they still have the ability to collect the data they need. If we hit their deadline and something isn't implemented, it's because of their decisions, not mine.

    So far this is working out very well, but it's still very new (I'm the only one doing it) so we'll see how it goes over time. (I talked about this a little more in this recent thread on the dark side.)

    Well. Doesn't sound too bad. 3 people should be able to support a number of production environments. You have a predictable time-scale for implementation that can be planned for and you use an iterative life cycle. Which one of your team came from production?

  10. Thank you. It's good to be wanted.

    I knew some manual processing was going to be needed. The question was asking how other people dealt with the problem I was having. Jon suggested using the Configure Logging vi on the task. COsiecki suggested retrieving unscaled data and getting the scaling coefficients from the task. You suggested implementing everything as a configurable ini. All viable solutions. All with different strengths. All appreciated.

    Easier, no doubt about it. Regardless of whether the app is implemented using OOP or procedural code, if they have to come to me for any little change it causes delays in the entire product development process.

    I call them "throw aways." Unfortunately that isn't acceptable in this situation. Here's a bit more of an explanation of my environment:

    I write test tools for a product development group. Early in the product's life cycle the dev team is evaluating all sorts of alternatives to decide which route to take. The alternatives can be entirely different components, different types of components, different combinations, different algorithms, etc. Anything about the details of the design can change. This process can take months. The tool I give them for qualification testing needs to be as flexible as possible while providing solid data that is easily comparable. If I or the test engineer is constantly going in to tweak the tool's source code, config file, etc., then it allows project managers and design engineers to more easily question the validity of results they don't particularly like.

    In this particular project, part of component qualification is comparing the ADC signals to signals received from other sources (I2C, wireless capture, vision system, etc.) to cross-check the data from the various components. It also requires coordinating the motion of complex external actuators with the data collection. Using the external actuators and parsing the other data streams is beyond the programming capabilities of the test engineers. (And they don't have time for it anyway.)

    As the product's design becomes solidified that kind of flexibility is deprecated. Design focus shifts from component qualification to product performance. Test efficiency and consistency become much more important as multiple samples are run through tests to obtain statistically significant data. Unfortunately the shift between component qualification and product testing is very blurry. There is no "we're done with component qualification, now let's build tools for product testing." The code written for qualification testing becomes the code used for product testing.

    I do take shortcuts where I can during the component qualification. UI's are very simple wrappers of the module's* underlying functional code and are often set up as single shots. (Press the run arrow to execute instead of a start and stop button.) There isn't a lot of integration between the various functional modules. Error handling probably isn't robust enough. But the functional code of each module shouldn't change once I've written it.

    (*Some of the modules we have finished or have in the pipe are an I2C collection actor object, an analog signal collection state machine, a sequencer for the external actuators, a wireless collection module, a few analysis modules, etc. Currently each of these function independently. The intent is to combine them all into a single app later on in the project.)

    I've been in the same boat many times. The real problem (as I saw it) was that I was end-of-line, so once something was achieved it was then "how do we test it? Let's write some software!" It was reactive rather than proactive development. After all, "it's ONLY software and that takes 2 minutes.... right?" Unfortunately that kind of thinking takes a long time to change and is really the domain of a "Test Engineer" rather than a "Software Programmer", since a test engineer has detailed knowledge of the products and how to test them from a very early stage and is privy to spec changes very early on.

    Sounds like "departmental expansion" is the route. You are the bottle-neck so you need resources to overcome it. Are you the only programmer?

  11. Yes, I know that there is an English version, but no results! =)

    Actually I'm not sure if it is ISA or not, but I plug it into the PCI slot!

    OK, I'll read the doc and translate it for you!

    I know that I have to make a DLL file through which LabVIEW can read the slot, but I really have no idea how to make it!

    Soon I'll send you all the info for this board!

    All my regards!

    Can't you use an equivalent card that is better supported?

  12. This is the only info about this board and its sensor!

    It is all in Russian! And this is my model: "АЦПВТ-14П-О2К"

    http://www.rtkt.ru/c...s/datchiki.html

    I know it is not easy to read, but can you help me work out how to get the PCI slot address and read all the registers from it? PS: I have the addresses of the registers. That is why I am asking how to make a program that reads from the PCI slot and other data, how to write code that can get in there, and, after that, how I can derive the other addresses (which are on the board) from this one.

    All my regards!

    They have an English version (top right of the page), but a quick search didn't reveal anything, and without going through their entire catalogue...

    You can find the memory location and slot information from Device Manager. But that probably won't help much, since under Windows (you are using Windows?) direct access to memory-mapped IO is not possible without "kernel mode" drivers, so it probably isn't even recognised. But I would be very surprised if it was memory mapped. Are you sure it's not an ISA card?

  13. Quick responses... gotta run soon.

    We're crossing signals somewhere in here. If I do a Read that returns a waveform, the data is post-scaled, which I don't want to save due to size and I don't want to truncate due to lost precision. If I do a Read that returns the unscaled I16 data, then I have to manually apply the arbitrary scaling factors before displaying it.

    Yes. Welcome to the real world :D

    Me too. Except that it's their job to play with all those things to try and break the products. I can't restrict what I allow them to do, or they come back every other day and say, "now I want to be able to do..." My tool becomes the blocking issue, and that's bad mojo for me. (My tools are used by test engineers in a product development group. They need to be able to handle whatever screwball changes the design engineers conjured up last night at the bar. These are not manufacturing test tools with well defined sequential test processes.)

    But OOP makes that easy, right? :D Sorry, couldn't resist.

    They probably need 5-minute tools (as I call them): discardable software that doesn't come under formal control, is quick to implement (5-30 mins) and is usually provided by one of the members of the department who "likes" programming. Do you have anyone like that?

    As an example: one of our machines was playing up. We thought it was temperature related, so we decided to monitor the temperature. I took one of the graph examples, replaced the sig-gen VI with a DAQ one and added a save-to-file. It took 5 minutes max. I then compiled it, copied it to the machine, pressed the run arrow (no fancy user interface, hard-coded DAQ channel) and we all went home. The next day we came in and analysed the file, found the fault, ran the logger again to check, and once everything was fine, removed it. It wasn't part of the "real software". It wasn't meant to be reused. It was just a quick knock-up tool to log data for that specific scenario.
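    A rough Python equivalent of that five-minute logger, using the nidaqmx package rather than LabVIEW (device name, channel, rate and file name are all placeholders for whatever the machine actually needs):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Throw-away logger: hard-coded channel, no UI, run until interrupted.
with nidaqmx.Task() as task, open("temperature_log.csv", "a") as log:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")        # placeholder channel
    task.timing.cfg_samp_clk_timing(rate=10.0,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    try:
        while True:
            samples = task.read(number_of_samples_per_channel=10)
            log.write(",".join(f"{s:.6f}" for s in samples) + "\n")
            log.flush()
    except KeyboardInterrupt:
        pass                                                 # stop logging on Ctrl+C
```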

  14. Someone told me that in order to relate this digitized signal accuracy to the original signal, they divided it by the gain given by the SCXI-1120B module.

    This is why there is a factor of 100 difference between the two.

    Is this correct?

    Well, I'm no DSP expert, but that seems a bit simplistic, possibly a rule of thumb? What do they mean by digitised signal accuracy? If you mention accuracy to me, I think in terms of a compound of additive errors (as you can see from your calculation example, which is expressed in terms of temperature, reading and offset). I'm aware of aperture, quantisation and clock errors for ADCs. Possibly he/she is referring to those in a general way, but those are mainly measured in bits rather than voltage, so it depends on your range rather than gain.

    What exactly are you trying to get to? You have the measurement accuracy of your system. You have the codeword size. These I can understand would be important to you for measuring temperature. Are you trying to break down the accuracy into every single error contributor in the system? If so, this could be a very, very long thread :D

  15. OK! I am trying to calculate the overall uncertainty of my DAQ system. I know about the accuracy calculator on ni.com and I am trying to check my results using this tool.

    So the calculation formula is: Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + System Noise + Temperature Drift]

    In my measurements there is a temperature drift from 29-31°C, but the above formula is for environments from 15-35°C, so the temperature drift is zero.

    At first I set the accuracy calculator to: DAQ device PCI-6034E and SCXI module none. Then I set, for example, 0.001203 V and an average of 100 readings. The result is the following:

    [attached image: originalmw.jpg]

    According to the E Series User Manual (page 38, gain and resolution for the PCI-6034E): Absolute Accuracy = 0.001203*0.0588/100 + (28.9 + 2.75)*10^-6 = 0.03235 mV, which is the same as the accuracy calculator.

    When I set the accuracy calculator to: DAQ device PCI-6034E and SCXI module SCXI-1102B, I do not get the same results for the absolute accuracy of the DAQ device.

    Accuracy Calculator:

    [attached image: original2.jpg]

    Me: DAQ Absolute Accuracy = 0.1203*0.0588/100 + (100 + 5.04)*10^-6 = 0.175 mV, which is a very large number. I assume that the input value is 0.1203 because the gain of the SCXI-1120B is 100.

    What I am doing wrong?

    Sorry, my posts are a bit tiresome!

    Your input value is still 0.001203. Gain is not included in this calculation, only the reading, which already has the gain applied by the internal processing of the device. This is a "black-box" calculation. Consequently your calculated value is in error by a factor of 100.
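    To make the factor-of-100 point concrete, here is the black-box formula evaluated in Python, using only the numbers quoted in this thread (I have not checked them against the datasheets myself):

```python
def absolute_accuracy_v(reading_v, pct_of_reading, offset_uv, noise_uv, drift_uv=0.0):
    """Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + Noise + Drift]."""
    return reading_v * pct_of_reading / 100.0 + (offset_uv + noise_uv + drift_uv) * 1e-6

# PCI-6034E alone, 0.001203 V input (constants as quoted from the E Series manual above)
print(absolute_accuracy_v(0.001203, 0.0588, 28.9, 2.75) * 1e3)   # ~0.0324 mV

# Same device behind the SCXI module, still using the 0.001203 V input voltage
print(absolute_accuracy_v(0.001203, 0.0588, 100, 5.04) * 1e3)    # ~0.1057 mV

# Feeding in the amplified reading (0.1203 V) inflates the first term by the gain of 100
print(absolute_accuracy_v(0.1203, 0.0588, 100, 5.04) * 1e3)      # ~0.176 mV
```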

  16. Thank you for your answer, ShaunR. The resolution of the PCI-6034E is 16 bits. I know the formula you mentioned. What I really want to know is how the gain is applied. For example, assume I take a temperature measurement at 30°C. This temperature corresponds to 1.203 mV. This input signal enters the SCXI. The SCXI uses the range ±100 mV (gain 100), so the signal will be 1.203 mV * 100 = 120.3 mV?

    Then the signal is amplified in the DAQ board. If the above is correct, then the DAQ board will use the ±500 mV range (gain 10), so the signal will be amplified by 10 again: 120.3 mV * 10 = 1203 mV = 1.203 V? The code width with the above settings will be 1 V/2^16 = 15.26 µV.

    10/(65536 x 10) = 1.53E-5 V (about 15.3 µV), so yes, the codeword is correct.

    Assuming that I want to calculate the minimum temperature change that the DAQ board will detect: for K thermocouples, 1°C corresponds to about 40 µV. So if I have ΔΤ = 1°C, according to the above amplification the 40 µV will be amplified to 40 mV. So 15.26 µV / 40 mV/°C = 0.3815*10^-3 °C.

    Not quite. Thermocouples are non-linear, and the K type is especially wobbly around 0°C. You need to use thermocouple tables (or polynomial approximations) to calculate the temperature for a particular voltage. For the K type your analysis is correct, but only at that temperature. Don't assume it will stay the same as you increase and decrease the temperature.

    Please can you verify whether the steps of amplification are correct, because I feel that 0.3815*10^-3 °C is a very small temperature to detect. Thank you very much, ShaunR, for your time. Giorgos P

    Thermocouples produce very small voltages. You can see this from your thermocouple range: (2.436 - 1.203)/30 ≈ 0.04 mV/°C. This is why they use characterisation tables rather than an approximation. It's very important to minimise errors and introduce compensation where possible if you are looking for accuracy. Take a long, hard look at your hardware spec (noise, temperature stability etc.) and make sure it is capable.
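    Putting the last two posts into one short calculation (the gains, the 1 V span and the ~40 µV/°C sensitivity are taken from the thread; this is the quantisation limit only and says nothing about noise, accuracy or the K-type non-linearity mentioned above):

```python
# Following the arithmetic in the posts above: total gain 100 x 10 = 1000,
# 16-bit ADC, and a 1 V span as assumed there.
scxi_gain, daq_gain, bits = 100, 10, 16
total_gain = scxi_gain * daq_gain
span_v = 1.0

code_width_v = span_v / 2**bits                  # ~15.26 uV per code (amplified signal)
code_width_at_tc_v = code_width_v / total_gain   # referred back to the thermocouple

seebeck_v_per_degc = 40e-6                       # K type near 30 degC, ~40 uV/degC (from the thread)
min_delta_t_degc = code_width_at_tc_v / seebeck_v_per_degc

print(f"code width: {code_width_v * 1e6:.2f} uV")
print(f"smallest resolvable temperature step: {min_delta_t_degc:.2e} degC")   # ~3.8e-4 degC
```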

  17. Hallo,

    I am using K thermocouples and I want to measure temperatures between 30°C and 60°C, which corresponds to a voltage range of 1.203-2.436 mV. I use the following data acquisition system:

    • Connector block SCXI-1102B
    • Controller PCI-6034E

    The specifications of the above are:

    SCXI-1102B has 2 possible input signal ranges: ±10 V (gain = 1) and ±0.1 V (gain = 100)

    PCI-6034E has 4 possible input signal ranges: ±10 V (gain = 0.5), ±5 V (gain = 1), ±0.5 V (gain = 10) and ±0.05 V (gain = 100)

    I want to know how each device amplifies the signal and what will be the total amplification of the signal, in order to calculate the code width.

    Thank you very much,

    Giorgos P

    You already have most of the information needed to calculate the code width. Look at the spec for your device again and find the resolution.

    code width = range/(gain x resolution)
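    As a quick sanity check of that formula (assuming 16-bit resolution for the PCI-6034E and reading "range" as the board's nominal 10 V range), the code width for each of the gains listed above works out as:

```python
def code_width(range_v, gain, bits=16):
    """code width = range / (gain x resolution), with resolution = 2**bits codes."""
    return range_v / (gain * 2**bits)

# PCI-6034E gains from the post above, against its nominal 10 V range
for gain in (0.5, 1, 10, 100):
    print(f"gain {gain:>5}: {code_width(10.0, gain) * 1e6:.2f} uV per code")
# -> ~305, 153, 15.3 and 1.53 uV, matching the ±10, ±5, ±0.5 and ±0.05 V ranges
```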
