Everything posted by ShaunR

  1. I think most people use some sort of queue-based error handling, where each loop/subVI/process/task places a message on a queue that is handled by a dedicated error task (roughly the shape sketched below).
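     Since LabVIEW diagrams don't paste well as text, here's a rough single-threaded C sketch of the shape of that pattern. All the names and the fixed-size ring buffer are invented for illustration; in LabVIEW the queue would be a named Queue refnum shared by every loop, and the error task its own parallel loop.

        #include <stdio.h>

        #define QUEUE_SIZE 32

        typedef struct {
            int  code;            /* error code reported by a task */
            char source[64];      /* which loop/subVI raised it    */
        } ErrorMsg;

        static ErrorMsg queue[QUEUE_SIZE];
        static int head = 0, tail = 0;

        /* Each loop/task calls this instead of handling the error itself. */
        static int post_error(int code, const char *source)
        {
            int next = (tail + 1) % QUEUE_SIZE;
            if (next == head)
                return -1;                      /* queue full */
            queue[tail].code = code;
            snprintf(queue[tail].source, sizeof queue[tail].source, "%s", source);
            tail = next;
            return 0;
        }

        /* The dedicated error task: drain the queue and handle everything
           in one place (log, alarm, shut down, whatever). */
        static void error_task(void)
        {
            while (head != tail) {
                printf("error %d from %s\n", queue[head].code, queue[head].source);
                head = (head + 1) % QUEUE_SIZE;
            }
        }

        int main(void)
        {
            post_error(5008, "DAQ loop");       /* producers enqueue...        */
            post_error(1073, "logger");
            error_task();                       /* ...one consumer handles all */
            return 0;
        }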
  2. Not sure if this is what Yair was talking about (I didn't really understand), but you can also load a sub-panel within a sub-panel and, instead of overlapping them, nest them vertically. This might achieve the same effect, but you would handle the event in the top-level VI by inspecting the control reference.
  3. Why are you using polymorphic VIs? The instance is selected at edit time by the data type. Shouldn't you be using dynamic dispatch or some other dark magic to choose the instance at run-time?
  4. Seems this is a known problem that was (allegedly) fixed in later updates: Load Error code 3
  5. Generate a user event and handle that in the subVI? I haven't tried it, but it'd probably be one of the first things I'd try.
  6. I thought a bit about this. Below is an example of a simple "possible" DAQ config file. One thing you can do to poka-yoke the file is have a "Default" button which reloads (but does not save) the original config you decide on; that way they can always get back to a known-good config. Then have "Commit" and "Save" buttons: one temporary, which is not retained between launches of the software but allows them to "play"; the other saves over the previous file. You can also do other stuff like Excel macros, or create an interface for entry checking etc., but it's not really necessary. It's really up to you. It's very flexible and scalable.
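     Something along these lines, i.e. one row per channel that the software parses at startup. The channel names, columns and values here are invented for illustration (ours happens to live in an Excel sheet, but CSV shows the idea):

        Channel,Physical,Type,Min,Max,Scale,Units,Log
        Pressure1,Dev1/ai0,AI Voltage,-10,10,0.25,bar,yes
        Flow1,Dev1/ai1,AI Voltage,-5,5,1.00,l/min,yes
        DoorSwitch,Dev1/port0/line0,DI,,,,,no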
  7. I'm not sure about that. The last time I used MAX, it was a case of creating a MAX database file in the project which is deployed with the installation (under Build Specifications >> New Installer >> Hardware Configuration). If that is the way you are thinking, then your "default" will only be applied every time you install, as well as deleting any changes or additional tasks. Additionally, once it is in MAX, I'm unaware of a method to "lock" a task so that it cannot be edited (jump in here, JG). However, if you create that task dynamically (delete it if it exists, then add it again) every time you run your software, you will have a task that can be reset to default just by re-running your program (or by pressing a button). And if you do that, you have the major component of the file system/database implementation (see the sketch below).

     This bit, I think, will cause them to moan quite a lot, as well as being extremely error prone. If you had a way to "copy" the default then I don't think it would be so bad, but I'm unaware of a way to do that in MAX. Well, you could update the scales directly from the spec sheet (or an automated derivative) to make your and their lives easier. No abort button? What I meant was actually covered in your previous description, where they have to create a new task. Indeed, your application is relying on the most error-prone part of the process (configuring MAX). This is what worries me. But I'm not sure what module you would want to write to configure DAQmx, since the whole purpose behind using MAX is so that you don't have to, is it not?
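     For the dynamic "delete if it exists, then add it again" part, a rough sketch using the NI-DAQmx C API (function names are from memory, so check NIDAQmx.h for exact signatures; task and channel names are examples only; in LabVIEW you'd use the equivalent DAQmx save/clear task VIs):

        #include <NIDAQmx.h>

        int32 reset_default_task(void)
        {
            TaskHandle task = 0;

            /* Remove any edited copy from MAX; the error is ignorable
               if the task doesn't exist yet. */
            DAQmxDeleteSavedTask("MyDefaultTask");

            /* Recreate it from values we control in code. */
            DAQmxCreateTask("MyDefaultTask", &task);
            DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "",
                                     DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                     DAQmx_Val_Volts, NULL);

            /* Persist it back to MAX so it shows up like any other task. */
            int32 err = DAQmxSaveTask(task, "MyDefaultTask", "author",
                                      DAQmx_Val_Save_Overwrite);
            DAQmxClearTask(task);
            return err;
        }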
  8. Cool project. It'll be interesting to hear what you come up with. I don't know much about them, but what I do know about scanners is that they have on-board controllers, and the motor is usually a stepper motor where each step is defined by the Y-axis resolution. A 300x300 dpi scanner means the motor is stepped 1/300th of an inch per step, for example (assuming a letter-sized page). I suppose the worst case is that you just take out the drive and axis and control it directly. I don't know what the interface is, but it wouldn't take you long to figure out.
  9. I'm not sure what you were reading on the NI website, but I think you'll find you may need a wireless router. If you are using Windows 7 you can turn your laptop into one by using this
  10. Indeed. So let's say you use MAX. They create 24 "tasks", set up values for scaling, and calibrate each channel (probably a good half day's work). Then they want to change something on one of the tasks. Do they modify the original task? Or do they create a new task, set up the new scales and re-calibrate, on the premise that they don't want to "lose" the old settings because they might come back to them? So now we may have 48 tasks. Let's say they keep to 24 tasks. Then they come to you and say "right, we want a piece of software that logs tasks 1, 3, 5 and 9, except on Wednesday, when we'll be using 1, 6, 12, 8". How do you store that information in MAX? That's up to you; you're the only one that knows what tests exist and what's required. What I think you will find (personally) is that you start off using MAX, then as things progress you need more and more external control, until you reach a point where you have so much code just to get around MAX that it is no longer useful and, in fact, becomes a hindrance. But by that time you are committed. That's just my personal experience; others may find it different.

      We actually use several files: one for cal data, one for general settings (graph colours, user preferences, etc.), one for each camera (there can be up to 5), one for DAQ (basic config), one for drive config and one for test criteria. The operator just selects a part number (or a set of tests, if you like) from a drop-down list and can either run full auto, or run a specific test from another drop-down list filtered for that product (well, not filtered, since it is just showing the labels in the test criteria file). Having a directory structure makes that really easy, since all it is doing is selecting a set of files (roughly the layout sketched below). I think that type of interface would be a bit further down your life-cycle, but the building blocks started out just as you are currently doing; all we did was put them all together in a nice fancy user interface (it even uses sub-panels to show some of the original UIs we created when testing the subsystems).
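      The layout is roughly along these lines (hypothetical file names, just to show how "selecting a part number selects a set of files"):

        Config/
            calibration.csv
            settings.csv              (graph colours, user prefs, ...)
            camera1.csv ... camera5.csv
            daq.csv
            drive.csv
        Products/
            PART-1234/
                test criteria.csv
            PART-5678/
                test criteria.csv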
  11. I think you are just intimidated by the fact that you have not used it before. Thirty minutes and a few examples (there are lots) with a USB DAQ should be enough. You will quickly discover it's not really that different from using VISA, TCP/IP, IMAQ or any other "open, do something, close" API. Heck, even use the Express VIs and you will have most of the functionality of MAX.
  12. Couldn't agree more. There's nothing more annoying to me than finding a piece of code I'm interested in, only to discover I have to download VIPM, then the RCF, and also install 5 other OpenG libraries that I neither want nor use. I wonder how many people actually read all the licensing, and actually do distribute the source code, licences and associated files when they build an application with third-party tools (not necessarily OpenG)? Might be a good poll.
  13. Take a look at Data Client.vi and Data Server.vi in the NI examples.
  14. Well, you never know. It's a bit like mathematicians: there are pure mathematicians and applied mathematicians. Pure mathematicians are more interested in the elegance of arriving at a solution, whereas applied mathematicians are more interested in what the solution can provide. Well, you've got the control and the expertise, but maybe not the tool-kit that comes from programming in those positions. But back to MAX. I (and the engineers that use the file system) just find it much quicker and easier to maintain and modify. Like I said, we have lots of I/O (analogue and digital) and find MAX tedious and time-consuming. A single Excel spreadsheet for the whole system is much easier. And when we move to another project we don't have to change any configuration code, just the spreadsheet, which can be done by anyone more or less straight from the design spec (if there is one). But you know your processes. A man of your calibre, I'm sure, will look at the possible alternatives and choose one that not only fixes the problem now, but is scalable and will (with a small hammer) fit tomorrow's as well.
  15. Yes. Take a look at Data Client.vi and Data Server.vi in the NI examples.
  16. If it was in Oz, I probably would have.
  17. Weird. Upload fails if I change from quick to full edit. But straight reply is fine.
  18. Take a look at Data Client.vi and Data Server.vi in the NI examples. It uses one channel; the client sends the letter "Q" back to the server (on the same connection) to stop the server sending data. Oh, and you can get the IP address by using the "String To IP" and "IP To String" primitives instead of executing ipconfig and formatting the result. (I'd post a picture, but for some reason image uploading is failing; a rough sketch of the exchange is below.)
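      Since the picture won't upload, here is a rough POSIX C sketch of what the client side of that example does: read streamed data over the single connection, then send "Q" on the same socket to stop the server. The host, port and read count are placeholders, not the values from the NI example.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int sock = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in srv = {0};
            srv.sin_family = AF_INET;
            srv.sin_port   = htons(2055);                 /* example port */
            inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

            if (connect(sock, (struct sockaddr *)&srv, sizeof srv) < 0)
                return 1;

            char buf[512];
            for (int i = 0; i < 10; i++) {                /* take 10 reads */
                ssize_t n = recv(sock, buf, sizeof buf, 0);
                if (n <= 0)
                    break;
                printf("got %zd bytes\n", n);
            }

            send(sock, "Q", 1, 0);     /* "stop" goes back on the same channel */
            close(sock);
            return 0;
        }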
  19. I fail to see where, in Absolute Accuracy = +/-[(Input Voltage x % of Reading) + Offset + System Noise + Temperature Drift], gain is used, since it is a sub-component of "Reading". I took your word on the 100 + 5.14 since I didn't have that info (neither could I find the 28.9 + 2.75 in the spec pages you pointed me to (which is where the 70 mV lies), if that is the "system noise" and offset). But it was glaringly obvious that 0.1203 was incorrect. Perhaps I should have said "about 100". But you have an answer you are happy with, so that's good.
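      To make the formula concrete with made-up numbers (not the ones from this thread's spec sheet): reading 5 V with 0.02 % of reading, 1 mV offset, 0.5 mV system noise and 0.2 mV temperature drift gives Absolute Accuracy = +/-[(5 x 0.0002) + 0.001 + 0.0005 + 0.0002] V = +/-2.7 mV. Gain error only ever enters through the "% of Reading" term.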
  20. You are quite right. It is the synergy between their hardware and the software (sometimes we forget LabWindows) that makes them the obvious choice. And one of the main reasons LabVIEW is as successful as it is, is that it turns a software engineer into a systems engineer (much more useful). However, if all you need is a dumb remote analogue or digital device, then the cost of cRIO or FieldPoint ($2,000-$4,000) cannot be justified against a $200 Ethernet device from another well-known manufacturer. Having said that, I think it has more to do with confidence and experience than anything else. I am comfortable interfacing to anything in any language (but I will fight like buggery to use LabVIEW). If someone has only used LabVIEW and only knows LabVIEW products, then it's a low-risk, sure bet.
  21. The most common cause (I've found) of this behaviour is that memory allocated by LabVIEW (i.e. outside the DLL) is freed inside the DLL. When the function returns, the original pointer LabVIEW used for the allocation is non-existent. If the DLL does not return an error exit code, LabVIEW assumes everything was OK and attempts to use it again (I think). A normal app would show you a GPF, but LabVIEW is a bit more robust than that (usually) and normally gives an error; it depends how catastrophic it was. You probably need exception handling in your DLL, so that any GPFs or nasty C stuff that breaks your DLL still cleanly return to LabVIEW; this is usually done with structured exception handling in the exported functions, plus cleanup in the DLL_PROCESS_DETACH case of DllMain. This will mean that at least LabVIEW will stay around for you to debug the DLL and find the root cause (see the sketch below). However, if the error corrupts the program counter, then nothing short of fixing the problem will suffice. Rolf is the expert on this kind of stuff.
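      A minimal sketch of that kind of guard using MSVC structured exception handling; the exported function and error code are invented for illustration. The point is that an access violation inside the DLL becomes an error code LabVIEW can check, instead of taking the whole process down.

        #include <windows.h>

        __declspec(dllexport) int SafeProcess(double *data, int len)
        {
            __try {
                /* ... real work that might touch bad pointers ... */
                for (int i = 0; i < len; i++)
                    data[i] *= 2.0;
                return 0;                  /* success                    */
            }
            __except (EXCEPTION_EXECUTE_HANDLER) {
                return -1;                 /* tell LabVIEW it went wrong */
            }
        }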
  22. Well, that doesn't sound too bad. Three people should be able to support a number of production environments, you have a predictable time-scale for implementation that can be planned for, and you use an iterative life cycle. Which one of your team came from production?