Posts posted by ShaunR
-
It seems I'm having more than my fair share of LV crashes. I'm running 2009 SP1 on Windows XP Pro. Almost daily I experience a problem with LV crashing, or telling me that the last time it shut down bla bla bla, would I like to investigate bla bla bla. At least once a week I can't even get LV to start up and have to reboot my machine. Today the only way I could get it to start was to double-click my project shortcut on my desktop. That was dumb luck. Is there a super-secret setting I'm missing someplace in my .ini file, or a super-duper-super-secret series of keystrokes I need, to make LV a little more robust?
Or is my intuition correct in telling me that LV 2009 is the buggiest and most unstable LV version I've ever used (and I've been at this since LV 5), and it's time to move my customer to 2010 because it just might be a little better?
I absolutely hate, and will resist at all costs, changing software versions mid-project (actually, in this case, at the tail end of a project), but 2009 makes me want to pull my hair out.
I think it's been slowly getting worse with every iteration. There used to be a time when you might see "insane object at..." maybe 2 or 3 times a year, and only if you were abusing it. I often get 2009 disappearing when I delete something.
But it's better than 8.x (IMHO), which was a complete crock... I think NI realise this, which is why they have said 2011 will be a "stability" release. My personal opinion is that software shouldn't be released UNTIL it is stable.
Don't bet on 2010 being better. See how many bugs still exist from 8.x (2010 Known Issues).
-
Hello,
I am struggling with multi-loop architectures these days. Can anybody share a multi-loop design pattern that includes an error-handling loop?
Thx
I think most people use some sort of queue-based error handling, where each loop/sub-VI/process/task places a message on a queue that is handled by a dedicated error task.
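It can't be drawn as a diagram here, but as a rough text-language sketch of that shape (Python, with invented loop names), each worker only reports errors and one dedicated loop consumes them:

```python
import queue
import threading
import time

error_queue = queue.Queue()  # shared by every loop in the application

def worker(name):
    """Stand-in for one acquisition/processing loop."""
    for i in range(3):
        try:
            if i == 2:
                raise IOError(f"{name}: device timed out")  # simulated fault
            time.sleep(0.1)
        except Exception as exc:
            # Don't handle it locally; just report it to the dedicated error task
            error_queue.put((name, exc))

def error_task():
    """Dedicated loop that owns all error handling (logging, alarms, shutdown...)."""
    while True:
        source, exc = error_queue.get()
        if source is None:
            break  # sentinel tells the handler to exit
        print(f"[error task] {source} reported: {exc}")

handler = threading.Thread(target=error_task)
handler.start()
workers = [threading.Thread(target=worker, args=(n,)) for n in ("DAQ loop", "Log loop")]
for w in workers:
    w.start()
for w in workers:
    w.join()
error_queue.put((None, None))  # all workers done; stop the error task
handler.join()
```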
-
Not sure if this is what Yair was talking about (didn't really understand), but... you can also load a sub-panel within a sub-panel and, instead of overlapping them, nest them vertically. This might achieve the same effect, but you would handle the event in the top-level VI by inspecting the control reference.
-
Why are you using polymorphic VIs?
They are data bound. Shouldn't you be using dynamic dispatch or some other dark magic to choose the instance at run-time?
-
Seems this is a known problem that was fixed in later updates (allegedly).
-
Generate a user event and handle that in the sub-VI? Haven't tried it, but it'd probably be one of the first things I'd try.
-
What should the config schema look like? How will I simplify setting up a test in the future? How do I stupid-proof the config file? What are the other signal parameters I haven't thought of?
I thought a bit about this. Below is an example of a simple "possible" DAQ config file.
One thing you can do to poka-yoke the file is to have a "Default" button which reloads (but does not save) the original config you decide on. That way they can always get back to a known good config. Have "Commit" and "Save" buttons too: one is temporary and will not be retained between launches of the software but allows them to "play"; the other saves over the previous file. You can also do other stuff like having Excel macros or creating an interface for entry checking etc., but it's not really necessary. It's really up to you. It's very flexible and scalable.
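To make the Default/Commit/Save idea concrete, here's a rough Python sketch (the file names and keys are invented for illustration; they are not from the original attachment):

```python
import json
from pathlib import Path

DEFAULT_FILE = Path("daq_default.json")   # hypothetical shipped defaults, never overwritten
ACTIVE_FILE = Path("daq_active.json")     # what the application loads at startup

class DaqConfig:
    def __init__(self):
        # Start from the saved config if present, otherwise from the defaults
        source = ACTIVE_FILE if ACTIVE_FILE.exists() else DEFAULT_FILE
        self.settings = json.loads(source.read_text())

    def restore_default(self):
        """'Default' button: reload the factory config without saving it."""
        self.settings = json.loads(DEFAULT_FILE.read_text())

    def commit(self, **changes):
        """'Commit' button: apply changes for this session only (nothing written to disk)."""
        self.settings.update(changes)

    def save(self):
        """'Save' button: make the current settings the new startup config."""
        ACTIVE_FILE.write_text(json.dumps(self.settings, indent=2))

# Example session: play with the sample rate, then decide whether to keep it
cfg = DaqConfig()
cfg.commit(sample_rate_hz=2000)   # temporary, lost at next launch unless...
cfg.save()                        # ...explicitly saved over the previous file
```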
-
In my project I have one task, 8 channels, and 8 scales. As near as I can tell users can't edit those in Max so this is effectively a default setup.
I'm not sure about that. The last time I used MAX, the approach was that in the project you create a MAX database file which is deployed with the installation (under Build Specifications >> New Installer >> Hardware Configuration). If that is the way you are thinking, then your "default" will only be applied when you install, and installing will also wipe out any changes or additional tasks. Additionally, once a task is in MAX, I'm unaware of a method to "lock" it so that it cannot be edited (jump in here, JG).
However, if you create that task dynamically (delete it if it exists, then add it again) every time you run your software, you will have a task that can be reset to default just by re-running your program (or by pressing a button). And if you do that, you have the major component of the file system/database implementation.
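As a rough illustration of "delete it if it exists, then recreate it" (sketched here with the modern nidaqmx Python package rather than LabVIEW, and with made-up device and task names), the point is that the program, not MAX, owns the task definition:

```python
import nidaqmx
from nidaqmx.system import System

TASK_NAME = "PanelTest"     # hypothetical task name
CHANNELS = "Dev1/ai0:7"     # hypothetical device/terminals

# Delete the saved task if a previous run left one behind
for persisted in System.local().tasks:
    if persisted.name == TASK_NAME:
        persisted.delete()

# Recreate it fresh with the "default" settings on every run
task = nidaqmx.Task(TASK_NAME)
task.ai_channels.add_ai_voltage_chan(CHANNELS, min_val=-10.0, max_val=10.0)
task.timing.cfg_samp_clk_timing(rate=1000.0)
task.save(save_as=TASK_NAME, overwrite_existing_task=True)  # still visible in MAX, but owned by the code
task.close()
```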
I believe any custom setting will require a new task, channels, and scales to be set up in Max. New tasks are created only when significant parameters are changed, such as the number of signals being captured or changing the input terminals. Each Max scale will apply to a specific signal from a sensor, so there will be up to 24 of them. (On second thought... there'll be one scale for each setting on each signal, so there could be a lot of them.) I'll have a global virtual channel for each DAQ terminal giving me 8 of those. I think those scales and virtual channels will be sufficient to cover all combinations.
This bit, I think, will cause them to moan quite a lot, as well as being extremely error-prone. If you had a way to "copy" the default then I don't think it would be so bad, but I'm unaware of a way to do that in MAX.
When the test engineers are setting up a test they can go into Max and look at the Task details to see which terminals should be connected, and change the sampling rate if they want. Then they go into the individual channels and modify the settings and scale to match the sensor signal they have connected to that terminal. There are a lot of scales to choose from, but they are edited rarely, such as when I get an updated cal sheet for a sensor. If I'm thinking about this right, this should give the test engineers a lot of flexibility without completely overwhelming them. Are there any gaping holes I'm missing?
Well, you could update the scales directly from the spec sheet (or an automated derivative) to make your life and theirs easier.
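For instance (again sketched with the nidaqmx Python package, with invented sensor names and cal values), a linear custom scale can be generated straight from a cal-sheet entry rather than typed into MAX by hand:

```python
import nidaqmx
from nidaqmx.constants import UnitsPreScaled

# Hypothetical row from a cal sheet: sensitivity 10.2 mV/g, zero offset 3 mV
sensitivity_v_per_g = 0.0102
offset_v = 0.003

# One linear scale per sensor/setting, named so the engineers can pick it for a channel
nidaqmx.Scale.create_lin_scale(
    scale_name="Accel_SN1234_100g",
    slope=1.0 / sensitivity_v_per_g,                 # volts in -> g out
    y_intercept=-offset_v / sensitivity_v_per_g,
    pre_scaled_units=UnitsPreScaled.VOLTS,
    scaled_units="g",
)
```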
Starting and stopping tasks during the test is out of scope right now. If they *do* come to me with that request, I'll roll my eyes, sigh loudly, and make them feel bad for even asking.
No abort button?
What I meant was actually covered in your previous description, where they have to create a new task.
That is a concern. The application doesn't actually use Max though. The app only cares about the Task. Max is just the interface for configuring the Task that will be run. I think I can design the app in a way that will make it easier to insert a new module that provides an alternate way to configure the Task for a given test.
Indeed. Your application is relying on the most error-prone part of the process (configuring MAX). This is what worries me. But I'm not sure what module you would want to write to configure DAQmx, since the whole purpose behind using MAX is so that you don't have to, is it not?
-
Cool project. Be interesting to hear about what you come up with.
I don't know much about them, but the one thing I do know about scanners is that they have on-board controllers. The motor is usually a stepper motor where each step is defined by the Y-axis resolution: a 300x300 dpi scanner means the motor is stepped 1/300th of an inch at a time, for example (assuming a letter-sized page). I suppose the worst case might be that you just take out the drive and axis and control it directly. I don't know what the interface is, but it wouldn't take you long to figure it out.
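Just to put numbers on that (assuming a letter-size page with roughly 11 in of travel):

$$\frac{1}{300}\,\text{in} \approx 84.7\,\mu\text{m per step}, \qquad 11\,\text{in} \times 300\,\text{steps/in} = 3300\,\text{steps per full-page pass}$$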
-
Hi ShaunR,
I took a look at those VIs and tried to implement them into my VIs. The fact is that other errors just come out, and I'm stuck.
What are the errors?
-
Hi friends,
I am doing an experiment on vibration, where I am using 8 accelerometers with 2 WLS-9163 DAQs carrying NI 9234 analog input modules to acquire data from the transducers.
I have configured both WLS-9163 DAQs successfully with Measurement & Automation Explorer.
Problem Description
I was able to acquire data from both WLS-9163 DAQs separately. Now I need to acquire data from all 8 transducers at a time, but the NI 9234 has only 4 channels, so I am using 2 WLS-9163 DAQs; however, my laptop's WiFi can take only one client at a time, i.e. I can connect to only one WLS-9163. I need help to synchronise both WLS-9163 DAQs so that I can get data from all 8 transducers. I visited the NI website and got some ideas, but it is not working properly; I will attach that with this topic. If anybody in this forum is working on this, or already has an idea about it, please help sort out this problem.
My e-Mail Id :zoom143.ashwn@gmail.com
I'm not sure what you were reading on the NI website, but I think you'll find you may need a wireless router. If you are using Windows 7, you can turn your laptop into one by using this.
-
It's not just the DAQmx part of it; it's also that that aspect of the test system requirements isn't very well defined yet. I do know we won't be using anywhere near 192 different sensors. At most there'll be a pool of maybe two dozen possible individual signals, each with anywhere from 3-6 selectable scales, each with their own calibration info, each potentially measuring something different (acceleration, temp, etc.). We'll be capturing up to 8 signals at a time on arbitrary terminals. The sample rate isn't known yet.
Indeed.
So let's say you use MAX. They create 24 "tasks", set up values for scaling, and calibrate each channel (probably a good half day's work). Then they want to change something on one of the tasks. Do they modify the original task? Or do they create a new task, set up the new scales, and re-calibrate on the premise that they don't want to "lose" the old settings because they might come back to them? So now we may have 48 tasks.
Let's say they keep to 24 tasks. Then they come to you and say "Right, we want a piece of software that logs tasks 1, 3, 5, and 9, except on Wednesday, when we'll be using 1, 6, 12, and 8". How do you store that information in MAX?
Should all that info come from the config file? Should I use a combination of Max and config files? What should the config schema look like? How will I simplify setting up a test in the future? How do I stupid-proof the config file? What are the other signal parameters I haven't thought of? There's lots of questions I don't have answers to right now. So I punted.
That's up to you.
You're the only one who knows what the tests are and what's required. I think what you will find (personally) is that you start off using MAX, then as things progress you find you need more and more external control, until you reach a point where you have so much code just to get around MAX that it is no longer useful and, in fact, becomes a hindrance. But by that time you are committed. That's just my personal experience, and others may find it different.
We actually use several files. One for cal data, one for general settings (graph colours, user preferences, etc.), one for each camera (there can be up to 5), one for DAQ (basic config), one for drive config, and one for test criteria. The operator just selects a part number (or a set of tests, if you like) from a drop-down list and can either run full auto or run a specific test from another drop-down list filtered for that product (well, not filtered, since it is just showing the labels in the test criteria file).
Having a directory structure makes that really easy, since all it is doing is selecting a set of files. I think that type of interface would be a bit further down your life-cycle, but the building blocks started out just as you are currently doing, and all we did was put them all together in a nice fancy user interface (it even uses sub-panels to show some of the original UIs we created when testing the subsystems).
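As a sketch of that "part number selects a set of files" idea (the directory layout and file names here are invented, not our actual structure):

```python
from pathlib import Path
import json

CONFIG_ROOT = Path("config")   # hypothetical layout: config/<part_number>/<aspect>.json

def load_test_set(part_number: str) -> dict:
    """Collect every config file for one part number into a single dictionary."""
    part_dir = CONFIG_ROOT / part_number
    configs = {}
    for f in sorted(part_dir.glob("*.json")):   # e.g. cal.json, daq.json, camera1.json, criteria.json
        configs[f.stem] = json.loads(f.read_text())
    return configs

# Moving to another product (or project) is just a different directory; no code changes
settings = load_test_set("PN-1234")
```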
-
Normally that is what I try to do. In this particular instance I'm choosing an option I know (thanks to feedback on this thread) may present scalability issues in the future. Why? It's the less-risky option right now. DAQmx is new technology for me that I don't understand well enough to be comfortable rolling my own Max interface. I could spend a lot of time learning DAQmx and implementing something that ultimately doesn't work. Instead of risking wasting that dev time, I'll abstract the Max aspect of the app and replace it with something that scales better when the need arises.
I think you are just intimidated by the fact that you have not used it before. Thirty minutes and a few examples (there are lots) with a USB DAQ should be enough. You will quickly discover it's not really that different from using VISA, TCP/IP, IMAQ or any other "open, do something, close" API. Heck, even use the Express VIs and you will have most of the functionality of MAX.
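For what it's worth, the whole "open, do something, close" shape in DAQmx looks roughly like this (sketched with the nidaqmx Python package and a made-up device name):

```python
import nidaqmx

# "open": create a task and add a channel
with nidaqmx.Task() as task:                          # "close" happens automatically on exit
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # "do something": configure timing and read a block of samples
    task.timing.cfg_samp_clk_timing(rate=1000.0, samps_per_chan=100)
    samples = task.read(number_of_samples_per_channel=100)
print(len(samples), "samples read")
```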
-
In the same vein as Shaun, and beginning to push the off-topic envelope further, I get concerned when I hear about OpenG and other 3rd party tools seeming to become "coin of the realm". Yes, I'm an oldline LV hack (and an older line C/Unix hack -- and I DO mean C), but I don't like having to rely on 3rd party tools that are tied to certain non-NI implementations. For my purposes it works far better to use only NI "tools" and, no, at this point I haven't found scripting to be directly useful; so Shaun that club has at least doubled in size.
On the other hand, would it be useful to ALL (or most) LV users to have xnode functionality more exposed -- perhaps. I don't know. Was it useful for scripting to be officially released -- again, perhaps. But not if the outcome of that release is more 3rd party tools instead of one integrated environment that has a large, multi-use/multi-user base. After all, I'm STILL using the Blowfish implementation based on a 1998 CIN implementation! It was the ONLY option for a long time and still remained the best option -- for me (yes!) -- after other options became available. I really don't want to see similar patterns repeated.
And that's why -- despite the HIGH quality of JKI tools -- I always ask: Is there a way to implement "x" WITHOUT having to use OpenG or...
OK, I'll slink back into the shadows...too much work to do.
Couldn't agree more. There's nothing more annoying to me than finding a piece of code that I'm interested in, only to discover I have to download VIPM, then the RCF, and also install 5 other OpenG libraries that I neither want nor use.
I wonder how many people actually read all the licensing, and actually do distribute the source code, licence, and associated files when they build an application with 3rd-party tools (not necessarily OpenG)? Might be a good poll.
-
Hi ShaunR,
The programs work perfectly. But when I press the stop button on the client side, the programs stop running. I put a while loop around the client diagram to make it able to play again after it stops, but it doesn't work. (What I'm trying to do is make the client side able to play and display the image from the server side even after it has stopped, so it can play over and over while the system is running.)
Do you know how?
Take a look at Data Client.vi and Data Server.vi in the NI examples.
-
Need you ask?
I've held positions across nearly all the stages of a product's life cycle: Conception, development, transition to manufacturing, manufacturing sustaining, etc. The only stage I haven't been involved in is end-of-life.
Well, you never know. It's a bit like mathematicians: there are pure mathematicians and applied mathematicians. Pure mathematicians are more interested in the elegance of arriving at a solution, whereas applied mathematicians are more interested in what the solution can provide.
Well, you've got the control and the expertise, but maybe not the tool-kit that comes from programming in those positions.
But back to MAX. I (and the engineers that use the file system) just find it much quicker and easier to maintain and modify. Like I said, we have lots of IO (analogue and digital) and find MAX tedious and time-consuming. A single Excel spreadsheet for the whole system is much easier, and when we move to another project we don't have to change any configuration code, just the spreadsheet, which can be done by anyone more or less straight from the design spec (if there is one).
But you know your processes. A man of your calibre, I'm sure, will look at the possible alternatives and choose one that not only fixes the problem now, but is scalable and will (with a small hammer) fit tomorrow's as well.
-
Give this a try.
-
Hi ShaunR, I have a question regarding these VIs.
I want to put play and stop button at the client side.
So when I press play, the camera shows the images, and when I press stop, the camera stops receiving the images from the server. And that may loop over and over again.
Is it possible to implement?
Yes. Take a look at Data Client.vi and Data Server.vi in the NI examples.
-
This was a question at NI Week 2010 Coding Challenge.
You should have entered!
If it was in Oz I probably would have
-
Take a look at Data Client.vi and Data Server.vi in the NI examples.
It uses 1 channel. The client sends back the letter Q to the server (on the same connection) to stop the server sending data.
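A bare-bones sketch of that stop-command-on-the-same-connection idea (plain Python sockets, with a made-up port and the image data faked as a few bytes):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 6340   # hypothetical address

def server():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        conn.settimeout(0.0)                  # poll for the stop command without blocking
        while True:
            try:
                if conn.recv(1) == b"Q":      # client said stop, on the same connection
                    break
            except BlockingIOError:
                pass
            conn.sendall(b"<frame bytes>")    # stand-in for one image frame

def client(frames=5):
    with socket.create_connection((HOST, PORT)) as sock:
        for _ in range(frames):
            print("got", sock.recv(4096))
        sock.sendall(b"Q")                    # tell the server to stop sending

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                               # give the listener a moment to come up
client()
```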
Oh. And you can get the IP address by using "IpTostr" and "StrToIP" instead of executing IPconfig and formatting the result.
(I'd post a picture, but for some reason image uploading is failing)
Weird. Upload fails if I change from quick to full edit. But straight reply is fine.
-
Well, they certainly look OK to me.
-
Sorry, my friend, but I have to disagree with you. I believe what you are saying is different from what he suggested.
From what I understand, you suggest not taking the gain into account in this calculation, but taking into account the reading of this gain. I believe this is wrong for two reasons:
- If you take a closer look at the two images that I have posted, you will see that in the first one the range on the DAQ board is ±0.05 (gain = 100), while in the second one the range is ±0.5 (gain = 10), which means that the accuracy calculator does take the gain into account in the calculation.
- The calculation 0.001203 × 0.0588/100 + (100 + 5.04) × 10⁻⁶ = 0.1057 mV (which you told me) is different from the result of the accuracy calculator (0.0018 mV) if the input value is as you suggested below.
So the calculation you suggested would not be in error by a factor of 100, but it would be decreased by 70 mV from what I had calculated.
Anyway, I have to agree with you that this is a "black box".
Thank you, ShaunR, for your thorough answers; you helped me a lot.
I fail to see where in
Absolute Accuracy = ±[(Input Voltage × % of Reading) + Offset + System Noise + Temperature Drift]
gain is used since it is a sub-component of "Reading".
I took your word on the 100 + 5.14, since I didn't have that info (nor could I find the 28.9 + 2.75 in the spec pages you pointed me to, which is where the 70 mV lies, if that is the "system noise" and offset). But it was glaringly obvious that 0.1203 was incorrect. Perhaps I should have said "about 100".
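For what it's worth, plugging the thread's own numbers into that formula (0.0588 as the % of reading and 100 + 5.04 µV as offset plus system noise, with temperature drift left out as in the original calculation) gives:

$$\pm\left[0.001203\,\text{V} \times \frac{0.0588}{100} + (100 + 5.04)\times 10^{-6}\,\text{V}\right] \approx \pm 1.057\times 10^{-4}\,\text{V} = \pm 0.1057\,\text{mV}$$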
But you have an answer you are happy with so that's good.
-
I have never understood those people who always go for the cheapest DAQ just because it's the cheapest.
IMHO, the most powerful thing about NI is not their hardware and not LabVIEW (don't get me wrong, these are both fantastic); it's their drivers - the connection between the IDE and the hardware.
Again, IMHO, for the extra cost upfront of going with more expensive hardware, you are going to save a bucket-load of time (and therefore money) by having a reliable driver set, e.g. standardization/familiarity, support, flexibility, upgrades, etc.
So a lot of the time (for what we do) it makes sense to choose NI. (However, we are also an Alliance Partner, so we are biased.)
You are quite right. It is the synergy between their hardware and the software (sometimes we forget LabWindows) that makes them the obvious choice. And one of the main reasons LabVIEW is as successful as it is is that it turns a software engineer into a systems engineer (much more useful). However, if all you need is a dumb remote analogue or digital device, then the cost of cRIO or FieldPoint ($2,000-$4,000) cannot be justified against a $200 Ethernet device from another well-known manufacturer.
But having said that, I think it has more to do with confidence and experience than anything else. I am comfortable interfacing to anything in any language (but I will fight like buggery to use LabVIEW). If someone has only used LabVIEW and only knows LabVIEW products, then it's a low-risk, sure bet.
Running LabVIEW app on CDROM without run-time engine installed
I remember that article too. I also remember creating a distribution with just the lvrte.exe and the advanalys.dll (the latter was required for a lot of the trig/maths functions). And, if I remember correctly, if you were using other features you had to include those as well (like the VISA, DAQ, and IMAQ DLLs).