Everything posted by ShaunR
-
Loop timing & Execution differences (non-RT system)
ShaunR replied to Stranman's topic in Application Design & Architecture
A while loop with nothing in it will be the fastest. But it will hog your processor (although I've noticed that in 2009 it only hogs 50% instead of 100%) and it will be hit and miss whether your other similar loops get a look in. Next best is usually a Wait (ms) with a 0 ms wait, since it allows a context switch so your other loops at least get a chance. Timed loops under Windows are better used as a periodic function rather than (in the real-time sense) a deterministic one, since you are still at the mercy of the Windows scheduler. The rule of thumb is: if it's time critical... don't use windoze!
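(You can see the same trade-off outside Labview. Here's a minimal Python sketch of the idea, purely illustrative and nothing to do with the Labview primitives themselves: a free-running loop against one that gives up its time slice every iteration.)

import threading
import time

def busy_loop(stop, counter):
    # Free-running loop: maximum iteration rate, but it spins flat out
    # and starves anything else trying to share the same core.
    while not stop.is_set():
        counter[0] += 1

def polite_loop(stop, counter):
    # The "wait 0 ms" equivalent: sleep(0) yields the rest of the time
    # slice so other loops at least get a look in.
    while not stop.is_set():
        counter[0] += 1
        time.sleep(0)

stop = threading.Event()
busy_count, polite_count = [0], [0]
threads = [threading.Thread(target=busy_loop, args=(stop, busy_count)),
           threading.Thread(target=polite_loop, args=(stop, polite_count))]
for t in threads:
    t.start()
time.sleep(1.0)          # let both loops run for a second
stop.set()
for t in threads:
    t.join()
print("busy:", busy_count[0], "polite:", polite_count[0])

The free-running loop will happily eat the processor for that second; the sleep(0) one gives everything else a chance at the scheduler.
-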
Labview is highly optimised for "for" loops. They are very efficient. Far more so than most other array primitives/operators. Try it.
-
Literally... you don't need the for loop. It still works because it's only detecting zeros, which (in both your examples) only exist in the gaps.
-
Ahh. You can. You don't need the for loop either
-
I bet you say that to all the boys
-
I bet it isn't
-
For this signal, you can just look for the zeros in your data, which mark the start and end of the gaps.
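If it helps to see the idea in text form, here's a rough sketch (Python/NumPy, made-up data and variable names) of pulling out the start and end indices of the zero runs:

import numpy as np

signal = np.array([3, 5, 4, 0, 0, 0, 6, 7, 0, 0, 5, 4])  # made-up data

is_zero = signal == 0
# A gap starts where is_zero goes False->True and ends where it goes True->False.
edges = np.diff(is_zero.astype(int))
gap_starts = np.where(edges == 1)[0] + 1
gap_ends = np.where(edges == -1)[0]
# Handle gaps touching the array boundaries.
if is_zero[0]:
    gap_starts = np.insert(gap_starts, 0, 0)
if is_zero[-1]:
    gap_ends = np.append(gap_ends, len(signal) - 1)
print(list(zip(gap_starts, gap_ends)))   # [(3, 5), (8, 9)]

The same trick works for gaps of near-zero values if you threshold is_zero instead of comparing to exactly 0.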
-
Probe Watch Window usability issue
ShaunR replied to PJM_labview's topic in Development Environment (IDE)
+1. I also don't like the fact that closing that huge probe manager closes all the probes. I think it would be better if it were more of a dock, where we could drag probes in and out, and closing it would only close the ones inside it.
-
I think your main problem is probably because you are trying to mix and match LV x32 and x64 device drivers. You cannot compile 64-bit applications with Labview x32 and vice versa (a bit short-sighted in my view), hence the reason I have both installed. Time will tell if LV is able to choose the correct device drivers to compile with depending on which LV version I am running (haven't got that far yet). When I installed, the device driver installer chose either x32 or x64 based on the Labview version I had just installed rather than the OS. I chose the device driver custom install in the Labview installer, which prompted me to insert the DVD, and it had already chosen the appropriate bitness (is that a word?).
-
Tape at all signifies an engineer. Bog standard geeks use plasters. First degree? How many have you got? Contracting is good money, but it has a lot of downsides and work can be sporadic. So did they?
-
I'm running Labview 2009 x64 on Win 7 (in fact I have both the 32-bit AND the 64-bit installed side by side). They both installed without any problems, although the Developer Suite DVD didn't have LV x64, only the 32-bit one, so I had to download it instead (interestingly, the device driver DVD has support for both and automagically chooses the correct ones). You might check first that the Labview 2009 you have is indeed the x64 version and not the x32 one. If that is OK, you can try booting up and pressing F8 to get the boot menu and selecting "Disable Driver Signature Enforcement" (there have been a few posts about Win 7 being a bit pedantic about signing) and then trying to install. And as belt and braces, choose "Run As Administrator" in case permissions are the problem.
-
Steeerike 1! It's rare that engineers have the discipline to keep documentation if the environment doesn't require it. Nice one.

Well, that's the other aspect of documentation... traceability. If the formal project process is followed, the code is traceable right back to the requirements spec (the SOW is usually the response to a requirements spec).

True engineer? Does that mean we all weigh 5 stone, have chronic acne, wear wire-framed glasses and have a hunch from sitting at monitors all day? Is that what he means?

It's part of our standard document deliverables, along with drawings and a maintenance manual: 3 paper copies of each (one for the maintenance dept, one by the machine and one for the engineering dept) along with one electronic copy of each on CD.

Design by committee never works. Too many cooks etc, etc. I'm quite surprised, however, that training wasn't part of the deliverables. You get overtime?.... wow.

I've also heard the "This time we're going to do it right!" one before. Next time they say that, ask them what the project risks are and what the contingency plans are to mitigate them. If they stare at you blankly, it's going to be the same as the last one. If they bore you silly in the first 2 minutes with schedules and plans, there is hope! You know at least they have thought about it.
-
I still must be missing something here. I still don't see the purpose behind the "Executive" other than as a translator between the "tests" and the SOAP server, which is itself a translator to the Master System. Let's assume the C# bit doesn't exist, or better, that it's completely transparent, so as far as your executive is concerned it receives information about a test (e.g. test name and limits) directly from the Master System. Your executive receives this info about a test to perform and does..... what? Calls a single test? Where is the logic that selects a test based on the info received (going by your drawings)? I may just be getting confused by the main and user interface, but it seems that the "executive" should be able to call more than one test, so instead of your drawing showing an executive and user interface for each test (the tests are hard coded, you say), it could just invoke the appropriate test and show its front panel (which is the user interface). The test can still be run locally by just double-clicking on the test VI, and your "executive" can invoke 3, 4, n tests to run concurrently, sucking up the messages from your queues and relaying them back to the Master System. In this scenario, the "Executive" is the same as your supplier's interface and there is no "executive", only the tests.

Hmmm. What about if I described the little bubble in my noggin this way. If you combined the Labview Server with the Executive, removed the interface from the executive (hide its front panel) and instead used the test VIs' front panels as the interface, then gave the Executive the capability to dynamically load tests (i.e. execute one, leave it running, execute another, leave it running and monitor the queues), I think you would have pretty much the same functionality as your supplier with fewer hierarchical levels.

This bit does....lol. I think you are describing 2 tests that may be standalone tests in their own right, but may also have to pause/wait if other tests are running concurrently. For this to happen you have (as I see it) three choices (other people may be able to see more, that's the beauty of forums ):

1. A single all-powerful, intelligent sequencer (classic) that knows what to do and when, and orchestrates everything.
2. Get the tests to chat amongst themselves and only bug the "Executive/translator" when something important happens.
3. Or (what I call) a dumb sequencer (probably a good fit for your topology) that doesn't know anything about the tests (only which tests are running) but routes requests and messages from the tests (which only know about their own test), and the tests wait until they get the nod.

You are probably used to the first one, intimidated by the second and never heard of the 3rd....lol. The way the 3rd one works is this.

Day 1. Test A starts running and says "hey, can I run now?" and waits. The sequencer says "Sure", cos the sequencer knows that Test A is the only test running. Test A says "ta very muchly..... here's the result". They all go home to the missus.

Day 2. Test A starts running and says "hey, can I run now?" and waits. The sequencer is silent because it knows Test B is already running and Test B must get to its lunch break before Test A can start. About 12 o'clock, Test B says to the sequencer "I'm off to lunch now" and pauses. The sequencer finally says to Test A "Sure". A mightily relieved Test A says "ta very muchly... here's the result". Later, Test B comes back from lunch and says "hey, can I run now?". The sequencer says "Sure" because it knows Test A slipped out the back door early and now Test B is alone in the lab. Test B says "ta very muchly... here's the result". And they all go home to Test A's missus...lol.

Personification aside, what the third option would enable you to do is allow interaction with the master server, either by pausing/stopping/reconfiguring the tests or by relaying status information back to the Master, since the "Executive/translator" is in the message loop (functional events, if you like). Waffle, waffle...lol
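If it helps, here is a very rough sketch of option 3 in Python (the queue names, the "one test at a time" rule and the message shapes are all mine, purely illustrative, not a drop-in for your system). The point is that the message loop only knows who holds the bench, not what the tests do:

import queue
import threading
import time

to_sequencer = queue.Queue()                            # tests -> sequencer
replies = {"A": queue.Queue(), "B": queue.Queue()}      # sequencer -> each test

def test(name, work_time):
    # The test only knows about itself: ask, wait for the nod, run, report.
    to_sequencer.put(("can_i_run", name))
    replies[name].get()                       # blocks until "sure"
    time.sleep(work_time)                     # the actual measurement
    to_sequencer.put(("result", name, f"{name} passed"))

def sequencer():
    # "Dumb" sequencer: it knows nothing about the tests themselves,
    # only which one currently holds the bench, and it relays results.
    running = None
    waiting = []
    done = 0
    while done < 2:
        msg = to_sequencer.get()
        if msg[0] == "can_i_run":
            if running is None:
                running = msg[1]
                replies[msg[1]].put("sure")
            else:
                waiting.append(msg[1])        # silence until the bench frees up
        elif msg[0] == "result":
            print("relayed to Master System:", msg[2])
            done += 1
            running = waiting.pop(0) if waiting else None
            if running:
                replies[running].put("sure")

threads = [threading.Thread(target=sequencer),
           threading.Thread(target=test, args=("A", 0.2)),
           threading.Thread(target=test, args=("B", 0.2))]
for t in threads:
    t.start()
for t in threads:
    t.join()

Adding pause ("I'm off to lunch") is just another message type the loop routes; the sequencer still never needs to know what the tests actually do.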
-
Now that's something I'd pay to see.
-
Pretty soon you'll have to buy a license for each palette item, the way they seem to be modularising and licensing. I'm already up to 23 activation codes for my developer suite. It grows by about 3 licenses every year and I've had the same suite for 4 years.
-
I had a funny one today in LV 2009. I had a sequence engine running and one other VI running (mainly keeping image references in memory so I could stop and start the engine). The sequence engine would only execute (by that I mean its state machine would only go from one state to the next) if the diagram was in the foreground, or if I moved the mouse over menu items (in the main menu) when the front panel was in the foreground. How the hell do you debug that?
-
Using user-defined events instead of queues.
ShaunR replied to dannyt's topic in Application Design & Architecture
If you can find my callbacks example (originally posted in the old forum), it does exactly this. A callback is installed on any invoked VI and fires when a control or indicator changes. When a control changes, the callback is invoked automagically, sending the control refnum as a parameter of the event. It is received in the event structure in the main VI and various information about the control (text, value, image) is displayed. This is all that's in the callback:
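In text form (a rough Python observer sketch with made-up names, not the Labview code itself), the whole mechanism is roughly this; the "callback" body really is just the one line that posts the control reference onto the event:

import queue

event_queue = queue.Queue()   # plays the role of the user event

class Control:
    # Stand-in for a front-panel control: setting .value fires the callback.
    def __init__(self, name, value=0):
        self.name, self._value = name, value
    @property
    def value(self):
        return self._value
    @value.setter
    def value(self, new):
        self._value = new
        value_changed(self)          # the "callback" - one line of real work

def value_changed(control):
    # All the callback does: post the control reference onto the event.
    event_queue.put(control)

# Main VI side: the event structure pulls the refnum and reads whatever it wants.
ctrl = Control("Numeric 1")
ctrl.value = 42
changed = event_queue.get()
print(changed.name, "is now", changed.value)

The main VI's event structure is the event_queue.get() end of this; the callback itself never grows beyond posting the refnum.
-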
Your big project is for your mother? I'm working on justifying something like this
-
OK. I think I'm getting the gist of it. You will notice that the supplier has broken the direct tie (RMS WS and TCS WS) between the master test system and the test layers (as opposed to just the incoming one via your C# server). This is because it enables them to completely manage their inter-process comms without limitation. They can not only interpret requests from the Master System and re-interpret them in a form that the subsystems can understand, but can also use a far greater vocabulary for inter-process comms and filter/re-interpret it back to the master.

I'm not quite sure what the difference is between the "Executive" and the "test" in terms of your Labview program, since the test VI will have a user interface and it seems only one test VI is used by each "Executive", so the purpose behind "Main" isn't clear to me (generalised diagram?). I could understand it if the "Executive" could invoke or choose between multiple "tests", because it would basically be a plug-in architecture. But soldiering on.....

I would have used a similar topology to your supplier's with what you describe, but the interface layer would have been Labview. The interface would basically have been a client/server with a few special case statements. On the one side (RMS WS) it would include a dynamic loader which could take the test name from the master and invoke the "Executive" for that test, configure it and tell it things like stop, exit, pause, run etc (if it is something I have written, or execute and close it if it is a 3rd-party exe). Basically, invoke the test and pass on the parameters from the master. On the other side (TCS WS) I would have a mechanism (probably a queue) that receives info (status, results, progress, errors etc) from the "Executive" (can be one or more), filters out local information and repackages or retransmits information destined for the master.

How this would be realised is really dependent on how much control you have over the other parts of the system. If one of the tests is just an executable, you may be able to use DDE, or perhaps it has a config file you can modify before executing it, but you are at the mercy of the forethought of the originator. If you have written the code, you can make it really slick.
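To make that interface layer a bit more concrete, here's a rough sketch in Python (everything in it, the names, the message shapes, the "local vs master" filter, is invented for illustration, not your actual system):

import queue
import threading

def pressure_test(params, report):      # stand-in for one "Executive"/test
    report.put({"dest": "local", "msg": "pressure test started"})
    report.put({"dest": "master", "msg": f"pressure result: {params}"})

TESTS = {"pressure": pressure_test}     # the dynamic loader's lookup table

report_queue = queue.Queue()            # TCS side: everything the tests emit

def rms_side(test_name, params):
    # RMS WS side: take the test name from the Master, invoke the right
    # test, pass on the parameters.
    t = threading.Thread(target=TESTS[test_name], args=(params, report_queue))
    t.start()
    return t

def tcs_side():
    # TCS WS side: drain the queue, drop local chatter, forward the rest.
    while True:
        item = report_queue.get()
        if item is None:
            break
        if item["dest"] == "master":
            print("forwarded to Master System:", item["msg"])

relay = threading.Thread(target=tcs_side)
relay.start()
test_thread = rms_side("pressure", {"limit": 3.5})
test_thread.join()                      # let the demo test finish
report_queue.put(None)                  # then shut the relay down
relay.join()

Swap the TESTS lookup for dynamically loaded test code and the print for whatever the master actually speaks, and you're most of the way to the supplier's topology.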
-
I've just upgraded to LV2009 and........ Is it just me or does it make 8.6.1 look like a paraplegic sloth? It seems far quicker to load, a lot more responsive and debug execution seems brisk.
-
He must have got burned recently...lol. Just think of how cool it would have been to have said "what? like this one?" and plonked a document in his lap.

That was how it was at my place. Not any more, he, he. My templates are even on the intranet now. When the nuts are on the block (or whatever the female equivalent is), he/she who has the document wins! Other team members caught on pretty quickly that my response to a customer's "that's not what I asked for/want/meant" was a black and white section in the SOW that they had signed, closely followed by "would you like me to quote you for that feature?". Internally I wasn't quite as harsh, but it is extremely good leverage for extending timescales if they don't like what they see because of poor communication. After all, you have proof that you did as asked/described and they signed it! Ooooh. I sound like a tyrant/quality engineer...lol.

But seriously, my code reflects my documents rather than the documents reflecting my code. That was the way I was trained and it has stood me in good stead ever since. I have an answer (in writing) for all the naysayers and 80% of the documentation before I start coding. And it means I can offload the user manual (another thing I hate doing) to a technical writer.
-
A 250 GB SSD for about $1000 is about 2 man-days ($ for $). Ya just have to convince the powers that be that it will take you more than 8 days to find a solution and code around the drive limitation (and throw in that even then it may not work..... risk ), and highlight that it will cut your delivery timescale by 2 weeks. Be creative.
-
There are techniques for handling agile specifications (google for iterative and incremental life-cycles). The only point I was trying to make was that software should be designed (which is actually your documentation) then coded, rather than coded then documented. It doesn't really matter whether you're an old crusty like me and use Word, or a super "with it" and use a UML tool. You can usually get away with "growing" software if you are a team of one, but add another person or two and it is imperative to document first. This is especially true if you have to interface with other disciplines. The other "human" aspect is that documenting is arguably the least stimulating task for a programmer, so you are much less likely to do it at the end of a project than at the beginning.

A tried and tested method to "manage" your customers/users/consumers if they are always moving the goal posts is to get them to sign up to an initial spec (Statement of Work) and, if they want to change it, tell them to make the changes to the document and you will quote accordingly, or, if it isn't chargeable (e.g. an internal customer), tell them the impact on the delivery date. This causes them not only to go away, think about what they want and put it in writing, but also forces them to justify the changes (to the signatories) and filters out non-imperative demands. After all..... they want everything for nothing, n'est-ce pas?

Can't fault that. Good balance and it will lead to straightforward, easy-to-understand code. I also strongly agree with the last bit (i.e. no hiding state selections in sub-VIs).
-
Indeed. I have nothing against keeping data in shift registers. What has using clusters or not got to do with design patterns? I don't mind people looking at my code as long as they have read the spec first! That will tell them not only the design pattern, but how it works, why it works and, above all, which VIs do what. Please don't tell me you are of the impression that Labview code is self-explanatory with a few comments.

Indeed. The globals and file access are for shared data (product numbers, limits, images etc). The only information a state machine needs to know (generally, not entirely) is which state to execute next and, as I think I said, if data is dependent on previous states, then those states can probably be serialised into a single state. I wouldn't (for example) have individual states for create image, acquire image and process image and pass the image around. Instead I would have a single state (take image?) that uses a functional global to retrieve a pre-initialised image (blank image), acquire the image, process it, then put the image back in the functional global before moving on to another state. That way the state machine represents the functional operations (move motor in, open gripper, dispense part, close gripper, take image, move motor out) rather than the discrete steps required to achieve the function (get motor position, move motor, stop motor, check motor position, get gripper number, open gripper, check gripper is open etc, etc).
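In text form, a rough sketch of that shape (Python, with a dict standing in for the functional global and obviously none of the real hardware or imaging calls):

# "Functional global": one shared store the states read from and write back to.
shared = {"image": None, "next": "move motor in"}

def take_image():
    # One coarse state that does everything image-related internally,
    # instead of create/acquire/process being separate states.
    img = shared["image"] or [0] * 16        # pre-initialised blank image
    img = [p + 1 for p in img]               # pretend "acquire"
    img = [p * 2 for p in img]               # pretend "process"
    shared["image"] = img                    # put it back before moving on
    return "move motor out"

def move_motor_in():  return "take image"
def move_motor_out(): return "done"

STATES = {"move motor in": move_motor_in,
          "take image": take_image,
          "move motor out": move_motor_out}

state = shared["next"]
while state != "done":
    # The state machine only ever decides which functional operation runs next.
    state = STATES[state]()
print("finished, image length:", len(shared["image"]))

The while loop only ever asks "which functional operation next?"; everything image-related stays inside the one state.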