Posts posted by ShaunR

  1. It's in the upgrade notes that old code will not run without a licence in future LV versions.

    So unlike the Event structure, which is unavailable for editing in lower LV versions but can at least be USED, the MathScript removal also invalidates any past work which has been done.

    I don't like the way NI is going about this.

    Shane.

    Pretty soon you'll have to buy a license for each palette item, the way they seem to be modularising and licensing. I'm already up to 23 activation codes for my developer suite. It grows by about 3 licenses every year, and I've had the same suite for 4 years.

  2. I wanted to post about how impressively fast I thought 2009 was compared to 8.6.1, but I didn't have all the modules installed at the time and thought it might be due to that.

    But yes, I have noticed that it is quick to start, even with all the palettes loading on startup, RCF, etc...

    :thumbup1:

    I had a funny one today in LV 2009.

    I had a sequence engine running and one other VI running (mainly keeping image references in memory so I could stop and start the engine). The sequence engine would only execute (by that I mean its state machine would only go from one state to the next) if the diagram was in the foreground, or if I moved the mouse over menu items (in the main menu) when the front panel was in the foreground. :blink: How the hell do you debug that?

  3. Accepting Callback VI Refnums - Same idea as the event reg refnum except the subscriber passes vi references to the registration methods. When the event fires the class automatically executes the callback VIs. The problem with this is I haven't figured out a good way to pass data with the events. If I invoke the callbacks using a Call By Ref node the callback has to finish executing before the thread returns to the class. When multiple callbacks are registered for a single event one poorly written callback could significantly delay (or prevent) other callbacks being executed. Maybe the answer is to require the subscriber to query for data after the event...Any other ideas?

    If you can find my callbacks example (originally posted in the old forum), it does exactly this. A callback is installed in the invoked VI and fires when a control or indicator changes. When a control changes, the callback is invoked automagically, sending the control refnum as a parameter of the event. It is received in the event structure in the main VI, and various information about the control (text, value, image) is displayed.

    This is all that's in the callback:
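
    (As a rough text stand-in for the idea, not the original diagram: a hypothetical Python sketch with made-up names, since a LabVIEW diagram can't be reproduced inline. The fire-on-own-thread part is one possible answer to the Call By Ref blocking problem described in the quote.)

    import threading

    class EventSource:
        # Stand-in for the class that owns the event (hypothetical).
        def __init__(self):
            self._callbacks = []

        def register_callback(self, vi_ref):
            # The subscriber passes a "VI reference" (here: any callable).
            self._callbacks.append(vi_ref)

        def fire(self, control_ref):
            # Launch each callback on its own thread so one slow or badly
            # written callback can't delay the others.
            for cb in self._callbacks:
                threading.Thread(target=cb, args=(control_ref,)).start()

    def on_control_changed(control_ref):
        # The callback itself does almost nothing: it forwards the refnum
        # (and whatever data rides with it) on to the main VI's event handler.
        print("control changed:", control_ref)

    source = EventSource()
    source.register_callback(on_control_changed)
    source.fire("Numeric 1")   # fire the event, passing the control refnum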

  4. Interesting idea! However, I'd have to get 4 SSDs to replace my 1TB drive. And I'm delivering the system with 2 drives...

    I may go ahead and buy an SSD just to see if they're as fast as they say they are. My *real* application (the Big Project) would need to replace 70 TB drives, about 6 times a year. I don't think that's happening any time in the near future! I'd have a hard time justifying that to my mother when she asks me if I've been spending her tax dollars wisely. :)

    Your big project is for your mother? :blink:

    I'm working on justifying something like this :)

  5. Sorry, I have not had time to respond recently.

    ShaunR, I like that you have taken the time to understand my requirement, and I think you have done well with my limited description.

    I don't know where the boundary lies in what I am allowed to say, so I'm afraid I have to be a little bit vague just to cover my own back; I hope you can understand.

    The set up is like this:-

    post-11721-125009554574_thumb.jpg

    Now, what needs to be understood from this setup is that the blue dotted RF measurement section can be my software (I call it the 'test exec'; it is based on legacy company software that I will replace) or any of our other suppliers'. It is designed this way because they can all be attached at once, or in any combination, to test different parts of the DUT at the same time. They can also be run locally, without the Test System Controller, for debug, development and diagnostic checks.

    I have this pretty much figured out. The interfaces were set by the rest of the team before I joined; they are using SOAP messaging, so I did what I could and implemented a layer in C# which acts as a server that talks to LabVIEW, and I built a native LabVIEW client.

    I am happy with this part of the solution. My executive works really well and handles tests well. I gained more confidence after reading a couple of other posts that described a similar situation using a QSM architecture. However, my solution is a little convoluted due to the team's choice of SOAP, which is not handled well in LabVIEW.

    The overall system is designed this way to remove any knowledge of the testing from the test controller, making it usable in all projects. The only parts that are test specific are the data in the database and (in my case) the test VIs. My executive doesn't know what the test is; it just loads the test based on a single parameter in the execute XML received from the Test Controller.

    OK, now then: my original question was based around the best way to implement what our supplier is doing for this current project. They are effectively running two testers at the same time that work in collaboration to achieve more complex tests. There is no direct requirement to achieve this straight away, but I thought it may be worth looking into now, so that I can add hooks (or loops, or whatever you want to call them) to help later.

    They have done this by adding a comms layer above their tester. This layer accepts the comms from the Test System Controller and then, once the SOAP has been deciphered, uses a different, simplified dialog/protocol to talk to the testers.

    Sort of like this (excuse the quick and approximate drawing):-

    post-11721-12500989542_thumb.jpg

    I wanted to see how others would approach this sort of thing. Does this make any more sense? This idea seems the most obvious to me and is fairly simple. It involves an extra layer of communication; I think I could expand my C# interface and use it to work out which tester the commands should be sent to.

    I hope that this at least gives you all an idea of what I am doing, and hopefully explains why I am not using TestStand. I will try to answer any more questions if I can.

    Many Thanks in advance for all/any help.

    Neil.

    OK. I think I'm getting the gist of it.

    You will notice that the supplier has broken the direct tie (RMS WS and TCS WS) between the master test system and the test layers (as opposed to just the incoming side via your C# server). This is because it enables them to completely manage their inter-process comms without limitation. They can not only interpret requests from the Master System and re-interpret them in a form that the subsystems can understand, but can also use a far greater vocabulary for inter-process comms and filter/re-interpret back to the master.

    I'm not quite sure what the difference is between the "Executive" and the "test" in terms of your LabVIEW program, since the test VI will have a user interface and it seems only one test VI is used by each "Executive", so the purpose behind "Main" isn't clear to me (generalised diagram?). I could understand it if the "Executive" could invoke or choose between multiple "tests", because it would basically be a plug-in architecture. But soldiering on.....

    I would have used a similar topology to your supplier's for what you describe, but the interface layer would have been LabVIEW :):P . The interface would basically have been a client/server with a few special case statements. On the one side (RMS WS) it would include a dynamic loader which could take the test name from the master and invoke the "Executive" for that test, configure it, and tell it things like stop, exit, pause, run etc. (if it is something I have written), or execute and close it (if it is a 3rd-party exe). Basically, invoke the test and pass on the parameters from the master. On the other side (TCS WS) I would have a mechanism (probably a queue) that receives info (status, results, progress, errors etc.) from the "Executive" (there can be one or more), filters out local information, and repackages or retransmits information destined for the master.

    How this would be realised really depends on how much control you have over the other parts of the system. If one of the tests is just an executable, you may be able to use DDE, or perhaps it has a config file you can modify before executing it, but you are at the mercy of the forethought of the originator. If you have written the code yourself, you can make it really slick.
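
    As a rough illustration of that interface layer, here is a minimal Python sketch (the module, test names and message kinds are all my own invention; in LabVIEW the loader would be something like Open VI Reference plus Call By Reference, and the mechanism on the TCS WS side a named queue):

    import queue, threading, time

    # Stand-in for the dynamic loader: test name -> "Executive" to invoke.
    TESTS = {
        "RF Tests": lambda params, status: status.put(("result", params["runs"] * 2)),
    }

    status_q = queue.Queue()   # TCS WS side: Executives push status/results here

    def handle_master_request(test_name, params):
        # RMS WS side: take the test name from the master and invoke
        # the "Executive" for that test, passing on the parameters.
        TESTS[test_name](params, status_q)

    def forward_to_master():
        # Filter out local-only traffic; repackage the rest for the master.
        while True:
            kind, payload = status_q.get()
            if kind in ("result", "error", "progress"):
                print("-> master:", kind, payload)   # stand-in for the SOAP reply

    threading.Thread(target=forward_to_master, daemon=True).start()
    handle_master_request("RF Tests", {"runs": 3})
    time.sleep(0.2)   # demo only: give the forwarder a moment before exit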

  6. I just about went off on another rant about my job. Two mornings in a row would be too much.

    Reader's Digest version: I do have the luxury, and the pain, of being a "team of one". Everyone else here is working in some variant of C. I interface to a lot of different systems, and those interfaces I insist on documenting. With external documentation. What I was trying to address in my original post is internal documentation for *me* when I have to go back 3 years later and remember what I was doing to reuse code. Unless I get hit by a bus on the way to work someday, the odds of anyone else ever looking at my code are pretty slim, so any code cleanup/documentation of my own software is entirely for my own benefit.

    The culture here is to do everything on-the-fly. Mid-range projects ($250k+) and up have SOWs and appropriate top level design docs, but after those pass review, all bets are off. "Design meetings" consist of me and a user sitting down in front of a computer and them telling me something like, "I want a button that will export all this spectra data to Origin." Delivery dates are nebulous as testing is continually ongoing and code can often be delivered at any time. All my customers are internal, and while I can't say money is no object, if someone wants something badly enough someone can generally be found to fund it. The good side of all of this is there is a very strong sense of support for the concept of "do whatever it takes to get it done", but unfortunately it's often accompanied by "just don't bug me about the details until it's done and then I'll let you know if it needs to be changed". That's just the way things work around here.

    The good news is that my (relatively new) team leader came to me the other day and suggested we start generating SOWs for even our little 2 week tasks! I got all wide-eyed and he thought I was going to protest, but I happily agreed. It's not detailed design documentation, but you have to start somewhere. :)

    He must have got burned recently...lol. Just think of how cool it would have been to have said "what? like this one?" and plonked a document in his lap :cool:

    That was how it was at my place. Not any more, heh heh. My templates are even on the intranet now. When the nuts are on the block (or whatever the female equivalent is), he/she who has the document wins! Other team members caught on pretty quickly that my response to a customer's "that's not what I asked for/want/meant" was a black-and-white section in the SOW that they had signed, closely followed by "would you like me to quote you for that feature?". Internally I wasn't quite as harsh, but it is extremely good leverage for extending timescales if they don't like what they see because of poor communication. After all, you have proof that you did as asked/described, and they signed it!

    Ooooh. I sound like a tyrant/quality engineer...lol.

    But seriously, my code reflects my documents rather than my documents reflecting my code. That was the way I was trained, and it has stood me in good stead ever since. I have an answer (in writing) for all the naysayers, and 80% of the documentation before I start coding. And it means I can offload the user manual (another thing I hate doing) to a technical writer :thumbup1: .

  7. Wouldn't that be nice!

    Maybe as soon as they are available in 1TB versions. For under $150.

    I'm not holding my breath. :)

    A 250GB SSD for about $1000 is about 2 man-days ($ for $). Ya just have to convince the powers that be that it will take you more than 8 days to find a solution and code around the drive limitation (and throw in that even then it may not work..... risk :P), and highlight that it will cut your delivery timescale by 2 weeks. Be creative.

  8. Shaun, unfortunately I am not developing in a vacuum. No one in my organization has the luxury of externally documenting code to the level you are able to. I tried it when I started working here eons ago, but the reality is that requirements for the projects I work on are an ever-moving target. It used to drive me crazy, until I finally gave up, drank the kool-aid, and just started trying to go with the flow. I code a basic structure, go back to the users, get their input, code a little more, add another function that someone just decided they can't live without, code that up, go back to the users, etc, repeat, until everyone is (relatively) happy. Then I take my project out on the test platform, and discover that in the heat of battle, it's really used completely differently than the users thought they'd be using it, or the data stream something else is supplying to me is full of errors I have to compensate for. I can't just say, "Sorry, that wasn't in the original specification and you're not getting it." One of the benefits of being here for so long is that I can generally anticipate changes/additions that might be requested, but my users are always surprising me.

    I still do a top-level initial design, but the vast majority of the time the final product is very different, and any detailed design I would have done is obsolete. Oh, and then there's the harsh reality that nobody who's paying me wants to pay for detailed external code documentation. I’m not saying it’s right, but it is reality (mine, at least). So, it's very important to me to make sure I internally document the code as I go along. That is one place I can more-or-less keep up with the infinite changes. Hence my original question.

    There are techniques for handling agile specifications (google "iterative and incremental life-cycles"). The only point I was trying to make was that software should be designed first (the design is actually your documentation) and then coded, rather than coded and then documented. It doesn't really matter whether you're an old crusty like me and use Word, or a super "with it" and use a UML tool. You can usually get away with "growing" software if you are a team of one, but add another person or two and it is imperative to document first. This is especially true if you have to interface to other disciplines. The other "human" aspect is that documenting is arguably the least stimulating task for a programmer, so you are much less likely to do it at the end of a project than at the beginning.

    A tried and tested method to "manage" your customers/users/consumers if they are always moving the goal posts is to get them to sign up to an initial spec (Statement of Work) and, if they want to change it, tell them to make the changes to the document and you will quote accordingly; or, if it isn't chargeable (e.g. an internal customer), tell them the impact on the delivery date. This causes them not only to go away, think about what they want and put it in writing, but also forces them to justify the changes (to the signatories) and filters out non-imperative demands. After all..... they want everything for nothing, n'est-ce pas? :P

    Here is my way of coding state machines:

    * (Next) State has a distinct wire. The selection of the next state might come from a subVI, but the actual selection is done in the state machine VI. For example, a subVI might have a boolean output 'Abort'.

    * Error has a distinct wire. Most of the time I check each iteration of the while loop for an error and go to the error handling case.

    * Data is 1 or 2 clusters. Most of the time I distinguish between 'Parameters' for scalar data and 'Data' for array data.

    * Other values that are constant inside this VI ('Recipe', 'Measurement Settings', references of various kinds like File, Instruments) -> they don't need the shift register.

    <snip>

    Just to repeat, I'd put everything that makes the SM a SM in that top-level SM.vi: all state transitions, and the overall picture of which states might lead to which states.

    Can't fault that. Good balance, and it will lead to straightforward, easy-to-understand code. I also strongly agree with the last bit (i.e. no hiding state selections in subVIs).
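
    Flattened into text, that layout might look something like this (a hypothetical Python sketch; in LabVIEW each of the variables below would be a shift register wire, and the if/elif chain the case structure in the top-level SM.vi):

    def state_machine(recipe):
        state, error = "init", None      # distinct (next-)state wire and error wire
        params = {"count": 0}            # 'Parameters' cluster: scalar data
        data = {"readings": []}          # 'Data' cluster: array data

        while state != "exit":
            if error is not None:
                state = "handle error"   # error checked every iteration

            # All transitions live here in the top-level SM, so the overall
            # picture of which states lead to which is in one place.
            if state == "init":
                state = "measure"
            elif state == "measure":
                data["readings"].append(params["count"])
                params["count"] += 1
                state = "measure" if params["count"] < 3 else "exit"
            elif state == "handle error":
                print("error:", error)
                error, state = None, "exit"

        return data

    # 'recipe' stands for the constants (settings, references): no shift register needed.
    print(state_machine(recipe=None))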

  9. This seems kind of extreme. It's also unnecessary. As long as the data relates to the state machine then it's valid to keep it in a local shift register.

    Indeed. I have nothing against keeping data in shift registers.

    It also makes it clear to the observer what design pattern you're using (don't tell me you don't care about other people looking at your code, right?).

    What's using clusters or not got to do with design patterns?

    I don't mind people looking at my code as long as they have read the spec first! That will tell them not only the design pattern, but how it works, why it works and, above all, which VIs do what. Please don't tell me you are of the impression that LabVIEW code is self-explanatory with a few comments.

    Using functional globals and file IO within the same VI to pass data feels like you're using the wrong tool for the job at hand. I don't see the benefits.

    Let me make this clear. I'm specifically focusing on data that is only valid in the context of the current state machine.

    Indeed. The globals and file access are for shared data (product numbers, limits, images etc.). The only information a state machine needs to know (generally, not entirely) is which state to execute next and, as I think I said, if data is dependent on previous states then they can probably be serialised into a single state. I wouldn't (for example) have individual states for create image, acquire image and process image, and pass the image around. Instead I would have a single state (take image?) that uses a functional global to retrieve a pre-initialised image (blank image), acquires the image, processes it, then puts the image back in the functional global before moving on to another state. That way the state machine represents the functional operations (move motor in, open gripper, dispense part, close gripper, take image, move motor out) rather than the discrete steps required to achieve the function (get motor position, move motor, stop motor, check motor position, get gripper number, open gripper, check gripper is open etc., etc.).
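
    A hypothetical sketch of that "take image" state in Python (the closure stands in for a LabVIEW functional global, i.e. an uninitialised shift register; all names are invented):

    def make_image_fgv():
        # Emulates a functional global: the closed-over dict plays the part
        # of an uninitialised shift register that persists between calls.
        store = {"image": None}

        def fgv(action, value=None):
            if action == "set":
                store["image"] = value
            return store["image"]

        return fgv

    image_fgv = make_image_fgv()
    image_fgv("set", "blank image")              # pre-initialised elsewhere

    def take_image_state():
        img = image_fgv("get")                   # retrieve the blank image
        img = img + " -> acquired -> processed"  # acquire and process, one state
        image_fgv("set", img)                    # put it back for whoever needs it
        return "move motor out"                  # the next state: all that's passed on

    print(take_image_state())
    print(image_fgv("get"))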

  10. I have only a very limited state data cluster in my state machines and it is limited to containing data that is important for most of the states and directly important to the state machine itself. The rest of my application data is stored in various functional variables (uninitialized shift registers with different methods).

    Ditto. In fact, my implementations of state machines have only 1 piece of data passed from case to case: the next state to execute (a single enum). Anything else is either gleaned from functional globals or from files. No clusters whatsoever are used to transfer info from one state to another. If a particular state is reliant on previous state information, then it is highly probable that the states can be serially linked the good old-fashioned way.

  11. Let me get this right in my limited stack. Correct me where I've got the wrong end of the stick.

    You have a Master Test System (written by a 3rd party, not yourself) that manages a whole raft of tests, including yours. This system communicates its desires (test no./name, number of times to execute, pass/fail criteria) that your "Executive" should execute via some sort of "translation" interface. Your executive goes away, tests the sub-system, and then returns the results back to the Master System: a discrete test that the Master System must wait for, where the Master System knows all and your sub-test just takes parameters and returns results.

    This is how it appears after reading, and it is fairly straightforward except for what you mean by "Executive". Many people use "Executive" and "Sequencer" synonymously. I tend to see the difference as: an "Executive" manages "sequences". So, in my mind, your Master Test System would be an "Executive" and your sub-test(s) would be sequence(s), and by that definition you only ever have one "Executive".

    But the above seems a little oversimplified (I only get a sense of this), since anyone going to the lengths of incorporating multiple languages and defining XML interfaces to sub-tests probably has a more flexible system in mind.

    Certainly in similar systems I have worked on, the "sub-test" defined in the Master Test System is an entry point into a number of sub-tests. So in your example the entry point would be "RF Tests", and there would be "sub-sub-tests" like Output Power, Carrier Drift, Power Density etc. The question here is where the partitioning is, and how simple you want the configuration of the "Executive"/Master Test System to be. Do you still want all the parameters defined in the Master System (a huge amount of configuration information for EVERY test), or a simplified alias system where parameters are handled locally by each test? The latter is the preferred topology in production environments where different tests are maintained/designed by different teams, keeping the "Executive" simple and distributing the maintenance workload across teams.
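
    As a sketch of the alias idea (hypothetical Python; all names and numbers invented), the master only ever sees the alias while the full parameter set lives with the test:

    # Master Test System side: just an alias and a run count; no parameters.
    request = {"test": "RF Tests.Output Power", "runs": 2}

    # Local side: each team maintains the full parameter sets for its aliases.
    LOCAL_PARAMS = {
        "RF Tests.Output Power":  {"freq_hz": 2.4e9, "limit_dbm": 20.0},
        "RF Tests.Carrier Drift": {"freq_hz": 2.4e9, "limit_hz": 500.0},
    }

    def run(request):
        params = LOCAL_PARAMS[request["test"]]   # alias -> full configuration
        for _ in range(request["runs"]):
            print("running", request["test"], "with", params)

    run(request)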

  12. BUT, my real problem is being created by my disk not being able to keep up with my data rate. I am saving data in both its raw form (data and lots of status info about the data packet) and in partially processed form, in order to decrease the post-processing time. This amounts to streaming to disk at somewhere in the neighborhood of 56MB/s. My disk can only keep up with that until it's about half full. My fallback position is to just save the raw data and spend more time post-processing after the fact. That cuts my write rate almost in half.

    Sounds like a good excuse to get a SSD. :yes:

  13. I'm ripping apart some code I wrote a couple of years ago to "reuse" it for a new project. A main chunk of it is a state machine. When I'm in the midst of programming something, I generally have all the inputs and outputs of the different states more-or-less memorized, but in this case I was flipping in and out of states a lot, trying to figure out what was where. This led me to try to come up with a good way to document state machines. I started with a typical SM:

    post-9165-124966669949_thumb.png

    Below is how I used to do it. As documentation goes, it's the best way for programming; however, it's a pain to add/subtract inputs/outputs, negating one of the main reasons to use bundles/state machines. One of my states uses 13 inputs...

    post-9165-124966682275_thumb.png

    So I went to this:

    post-9165-124966685676_thumb.png

    What I don't like about it is that it loses the color-coding of the inputs/outputs. The colors really help make it easier to find things. And I have to be very anal-retentive about keeping it updated.

    So I tried this:

    post-9165-124966698951_thumb.png

    It's easy to do: just go into the state VI, copy the unbundle and bundle, and paste them in the SM. The problem being, of course, that it breaks the code and requires disabling it. It's actually easier to read than it looks from this pic, but it is still on the "foggy" side.

    Anyone have other ideas? Or has this one already been solved and I just wasted a half hour making pretty screen shots? :)

    Seems a bit arse about face to me. :blink:

    Call me old fashioned, but..... I start with a Word document (detailed design spec) and define flow/transition diagrams for each state machine, then code from that with plenty of comments and references back to the original spec. If you want, you can copy and paste the transition diagrams into the VI, but I don't bother, since anyone modifying it should at least be able to read a spec. :D

  14. QUOTE (crelf @ Jun 5 2009, 06:07 PM)

    Not true - I've used scripting in several distributions that I've provided to customers. Not built executables, but distributions all the same.

    Semantics.

    If we want to be technically correct, I think I could have said "Binary Distributions". But the sense was correct.

    I would (tentatively) suggest that your "distributions" come under the heading "Tools for programmers" anyway. That is a small market in comparison to "compiled and configured" executable/binary distributions, especially in an environment which has historically been very open with source contributions.

  15. QUOTE (hooovahh @ Jun 2 2009, 10:46 PM)

    I think that expression is up for interpretation on the compiler.

    Does it mean that if I take a picture of you with beer you will shake your finger at me? Or does it mean I should not take a picture while I am under the influence of beer? Does it imply that pouring beer into a digital camera will cause it to turn into a yellow face shaking his finger? or is that a near by person shaking their finger because I broke their camera by pouring beer into it?

    It means there are too few smileys on this forum :P

  16. QUOTE (PaulG. @ Jun 2 2009, 09:28 PM)

    Indeed. That is why assembly isn't much "faster" than C nowadays. I've heard it said: "C is the new assembly language." Compilers are getting that good.

    As far as assembly being "one of the least verbose" I will have to disagree. Assembly needs 25 lines of code to turn an LED off and on. :blink:

    I know what you mean. I guess it depends on what is meant by verbose.

    add r2, r3, r4 is less verbose than (say) MyEnormouslyLongResultName := MyEnormouslyLongVariableName + MySecondEnormouslyLongVariableName.

    That's why pictures are better :beer: + :camera: = :nono:

  17. QUOTE (jlokanis @ Jun 2 2009, 06:35 PM)

    Like this one :P

    post-15232-1243971787.png

    QUOTE (JCFC @ Jun 2 2009, 05:36 AM)

    I have a question: can LabVIEW beat those programming languages? How does LabVIEW perform doing those tasks?

    Interesting. Most of the languages I've never heard of, and quite a few common ones are missing. I would like to have seen assembly in that mix since it's one of the least verbose and fastest, so I guess it would be close to the "ideal". I think the title is a bit misleading, though, since really it is a test of compiler optimisation rather than language.

  18. QUOTE (PaulG. @ Jun 1 2009, 05:16 PM)

    I do that all the time without scripting. Read the file, load the string values into a text ring, then replace the text ring with the enum. Or did I miss something? :)

    This also works with DAQ tasks :).
