Everything posted by ShaunR

  1. Only on LAVA. The "mantra" I was referring to is the comment about re-use and inflexibility. I disagree vehemently with what you are saying here (maybe because you've switched the subject to "most LV users/programmers"?). I'm sure it is not your intention (it's certainly out of character), but it comes across as though most LV programmers, and LabVIEW programmers ALONE, are somehow inferior and lack the ability to design, plan and execute programming tasks. I thought that, being on LAVA, you would realise this is simply not true. There are good and bad programmers (yes, even those with formal training) and not only in LabVIEW. Whether a programmer (in any language) lacks forethought and analysis/problem-solving skills is more a function of persona and experience than the programming language they employ. It comes across that you view LabVIEW as an environment that "real" programmers wouldn't use. It's verging on "elitist".

     Most traditional LV programmers... OK, let's qualify this. I consider "traditional" programmers to be those who do not employ LVOOP technologies (I also use the term "classical" on occasion). Traditional LV programmers would, for example, use an "action engine" (will he bite? I know it's one of your favourites) in preference to a class to achieve similar perceived behaviour.

     But on to my type-def proposal. It's not a "proper" solution? How so? How does the mere fact of using a type-def preclude it being one, or indeed re-usable? Using a typedef with queues is a well-established technique; for example, it's used in the "Asynchronous Message Communication Reference Library". You are right in one aspect: the OP on the other thread is considering issues, and he may have goals that perhaps not even you can predict. But that does not mean it should not be considered. Without the use of OOP, what other methods would you proffer (we are talking about "most traditional LabVIEW programmers", after all)?

     For deductive reasoning, the argument must be valid, sound and impossible for the result to be false when the premises are true (which in my mind makes the exercise pointless anyway). The conclusion can be false (it might be possible) since it can be neither proved nor disproved, and premise 2 is not sound since it is an assumption and cannot be proven to be true. Syllogisms also don't work well for generalisations, because "generally" a premise may be true, but not exclusively.

     Premise 1: Program complexity adds higher risk and cost to any project.
     Premise 2: Most OOP programs are more complex than imperative functional equivalents, favouring future-proofing over complexity.
     Therefore: Most OOP programs are higher risk and more costly than their imperative functional equivalents for any project.

     That works. However:

     Premise 1: Program complexity adds higher risk and cost to any project.
     Premise 2: All OOP programs are more complex than imperative functional equivalents, favouring future-proofing over complexity.
     Therefore: All OOP programs are higher risk and more costly than their imperative functional equivalents for any project.

     That doesn't, since it is unknown whether the absolute premise 2 is sound or not, and the conclusion cannot be proven, although we suspect it is false.

     That's a no then. The fact is there is no evidence that code re-use is any more facilitated by OOP than by any other style of programming (I'd love to see some for any language). Yet it is used as one of (if not the) primary arguments for its superiority over others.
There are those who believe that the nature of programs and the components that constitute them tends towards disposable code (diametrically opposite to re-use) and that, in terms of cross-project work, re-use is a welcome side effect rather than a plannable goal. I'm on the fence on that one (although moving closer to "agree"). My experience is that re-usable code is only obtainable in meaningful measures within a single project or variants on that project. I have a very powerful "toolkit" which transcends projects, and for each company I obtain a high level of re-use due to similarity between products. But from company to company, or client to client, there is little that can be salvaged without serious modifications (apart from the toolkit).

I don't view it in this way at all. OOP is a tool. OK, a very unwieldy and obese tool (in LabVIEW), but it's there nonetheless. As such it's more about choosing the right tool for the job. I tend to only use OOP in LV for encapsulation (if at all). It's a very nice way of doing things like lists or collections (add, remove, sort etc., as opposed to an action engine), but I find the cons far outweigh the pros for projects that I choose to use LabVIEW on. Projects that lend themselves to OOP at the architecture level are better suited to non data-centric tools, IMHO.

Heathen. You mean not everything "is a" or "has a"? I see this approach most often (and I think many people think that just because it has a class in it... it is object oriented - I don't think you fit into this category though). Pure OOP is very hard (I think) and gets quite incestuous, requiring detailed knowledge of complex interactions across the entire application. I can't disagree that encapsulation is one of the stronger arguments in favour of OOP generally. But in terms of LabVIEW... that's about it, and even then marginally.

Indeed. The huge numbers of VIs. The extra tools required to realise it. The bugs in the OOP core. The requirement for initialisation before using it... all big advantages (don't rise to that one, I'm pulling your leg).

OK. Now the discussion starts. There are a couple of facets to my statement; I'll introduce them as the blood-bath discussion continues. LabVIEW has some very, very attractive features. One of those is that, because it is a data-flow paradigm, state information is implicit. A function cannot execute unless all of its inputs are satisfied, and it doesn't matter in what order the data arrives at the function's inputs; execution automatically proceeds once they have. In non-data-flow (imperative) languages, state has to be managed. Functions have to be called in a certain order and under certain conditions to ensure that all inputs are correct when a particular function is executed. OOP was derived from such imperative languages and is designed to manage state. In fact, much of an OOP implementation involves the responsibility of the class to manage its internal state (detail hiding) and to manage the state of the class itself (instantiate, initialise, destroy). In this respect an object instance is analogous to a dynamically launched VI, where an "instance" of the VI is launched. A dynamically launched VI breaks the data-flow, since its inputs and outputs are now independent from the main program data-flow and (assuming we want to get the data from the front panel) we are again back to managing when data arrives at each input and when we read the result (although that's not a "classic" use of a dynamically launched VI). A class is the same.
If you query a "get" method, do you specifically know that all the data inputs have been satisfied before calling the method? Do you even know if it has been instantiated or initialised? The onus is on the programmer to ensure that things are called in the correct order: to make sure the class is first instantiated and initialised, and additionally that all the required "set" parameters have been called before executing the "get" method. In "traditional LabVIEW", placing the VI on the diagram instantiates and initialises it, and the result cannot be read until all inputs have been satisfied. OOP forces emphasis away from the data driving the program, back onto the implementation mechanics. So what am I saying here? In a nut-shell, I'm suggesting that in LVOOP the implicit data-flow nature has been broken (it had to be, to implement it) and requires re-implementation by the programmer. OK, you may argue that at the macro level it is still data-flow because it uses wires and LV primitives. But at the class level it isn't. It is a change in paradigm away from data-flow, and although there may be benefits to doing this, a lot of the features that "traditional" LV programmers take for granted can be turned into "issues" for "non-traditional" programmers. A good example of this is the singleton pattern (or, some might say, anti-pattern). Classic LabVIEW has no issue with this: a VI laid anywhere, in any diagram, any number of times will only access that one VI with its one data store, and each instance must wait for the others to have finished. No need for instance counting, mutexes or locking etc. It's built in. But in an OOP world we have to write code to do that (see the sketch below).
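     To illustrate what I mean by "write code to do that", here is a bare-bones sketch (plain C with pthreads, purely illustrative and nothing to do with LVOOP syntax) of the shared instance and explicit locking an OOP singleton typically needs - all things a non-reentrant VI gives you for free:

         #include <pthread.h>

         /* The one-and-only instance plus the lock used to serialise access to it. */
         typedef struct { int count; } Store;

         static Store           instance;
         static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

         Store *store_get(void)
         {
             return &instance;            /* every caller sees the same instance */
         }

         void store_increment(void)
         {
             pthread_mutex_lock(&lock);   /* callers must be serialised by hand  */
             instance.count++;
             pthread_mutex_unlock(&lock);
         }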
  2. I use a very simple versioning system. Internally (i.e. not releases) it's Major.Minor.Revision.Branch.Increment. For releases it's Major.Minor.Revision only (svn is up-issued to the release version x.x.x.0.0 at the same time). The emphasis (from left to right) is "from features to bug fixes". Major = major functional changes/feature additions (adding OS support, lots of new features/changes, well, anything major really). Minor = small functional changes (bug fixes, changes to existing features and maybe a new feature or two). Revision = bug-fix-only release. The aim is to only have Major and Minor releases. Of course, there is a big subjective grey area in deciding how many changes/features etc. warrant a major or minor increment, but roughly above a 20% increase in code base, or over 40% of the code base affected by any changes, would definitely be considered major (a made-up example follows below).
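     By way of an invented example: an internal build might be 2.4.1.3.27 (Major.Minor.Revision.Branch.Increment), while the corresponding release would simply be 2.4.1, with svn up-issued to 2.4.1.0.0 at release time.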
  3. Easy tiger! I'll have a word with Cat
  4. That's looking pretty slick. You've also given me an idea for another use case for my little SQLite project.
  5. Yup. Dlls are a nasty business. I avoid them when I can. I presume it is a dll written in C or C++? For char* (i.e. the name) you can use a "String" type with the "C String Pointer" format. Int will depend on how the dll was compiled (32 bit or 64 bit). An int compiled for x32 will be an I32 integer. For 64 bit it might be I32 or I64 (it's up to whoever compiled it). Void is just an untyped pointer to something (it could be a byte (U8), 2 bytes (I16), 4 bytes (I32), a double (8 bytes) and so on), so you have to know the size of that something. If it is an integer, you need to know whether it is 64 bit or 32 bit. If it is a double (floating point) then you need to choose double (it will probably be 8 bytes - I've never seen a 4-byte one in a C/C++ dll). Then choose "Pointer To Value". If you look at the function prototype, it will show you what the C calling format is for your function. By selecting the appropriate drop-downs, replace "Int" with Int32 or Int64 (depending on what it was compiled with). So your LabVIEW function prototype might look something like int32_t xiaSetAcquisitionValue(int32_t detChan, CStr Name, int32_t *Value); or, if the value parameter is a floating-point number, int32_t xiaSetAcquisitionValue(int32_t detChan, CStr Name, double *Value); If the Value parameter can be any type depending on the "Name" that you are setting, then you can use "Adapt To Type" and define it in your LabVIEW code for each call by just wiring a U32, Dbl or whatever (see the C-side sketch below).
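     For what it's worth, a minimal C-side sketch of the pointer-to-value pattern (the "gain" name and the values are invented; check your dll's header for the real prototype):

         #include <stdint.h>
         #include <stdio.h>

         /* Prototype as it might appear in the dll header: void* because the
            type of "Value" depends on which acquisition value is being set. */
         int32_t xiaSetAcquisitionValue(int32_t detChan, char *name, void *value);

         int main(void)
         {
             double  gain   = 2.5;   /* 8-byte floating point, passed by pointer */
             int32_t status = xiaSetAcquisitionValue(0, "gain", &gain);

             /* The dll reads (and may update) the value through the pointer,
                which is why LabVIEW needs "Pointer To Value" / "Adapt To Type". */
             printf("status = %d, gain = %f\n", (int)status, gain);
             return 0;
         }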
  6. Somehow I figure you for the "cuddlier" type rather than cuter
  7. I think you've been fed the hype intravenously. That is certainly the OOP mantra, but no evidence has ever been proffered to support this kind of statement (do you have a link?). Sure. New thread? Ya think people will talk? I would start off with explaining why typedefs (in LabVIEW) provide single-point maintenance, and expand into why OOP breaks in-built LabVIEW features and why many of the OOP solutions are to problems of its own making. Maybe I should bring my old signature back.
  8. This is "traditional" LabVIEW we are talking about. If you are suggesting never using typedefs (because they create dependencies) then I will take you to task on that. But I think "most" traditional LabVIEW programmers would use this method without all the complexity and overhead of conversions to/from variants.
  9. Does that mean you get 3 salaries since you're obviously doing the work of 3 people
  10. I think most people would have used a cluster (usually with a typedef) and just unbundled. That would also work for the OP
  11. I only see a huge CPU jump when not decimating, so it's more like 333 points (or 333 redraws/sec). If you think 20% is bad, wait until you've tried the vision stuff.
  12. If you had followed the link on the page that Asbo gave you, you would have found
  13. Nahh. Both demonstrate the same point, but yours is neater
  14. lol. Only 3 mins difference this time. That'll teach me to watch TV whilst posting.
  15. If you are asking what I think you are asking then you have missed the fact that shift registers are "growable". You can select a shift register and "drag" out more nodes.
  16. I'd choose an indoor court. But then I plan for failure. I don't think there are many companies that would expect you to produce code without the tools to do it, which had me wondering why the OP is being put in this position in the first place, or indeed why the question has even arisen. It's like asking a chauffeur to drive you to the ball with no car. But it's not a case of "stealing" or won't invest. It's a case of not investing until you absolutely have to (different budgets, different invoice dates, cash flow). And unfortunately, it's the bean counters that run companies nowadays, and an accountant doesn't give 2 hoots if you have to make an aeroplane with a fork and a piece of bamboo, just as long as you do and his figures add up at the end of it.
  17. This can backfire quite spectacularly. My experience has been that companies (generally) won't spend any money unless they really, really have to. If there is a legitimate way to use LabVIEW without paying (either long term or short term) then it is usually argued by the bean counters that it is not a necessity to buy now, and once you have used up the goodwill and the evaluation... they will consider it again (you only make it easier for them to delay). I've always found it far better to argue that the project cannot proceed at all and is at risk unless a licence is bought, and that, although an evaluation is available, it has already been used during the feasibility phase. It can put you in an uncomfortable position where the NI rep is pressing you for an order, but the accounts dept are dragging their feet (you are, after all, only one of many requirements for monies) and you are piggy in the middle.
  18. Indeed. But our topologies (and philosophies) are entirely different. Ahhh. I see what you're getting at now. Yes, they will be different, because you have to redraw more data, more of the control, and more often. It's well known that redraws are expensive (put a control over your graph, or make them all twice as large, and see what happens), and it is not just LabVIEW. The major difference is in what and how much has to be updated. If you (for example) turn off the scale visibility and keep the data window the same size, you will see that the CPU use drops a little when it is scrolling. Presumably this is because it no longer has to redraw the entire control, only the data window (just guessing). You are much better off updating a graphical display in "chunks" so you need to refresh less often. After all, it's only to give the user something to look at whilst drinking coffee. Humans can only process so much graphical data (and only so much data can be represented due to pixel size), so there is no need to show every data point of a 10,000-point graph. It's meaningless; it just looks like noise! But I bet you want the user to be able to zoom in and out, eh? (A rough decimation sketch follows below.)
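     As an aside, the sort of min/max decimation I have in mind looks something like this (plain C, names invented for illustration; LabVIEW's own graph decimation is not necessarily implemented this way):

         #include <stddef.h>

         /* Reduce n samples to at most 2*bins points by keeping the min and max
            of each bin - plenty for a plot, far fewer points to redraw. */
         static size_t decimate_minmax(const double *in, size_t n,
                                       double *out, size_t bins)
         {
             size_t k = 0;
             for (size_t b = 0; b < bins; b++) {
                 size_t start = b * n / bins;
                 size_t end   = (b + 1) * n / bins;
                 if (end <= start) continue;

                 double lo = in[start], hi = in[start];
                 for (size_t i = start + 1; i < end; i++) {
                     if (in[i] < lo) lo = in[i];
                     if (in[i] > hi) hi = in[i];
                 }
                 out[k++] = lo;
                 out[k++] = hi;
             }
             return k;   /* number of points written to out */
         }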
  19. Probably. I don't use un-named queues, since I have a single VI that wraps the queue primitives and ensures there are no resource leaks (and I don't like all the wires going everywhere). Are you still getting differences in the graphs? When I run your example, I see no difference in CPU usage between scrolling and not (~5-10%), although I haven't looked at all the "Questions".
  20. I think so. Yes, the bottom loop is free-running. Put an indicator on the front panel wired to the difference between 2 "Get Tick Count" VIs; you will see that it is pretty much "0". You are in fact putting all the data onto 1 un-named queue. So, every 3 ms you are adding 4 lots of data, but you don't know which order the bottom de-queues execute in or, indeed, which order the data was originally placed in (but that's another issue). Your time-out is exactly the same (3 ms), so if there are 4 pieces of data on the queue, then everything is fine. However, this is unlikely, since the producer and consumer are not synchronised. If a timeout occurs, you still increment your counter, so, due to jitter, sometimes you are reading data and sometimes you are just timing out... but... you still only display the data (and add it to the history) in case 19. And although you will read it next time around, you don't know on which graph it will end up. You have a race condition between data arriving and timing out. Does that make sense?
  21. Not quite. Here are the licensing options. They are quite clear.
  22. Haven't had time to go into all the above, but I have noticed that your bottom loop (dequeue) is free-running (0 ms between iterations). I get a very stuttery update (just ran it straight out of the box). Changing the time-out (deleting the local and hard-wiring it) to, say, 100 ms, and everything is happy and the dequeue loop updates at 3 ms. I imagine there's a shed-load of time-outs going on, which is causing you to miss updates (not sure of the mechanism at the moment).
  23. Datatypes in SQLite Version 3. If you give me a specific example of what you mean, maybe I can help more (but read the link first).
  24. I've never found an easy way to do this programmatically within LV. I use one of two techniques. 1. Create a function in the DLL called Get_Ver which returns the dll version and the LV version used (I don't use this much any more; see the sketch below). 2. When you build the dll, put the LV version in the description section. Neither will help you with the current DLL, but you shouldn't have a problem in the future.
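     For option 1, the Get_Ver export would look something like this from the caller's side (the prototype and version strings are invented; a LabVIEW-built dll would implement it as an exported VI):

         #include <stdint.h>
         #include <stdio.h>

         /* Hypothetical prototype of the Get_Ver export: the caller supplies
            the buffers, the dll fills in its own version and the LV version. */
         void Get_Ver(char *dll_version, int32_t dll_len,
                      char *lv_version,  int32_t lv_len);

         int main(void)
         {
             char dll_ver[32], lv_ver[32];
             Get_Ver(dll_ver, (int32_t)sizeof dll_ver, lv_ver, (int32_t)sizeof lv_ver);
             printf("dll %s, built with LabVIEW %s\n", dll_ver, lv_ver);
             return 0;
         }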
  25. Sin? Noooo. But this is an interesting one. It will fit on 1 screen: just do a "Clean Up Diagram" with shorter variable names and you're pretty much there. But that's not the interesting bit. There is a large number of operations that are identical, and identical operations can easily be encapsulated into sub-VIs. This not only shrinks your diagram, but also segments it into easy-to-digest chunks of code. Consider this piece of code... it is replicated no less than 6 times. I would be tempted to do something like this instead. It achieves several benefits:
     1. It is more compact.
     2. It is easier to identify repeated functions.
     3. It reduces wiring errors.
     4. It gives more control (you can, for example, make it a "Subroutine" for faster execution).
     5. You can re-use it.
     There are many more places within this piece of code that would benefit from sub-VIs. The net result is that it would probably fit on 1/4 of a screen and be a lot easier to read/maintain. There are also a few areas that would benefit from "For Loops" too.