Everything posted by ShaunR

  1. Just a note: this property node only tells you whether the VI is in front of other VIs. If you click on, say, your web browser, it will still report that it is frontmost.
  2. Lots of excellent points here. I'll break them up into different posts, since it's getting rather tedious reading my OWN posts in one go. 21 again? Happy b'day Mrs Daklu.
I don't think this is so. To extend a component that uses a typedef, it is just a matter of selecting "Create SubVI" and then "Create Constant" or "Create Control/Indicator". The new VI then inherits all of the original component's functionality and you are free to add more if you desire (or hide it).
Well, you do. In classical LabVIEW, loading the top-level application loads ALL VIs into memory (unless they are dynamically loaded).
A number of points here:
1. Adding data (a control?) to a typedef'd cluster won't break anything (only clusters use bundle and unbundle). All previous functionality is preserved, but the new data will not be used until you write some code to do so. The proviso here (as you say) is to use "Bundle/Unbundle By Name" (see point 3) and not straight bundling or the Array To Cluster function (which have a fixed number of outputs). The classic use, however, is a typedef'd enumerated control, which can be used by various case structures to switch operations and is impervious to re-ordering or renaming of the enum contents.
2. Renaming may or may not break things (as you state). If it's a renamed enumeration, string, boolean, etc. (or base type, as I call them), then nothing changes. If it's an element in a cluster, then it will.
3. I've never seen a case (nor can I see how) where an "Unbundle/Bundle By Name" has ever chosen the wrong element in a typedef'd cluster, or indeed a normal cluster (I presume you are talking about clusters, because any control can be typedef'd). A straight unbundle/bundle I can understand (they are index based), but that's nothing to do with typedefs (I never use them, a) because of this and b) because by-name improves readability). An example perhaps?
I think a class is a bit more than just a "super" typedef. In fact, I don't see them as the same at all. A typedef is just a control that has the special ability to propagate its changes application-wide. A class is a "template" (it doesn't exist until it is instantiated) for a module (a nugget of code, if you like). If you do see classes and typedefs as synonymous, then that's actually a lot of work for very little gain. Each new addition to a cluster (a class's data member?) would require two new VIs (methods): 10 elements, 20 VIs. Contrast this with adding a new element to a typedef'd cluster: no new VIs. 10 elements, 1 control (remember my single-point-maintenance comment?).
Immutable objects? You mean a "constant", right?
It'd probably be better than my internet connection recently. The other night I had 20 disconnects.
More to follow....
  3. Let me know how you get on. I've been meaning to revisit it, but it does everything I need it to at the moment, so I couldn't find an excuse.
  4. Why are they password protected? Will we be seeing proper Unicode support soon?
  5. This is the sort of thing Dispatcher was designed for. Each cell would have a dispatcher, and the manager simply subscribes to the test cell that he wishes to see. You can either just send an image of the FP for a single cell, and/or write a little bit of code to amalgamate the bits they are interested in from multiple cells to make process overviews. Of course, it would require the LV run-time on the manager's machine.
  6. It works quite well for FP stuff. But you can't localise dialogues (for any app) or error messages.
  7. Reusing pieces of code is like picking off sentences from other people's stories and trying to make a magazine article.~Bob Frankston

  8. I would spend more time on the VI functionality if I were you. Pretty pictures won't make it work any better.
  9. You are better off having separate files for each language. Then you can have an ASCII key (which you can search) and place the real string into the control. It also enables easy switching just by specifying a file name, e.g. msg_1=種類の音を与えます. You could take a look at Passa Mak. It doesn't support Unicode, but it solves a lot of the problems you will come across. Alternatively, you could use a database so that you can use SQL queries instead of string functions to get your strings (it's LabVIEW's string functions/parsing that cause most of the problems). (See the key-file lookup sketch after this list.)
  10. The config file VIs do not support Unicode. They use ASCII operations internally for comparison (no string operations in LV currently support Unicode). You will have to read it as a standard text file and then convert it with the tools above back to ASCII.
  11. What would be nice is for the LV project manager to be able to handle (nest?) multiple projects, the way many other IDEs do. Then we could make a project for each target and add them to a main project. (I know we sort of have this with lvlibs, but it's not quite the same thing.) Once we had that, each sub-project would just be a branch off the main SVN (or Mercurial if you like) trunk (the main project), and we could work on each target in isolation if we wanted to. Unless, of course, we already can and I haven't figured it out yet.
  12. The "Implode 1D Array" VI implodes (concatenates rather than separates) each value in the array into a quoted string. The value is, well, the value (e.g. 3.1); the field is the field name (e.g. Time). If you want to set an affinity for a field (REAL, INTEGER, TEXT, et al.), that is achieved when you create the fields with the "Create" table VI. Anything can be written to any field type and SQLite will convert it to the defined affinity for storage. The API always writes and reads as a string (it basically uses a string like a variant), but SQLite converts it automagically. (See the affinity sketch after this list.)
  13. This is what I consider "most" LabVIEW programmers (traditional or not) to be, and the analogy I've used before is between "pure" mathematicians and "applied" mathematicians. Pure mathematicians are more interested in the elegance of the maths and its conceptual aspects, whereas "applied" mathematicians are more interested in how it relates to real-world applications. Is one a better mathematician than the other? I think not. It's purely a matter of emphasis; both need an intrinsic understanding of the maths. I think most LabVIEW programmers, by the very nature of the programs they write and the suitability of the language to those programs, are "applied" programmers, but that doesn't mean they don't have an intrinsic understanding of programming or indeed of how to architect it.
Nice, pragmatic and modest post. I think many people are coming to this sort of conclusion in the wake of the original hype, as indeed happened to OOP in C++ more than 10 years ago. It's actually very rare to see a pure OOP application in any language. Most people (in my experience) go for encapsulation and then use structured techniques.
You've actually hit on the one thing "traditional" LabVIEW cannot do (or simulate): run-time polymorphism (enabled by dynamic dispatch). However, there are very few cases where it is required (or desirable) unless, of course, you are using LVOOP. Then it's a must-have, to circumvent LabVIEW's requirement for design-time strict typing (another example of breaking LabVIEW's built-in features to enable LVOOP). Well, that's how it seems to me at least. There may be some other reason, but in other languages you don't have "dynamic dispatch" type function arguments.
But aside from that, I never use waterfall (for software at least). I find an iterative approach (or "agile" as it is called nowadays) much more useful and manageable. Sure, the whole project (including hardware, mechanics, etc.) will be waterfall (it's easier for management to see progress, and you have to wait for materials), but within that, at the macro level, the software will be iterative, with release milestones in the waterfall. As a result, the software is much quicker to react to changes than the other disciplines, which means that the software is never a critical path until the end-of-project test phase (systems testing; you can't test against all the hardware until they've actually built it). At that point, however, the iterative cycle just gets faster with smaller changes, since by then you are (or should be) reacting purely to hardware-integration problems, so it's fairly straightforward to predict.
  14. Does having to keep putting a True boolean down annoy you as much as it does me?
  15. Does anyone use RowID a lot? I'm finding that the more I use the API, the more annoyed I am at always having to wire a True boolean for "No RowID" (might be just my use cases). I'm thinking about inverting it so that the default is no RowID. Would this cause lots of issues for people already using it (if any... lol)?
  16. Only on LAVA. The "mantra" I was referring to is the comment about re-use and inflexibility.
I disagree vehemently with what you are saying here (maybe because you've switched the subject to "most LV users/programmers"?). I'm sure it is not your intention (it's certainly out of character), but it comes across as if most LV programmers, and LabVIEW programmers ALONE, are somehow inferior and lack the ability to design, plan and execute programming tasks. I thought that being on LAVA you would realise this is simply not true. There are good and bad programmers (yes, even among those with formal training), and not only in LabVIEW. Whether a programmer (in any language) lacks forethought and analysis/problem-solving skills is more a function of persona and experience than of the programming language they employ. It comes across as though you view LabVIEW as an environment that "real" programmers wouldn't use. It's verging on "elitist".
Most traditional LV programmers... OK, let's qualify this. I consider "traditional" programmers to be those who do not employ LVOOP technologies. I also use the term "classical" on occasion. Traditional LV programmers would, for example, use an "action engine" (will he bite? I know it's one of your favourites) in preference to a class to achieve similar perceived behaviour.
But on to my typedef proposal. It's not a "proper" solution? How so? How does the mere fact of using a typedef preclude it from being one, or indeed from being re-usable? Using a typedef with queues is a well-established technique; for example, it's used in the "Asynchronous Message Communication Reference Library". You are right in one aspect: the OP on the other thread is considering issues, and he may have goals that perhaps not even you can predict. But that does not mean it should not be considered. Without the use of OOP, what other methods would you proffer (we are talking about "most traditional LabVIEW programmers", after all)?
For deductive reasoning it must be valid, sound, and impossible for the result to be false when the premises are true (which in my mind makes the exercise pointless anyway). The conclusion can be false (it might be possible), since it can be neither proved nor disproved, and premise 2 is not sound, since it is an assumption and cannot be proven to be true. Deductive arguments also don't work well for generalisations, because "generally" a premise may be true, but not exclusively.
Premise 1: Program complexity adds higher risk and cost to any project.
Premise 2: Most OOP programs are more complex than their imperative functional equivalents, favouring future-proofing over complexity.
Therefore: Most OOP programs are higher risk and more costly than their imperative functional equivalents for any project.
That works. However:
Premise 1: Program complexity adds higher risk and cost to any project.
Premise 2: All OOP programs are more complex than their imperative functional equivalents, favouring future-proofing over complexity.
Therefore: All OOP programs are higher risk and more costly than their imperative functional equivalents for any project.
That doesn't, since it is unknown whether the absolute premise 2 is sound, and the conclusion cannot be proven, although we suspect it is false.
That's a no, then. The fact is there is no evidence that code re-use is any more facilitated by OOP than by any other style of programming (I'd love to see some, for any language). Yet it is used as one of (if not the) primary arguments for its superiority over others.
There are those who believe that the nature of programs, and the components that constitute them, tends towards disposable code (diametrically opposite to re-use) and that, in cross-project terms, re-use is a welcome side effect rather than a plannable goal. I'm on the fence on that one (although moving closer to "agree"). My experience is that re-usable code is only obtainable in meaningful measures within a single project or variants of that project. I have a very powerful "toolkit" which transcends projects, and at each company I obtain a high level of re-use due to similarity between products. But from company to company, or client to client, there is little that can be salvaged without serious modifications (apart from the toolkit).
I don't view it in this way at all. OOP is a tool. OK, a very unwieldy and obese tool (in LabVIEW), but it's there nonetheless. As such it's more about choosing the right tool for the job. I tend to use OOP in LV only for encapsulation (if at all). It's a very nice way of doing things like lists or collections (add, remove, sort, etc., as opposed to an action engine), but I find the cons far outweigh the pros for the projects that I choose to use LabVIEW on. Projects that lend themselves to OOP at the architecture level are better suited to non-data-centric tools, IMHO.
Heathen. You mean not everything "is a" or "has a"? I see this approach most often (and I think many people believe that just because it has a class in it... it is object oriented; I don't think you fit into this category, though). Pure OOP is very hard (I think) and gets quite incestuous, requiring detailed knowledge of complex interactions across the entire application. I can't disagree that encapsulation is one of the stronger arguments in favour of OOP generally. But in terms of LabVIEW... that's about it, and even then only marginally.
Indeed. The huge number of VIs, the extra tools required to realise it, the bugs in the OOP core, the requirement for initialisation before using it... all big advantages (don't rise to that one, I'm pulling your leg).
OK. Now the discussion starts. There are a couple of facets to my statement. I'll introduce them as the blood-bath discussion continues.
LabVIEW has some very, very attractive features. One of those is that, because it is a data-flow paradigm, state information is implicit. A function cannot execute unless all of its inputs are satisfied. Also, it doesn't matter in what order the data arrives at the function's inputs; execution automatically proceeds once it has. In non-data-flow (imperative) languages, state has to be managed. Functions have to be called in a certain order and under certain conditions to ensure that all inputs are correct when a particular function is executed. OOP was derived from such imperative languages and is designed to manage state. In fact, much of an OOP implementation involves the class being responsible for managing its internal state (detail hiding) and for managing the state of the class itself (instantiate, initialise, destroy). In this respect an object instance is synonymous with a dynamically launched VI, where an "instance" of the VI is launched. A dynamically launched VI breaks the data flow, since its inputs and outputs are now independent of the main program's data flow, and (assuming we want to get the data from the front panel) we are again back to managing when data arrives at each input and when we read the result (although that's not a "classic" use of a dynamically launched VI). A class is the same.
If you query a "get" method, do you specifically know that all the data inputs have been satisfied before calling the method? Do you even know whether it has been instantiated or initialised? The onus is on the programmer to ensure that things are called in the correct order, so that the class is first instantiated and initialised, and additionally that all the required "set" parameters have been called before executing the "get" method. In "traditional LabVIEW", placing the VI on the diagram instantiates and initialises it, and the result cannot be read until all inputs have been satisfied. OOP forces emphasis away from the data driving the program, back to the implementation mechanics.
So what am I saying here? In a nutshell, I'm suggesting that in LVOOP the implicit data-flow nature has been broken (it had to be, to implement it) and requires re-implementation by the programmer. OK, you may argue that at the macro level it is still data flow, because it uses wires and LV primitives. But at the class level it isn't. It is a change in paradigm away from data flow, and although there may be benefits to doing this, a lot of the features that "traditional" LV programmers take for granted can be turned into "issues" for "non-traditional" programmers. A good example of this is the singleton pattern (or, some might say, anti-pattern). Classic LabVIEW has no issue with this: a VI placed anywhere, on any diagram, any number of times will only ever access that one VI with its one data store, and each instance must wait for the others to have finished. No need for instance counting, mutexes or locking; it's built in. But in an OOP world, we have to write code to do that (see the singleton sketch after this list).
  17. I use a very simple versioning system. Internally (i.e. not releases) it's Major.Minor.Revision.Branch.Increment. For releases it's Major.Minor.Revision only (SVN is up-issued to the release version x.x.x.0.0 at the same time). The emphasis (from left to right) is "from features to bug fixes". Major = major functional changes/feature additions (adding OS support, lots of new features/changes; well, anything major really). Minor = small functional changes (bug fixes, changes to existing features and maybe a new feature or two). Revision = bug-fix-only release. The aim is to only have Major and Minor releases. Of course, there is a big subjective grey area in deciding how many changes/features warrant a major or minor increment, but roughly an above-20% increase in the code base, or over 40% of the code base affected by any changes, would definitely be considered major.
  18. That's looking pretty slick. You've also given me an idea for another use case for my little SQLite project.
  19. Yup, DLLs are a nasty business; I avoid them when I can. I presume it is a DLL written in C or C++? For char* (i.e. name) you can use a "String" type with the "C String Pointer" format. Int will depend on how the DLL was compiled (32-bit or 64-bit): an int compiled for x32 will be an I32 integer; for 64-bit it might be I32 or I64 (it's up to whoever compiled it). Void is just an untyped pointer to something, which could be a byte (U8), 2 bytes (I16), 4 bytes (I32), a double (8 bytes) and so on, so you have to know the size of that something. If it is an integer, you need to know whether it is 64-bit or 32-bit. If it is a double (floating point), then you need to choose Double (it will probably be 8 bytes; I've never seen a 4-byte one in a C/C++ DLL). Then choose "Pointer to Value". If you look at the function prototype, it will show you the C calling format for your function. By selecting the appropriate drop-downs, replace "int" with Int32 or Int64 (depending on what it was compiled with). So your LabVIEW function prototype might look something like int32_t xiaSetAcquisitionValue(int32_t detChan, CStr Name, int32_t *Value); or, if the value parameter is a floating-point number, int32_t xiaSetAcquisitionValue(int32_t detChan, CStr Name, double *Value); If the Value parameter can be any type depending on the "Name" that you are setting, then you can use "Adapt to Type" and define it in your LabVIEW code for each call by just wiring a U32, DBL or whatever (see the void-pointer sketch after this list).
  20. Somehow I figure you for the "cuddlier" type rather than cuter
  21. I think you've been fed the hype intravenously. That is certainly the OOP mantra, but no evidence has ever been proffered to support this kind of statement (do you have a link?). Sure. New thread? Ya think people will talk? I would start off with explaining why typedefs (in LabVIEW) provide single-point maintenance, then expand into why OOP breaks built-in LabVIEW features and why many of the OOP solutions are to problems of its own making. Maybe I should bring my old signature back.
  22. This is "traditional" labview we are talking about. If you are suggesting to never use typdefs (because it creates dependencies) then I will take you to task on that. But I think "most" traditional labview programmers would use this method without all the complexity and overhead of conversions to/from variants.
  23. Does that mean you get 3 salaries, since you're obviously doing the work of 3 people?
  24. I think most people would have used a cluster (usually with a typedef) and just unbundled it. That would also work for the OP.
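
A minimal sketch of the per-language key-file idea from post 9, written in C only because LabVIEW block diagrams can't be shown in text. The file name "strings_ja.txt", the key "msg_1" and the function lookup_message are illustrative assumptions, not part of Passa Mak or any existing library.

    /* lookup_message.c - illustrative only: looks up an ASCII key (e.g. "msg_1")
       in a per-language key=value file and returns the translated string. */
    #include <stdio.h>
    #include <string.h>

    /* Returns 1 and copies the value into 'out' if the key is found, else 0. */
    static int lookup_message(const char *lang_file, const char *key,
                              char *out, size_t out_size)
    {
        FILE *fp = fopen(lang_file, "r");
        char line[1024];
        size_t key_len = strlen(key);
        int found = 0;

        if (fp == NULL)
            return 0;

        while (fgets(line, sizeof line, fp) != NULL) {
            /* Match lines of the form key=value */
            if (strncmp(line, key, key_len) == 0 && line[key_len] == '=') {
                strncpy(out, line + key_len + 1, out_size - 1);
                out[out_size - 1] = '\0';
                out[strcspn(out, "\r\n")] = '\0';   /* strip trailing newline */
                found = 1;
                break;
            }
        }
        fclose(fp);
        return found;
    }

    int main(void)
    {
        char msg[256];

        /* Switching language is just switching the file name. */
        if (lookup_message("strings_ja.txt", "msg_1", msg, sizeof msg))
            printf("%s\n", msg);
        return 0;
    }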
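
To make the type-affinity point from post 12 concrete, here is a small sketch against the SQLite C API (the library underneath the LabVIEW wrapper). The table and column names are made up for illustration; the point is only that the value is written as a quoted string but stored and read back according to the affinity declared when the table was created.

    /* affinity_demo.c - illustrative only (compile with -lsqlite3).
       Shows SQLite converting a value written as text to the column's
       declared affinity, which is what a quoted-string API relies on. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db = NULL;
        sqlite3_stmt *stmt = NULL;

        sqlite3_open(":memory:", &db);

        /* Affinity is set once, when the table is created. */
        sqlite3_exec(db, "CREATE TABLE log (Time REAL, Note TEXT);", NULL, NULL, NULL);

        /* The value is written as a quoted string... */
        sqlite3_exec(db, "INSERT INTO log VALUES ('3.1', 'started');", NULL, NULL, NULL);

        /* ...but is stored, and can be read back, as a REAL. */
        sqlite3_prepare_v2(db, "SELECT Time FROM log;", -1, &stmt, NULL);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("Time as double: %f\n", sqlite3_column_double(stmt, 0));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }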
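
Post 16 argues that the instance management a non-reentrant VI gives you for free has to be hand-written once you leave data flow. A rough C sketch of that hand-written part, assuming POSIX threads; the Counter type and function names are invented for illustration.

    /* singleton_demo.c - illustrative only: the instance creation and locking
       that a non-reentrant LabVIEW VI provides implicitly has to be written
       explicitly in an imperative/OOP setting. Assumes POSIX threads. */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct {
        int             data_store;   /* the "one data store" */
        pthread_mutex_t lock;         /* serialises callers, like a non-reentrant VI */
    } Counter;

    static Counter        *g_instance = NULL;
    static pthread_mutex_t g_create_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Every caller must go through here to get "the one" instance. */
    static Counter *counter_instance(void)
    {
        pthread_mutex_lock(&g_create_lock);
        if (g_instance == NULL) {
            g_instance = calloc(1, sizeof *g_instance);
            pthread_mutex_init(&g_instance->lock, NULL);
        }
        pthread_mutex_unlock(&g_create_lock);
        return g_instance;
    }

    /* Each call must explicitly take the lock so callers wait for each other. */
    static int counter_increment(void)
    {
        Counter *c = counter_instance();
        int value;

        pthread_mutex_lock(&c->lock);
        value = ++c->data_store;
        pthread_mutex_unlock(&c->lock);
        return value;
    }

    int main(void)
    {
        /* Two calls from anywhere in the program hit the same data store. */
        counter_increment();
        return counter_increment();   /* returns 2 */
    }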
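
For post 19, a short C sketch of why a void* parameter forces the caller to know the size and type of whatever it points at. The stand-in function and the acquisition-value names are assumptions about the general shape of such a DLL export, not the vendor's actual header.

    /* void_param_demo.c - illustrative only. A stand-in for a DLL export such as
       xiaSetAcquisitionValue(int detChan, char *name, void *value): the void*
       carries no size information, so the caller must know whether the target
       is a 32-bit integer, a double, etc. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the DLL function's prototype. */
    static int32_t set_acquisition_value(int32_t det_chan, const char *name, void *value)
    {
        (void)det_chan;
        /* The callee decides how to interpret 'value' from 'name' alone. */
        if (strcmp(name, "channel_count") == 0)
            *(int32_t *)value = 4096;    /* written as a 32-bit integer */
        else if (strcmp(name, "peaking_time") == 0)
            *(double *)value = 8.0;      /* written as an 8-byte double */
        return 0;
    }

    int main(void)
    {
        int32_t channels = 0;
        double  peaking  = 0.0;

        /* The caller must pass a correctly sized target for each name. */
        set_acquisition_value(0, "channel_count", &channels);
        set_acquisition_value(0, "peaking_time", &peaking);

        printf("channels = %d, peaking = %f\n", channels, peaking);
        return 0;
    }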