
ShaunR

Members · Posts: 4,849 · Days Won: 292
Everything posted by ShaunR

  1. I would spend more time on the VI functionality if I were you. Pretty pictures won't make it work any better.
  2. You are better off having separate files for each language. Then you can have an ASCII key (which you can search) and place the real string into the control. It also enables easy switching just by specifying a file name, e.g. msg_1=種類の音を与えます. You could take a look at Passa Mak. It doesn't support Unicode, but it solves a lot of the problems you will come across. Alternatively, you could use a database so that you can use SQL queries instead of string functions to get your strings (it's LabVIEW's string functions/parsing that cause most of the problems).
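For what it's worth, here is a minimal sketch of that key-file approach in Python rather than LabVIEW (the file name "japanese.lang" and the load_language helper are made up for illustration): each language lives in its own UTF-8 file of key=value lines, strings are looked up by their ASCII key, and switching language is just a matter of pointing at a different file.

    # japanese.lang contains lines such as:  msg_1=種類の音を与えます
    def load_language(path):
        strings = {}
        with open(path, encoding="utf-8") as f:      # read the whole file as UTF-8 text
            for line in f:
                if "=" in line:
                    key, text = line.rstrip("\n").split("=", 1)
                    strings[key] = text              # ASCII key -> real display string
        return strings

    strings = load_language("japanese.lang")         # switch language by file name alone
    print(strings["msg_1"])                          # the string that goes into the control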
  3. The config file VIs do not support Unicode. They use ASCII operations internally for comparison (no string operations in LV currently support Unicode). You will have to read it as a standard text file and then convert it with the tools above back to ASCII.
  4. What would be nice is for the LV project manager to be able to handle (nest?) multiple projects (the way many other IDEs do). Then we could make a project for each target and add them to a main project (I know we sort of have this with lvlibs, but it's not quite the same thing). Once we had that, each sub-project would just be a branch off the main SVN (or Mercurial if you like ) trunk (the main project) and we could work on each target in isolation if we wanted to. Unless of course we already can and I haven't figured it out yet.
  5. The Implode 1D Array implodes (or concatenates rather than separates) each value in the array into a quoted string. The Value is, well, the value (e.g. 3.1). The Field is the field name (e.g. Time). If you want to set an affinity for a field (REAL, INTEGER, TEXT et al.), that is achieved when you create the fields with "Create Table". Anything can be written to any field type and SQLite will convert it to the defined affinity for storage. The API always writes and reads as a string (it uses a string basically like a variant), but SQLite converts it automagically.
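To make the affinity behaviour concrete, here is a minimal sketch using Python's built-in sqlite3 module rather than the LabVIEW API (the table and column names are made up): everything is bound as a string, exactly as described above, and SQLite stores it under the affinity declared when the table was created.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE log (Time REAL, Note TEXT)")            # affinities set at creation
    conn.execute("INSERT INTO log VALUES (?, ?)", ("3.1", "startup"))  # both values bound as strings
    print(conn.execute("SELECT Time, typeof(Time) FROM log").fetchall())
    # -> [(3.1, 'real')]  the quoted string was converted to the REAL affinity for storage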
  6. This is what I consider "most" LabVIEW programmers (regardless of traditional or not) to be, and the analogy I've used before is between "pure" mathematicians and "applied" mathematicians. Pure mathematicians are more interested in the elegance of the maths and its conceptual aspects, as opposed to "applied" mathematicians, who are more interested in how it relates to real-world applications. Is one a better mathematician than the other? I think not. It's purely an emphasis. Both need to have an intrinsic understanding of the maths. I think most LabVIEW programmers, by the very nature of the programs they write and the suitability of the language to those programs, are "applied" programmers, but that doesn't mean they don't have an intrinsic understanding of programming or indeed how to architect it.
Nice, pragmatic and modest post. I think many people are coming to this sort of conclusion in the wake of the original hype, as indeed happened to OOP in C++ more than 10 years ago. It's actually very rare to see a pure OOP application in any language. Most people (from my experience) go for encapsulation and then use structured techniques.
You've actually hit on the one thing "traditional" LabVIEW cannot do (or simulate): run-time polymorphism (enabled by dynamic dispatch). However, there are very few cases where it is required (or desirable) unless, of course, you are using LVOOP. Then it's a must-have to circumvent LabVIEW's requirement for design-time strict typing (another example of breaking LabVIEW's in-built features to enable LVOOP). Well, that's how it seems to me at least. There may be some other reason, but in other languages you don't have "dynamic dispatch" type function arguments.
But aside from that, I never use waterfall (for software at least). I find an iterative approach (or "agile" as it is called nowadays) much more useful and manageable. Sure, the whole project (including hardware, mechanics, etc.) will be waterfall (it's easier for management to see progress and you have to wait for materials), but within that, at the macro level, the software will be iterative with release milestones in the waterfall. As a result, the software is much quicker to react to changes than the other disciplines, which means that the software is never a critical path until the end-of-project test phase (systems testing - you can't test against all the hardware until they've actually built it). At that point, however, the iterative cycle just gets faster with smaller changes, since by that time you are (should be ) reacting to purely hardware integration problems, so it's fairly straightforward to predict.
  7. Does having to keep putting a True boolean down annoy you as much as it does me?
  8. Does anyone use RowID a lot? I'm finding that the more I use the API, the more annoyed I am at having to always put a True boolean for No RowID (might be just my use cases). I'm thinking about inverting it so that the default is no RowID. Would this cause lots of issues for people already using it (if any...lol)?
  9. Only on LAVA The "Mantra" I was referring to is the comment about re-use and inflexibility.
I disagree vehemently with what you are saying here (maybe because you've switched the subject to "most LV users/programmers"?). I'm sure it is not your intention (it's certainly out of character), but it comes across as though most LV programmers, and LabVIEW programmers ALONE, are somehow inferior and lack the ability to design, plan and execute programming tasks. I thought that being on LAVA you would realise this is simply not true. There are good and bad programmers (yes, even those with formal training) and not only in LabVIEW. Whether a programmer (in any language) lacks forethought and analysis/problem-solving skills is more a function of persona and experience than the programming language they employ. It comes across that you view LabVIEW as an environment that "real" programmers wouldn't use. It's verging on "elitist".
Most traditional LV programmers... OK. Let's qualify this. I consider "traditional" programmers to be those who do not employ LVOOP technologies. I also use the term "classical" on occasion too. Traditional LV programmers would, for example, use an "action engine" (will he bite? - I know it's one of your favourites) in preference to a class to achieve similar perceived behaviour.
But on to my type-def proposal. It's not a "proper" solution? How so? How does the mere fact of using a type-def preclude it being one, or indeed re-usable? Using a typedef with queues is a well-established technique. For example, it's used in the "Asynchronous Message Communication Reference Library". You are right in one aspect: the OP on the other thread is considering issues, and he may have goals that perhaps not even you can predict. But that does not mean it should not be considered. Without the use of OOP, what other methods would you proffer (we are talking about "most traditional LabVIEW programmers" after all)?
For deductive reasoning, it must be valid, sound, and impossible for the result to be false when the premises are true (which in my mind makes the exercise pointless anyway). The conclusion can be false (it might be possible) since it can be neither proved nor disproved, and premise 2 is not sound since it is an assumption and cannot be proven to be true. They also don't work well for generalisations because "generally" a premise may be true, but not exclusively.
Premise 1: Program complexity adds higher risk and cost to any project.
Premise 2: Most OOP programs are more complex than imperative functional equivalents, favouring future-proofing over complexity.
Therefore: Most OOP programs are higher risk and more costly than their imperative functional equivalents for any project.
That works. However:
Premise 1: Program complexity adds higher risk and cost to any project.
Premise 2: All OOP programs are more complex than imperative functional equivalents, favouring future-proofing over complexity.
Therefore: All OOP programs are higher risk and more costly than their imperative functional equivalents for any project.
That doesn't, since it is unknown whether the absolute premise 2 is sound or not, and the conclusion cannot be proven although we suspect it is false.
That's a no then The fact is there is no evidence that code re-use is any more facilitated by OOP than by any other style of programming (I'd love to see some for any language). Yet it is used as one of (if not the) primary arguments for its superiority over others.
There are those who believe that the nature of programs and the components that constitute them tends towards disposable code (diametrically opposite to re-use) and that, in terms of cross-project work, re-use is a welcome side effect rather than a plannable goal. I'm on the fence on that one (although moving closer to "agree"). My experience is that re-usable code is only obtainable in meaningful measures within a single project or variants on that project. I have a very powerful "toolkit" which transcends projects, and for each company I obtain a high level of re-use due to similarity between products. But from company to company or client to client there is little that can be salvaged without serious modifications (apart from the toolkit).
I don't view it in this way at all. OOP is a tool. OK, a very unwieldy and obese tool (in LabVIEW). But it's there nonetheless. As such it's more about choosing the right tool for the job. I tend to only use OOP in LV for encapsulation (if at all). It's a very nice way of doing things like lists or collections (add, remove, sort etc., as opposed to an action engine), but I find the cons far outweigh the pros for projects that I choose to use LabVIEW on. Projects that lend themselves to OOP at the architecture level are better suited to non data-centric tools IMHO.
Heathen You mean not everything "is a" or "has a"? I see this approach most often (and I think many people think that just because it has a class in it... it is object oriented - I don't think you fit into this category though). Pure OOP is very hard (I think) and gets quite incestuous, requiring detailed knowledge of complex interactions across the entire application. I can't disagree that encapsulation is one of the stronger arguments in favour of OOP generally. But in terms of LabVIEW... that's about it, and even then marginally.
Indeed. The huge number of VIs, the extra tools required to realise it, the bugs in the OOP core, the requirement for initialisation before using it... all big advantages (don't rise to that one I'm pulling your leg)
OK. Now the discussion starts There are a couple of facets to my statement. I'll introduce them as the blood-bath discussion continues.
LabVIEW has some very, very attractive features. One of those is that, because it is a data-flow paradigm, state information is implicit. A function cannot execute unless all of its inputs are satisfied. Also, it doesn't matter in what order the data arrives at the function's inputs; execution automatically proceeds once it has. In non-data-flow (imperative) languages, state has to be managed. Functions have to be called in a certain order and under certain conditions to ensure that all inputs are correct when a particular function is executed. OOP was derived from such imperative languages and is designed to manage state. In fact, much of an OOP implementation involves the responsibility of the class to manage its internal state (detail hiding) and to manage the state of the class itself (instantiate, initialise, destroy). In this respect an object instance is synonymous with a dynamically launched VI where an "instance" of the VI is launched. A dynamically launched VI breaks the data-flow since its inputs and outputs are now independent from the main program data-flow, and (assuming we want to get the data from the front panel) we are again back to managing when data arrives at each input and when we read the result (although that's not a "classic" use of a dynamically launched VI). A class is the same.
If you query a "get" method, do you specifically know that all the data inputs have been satisfied before calling the method? Do you even know if it has been instantiated or initialised? The onus is on the programmer to ensure that things are called in the correct order to make sure the class is first instantiated and initialised, and additionally that all the required "set" parameters have been called before executing the "get" method. In "traditional LabVIEW" the placing of the VI on the diagram instantiates and initialises a VI, and the result cannot be read until all inputs have been satisfied. OOP forces emphasis away from the data driving the program, back to the implementation mechanics.
So what am I saying here? In a nutshell, I'm suggesting that in LVOOP the implicit data-flow nature has been broken (it had to be to implement it) and requires re-implementation by the programmer. OK, you may argue that at the macro level it is still data flow because it uses wires and LV primitives. But at the class level it isn't. It is a change in paradigm away from data-flow, and although there may be benefits to doing this, a lot of the features that "traditional" LV programmers take for granted can be turned into "issues" for "non-traditional" programmers.
A good example of this is the singleton pattern (or some might say anti-pattern ). Classic LabVIEW has no issue with this. A VI laid anywhere, in any diagram, any number of times will only access that one VI with its one data store, and each instance must wait for the others to have finished. No need for instance counting, mutexes or locking etc. It's in-built. But in an OOP world, we have to write code to do that.
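For contrast, here is a minimal sketch (in Python, not LabVIEW) of the sort of code an OOP singleton typically needs just to get the behaviour a non-reentrant VI gives you for free; the Logger class and its methods are made up for illustration:

    import threading

    class Logger:                                    # hypothetical "only one of these" resource
        _instance = None
        _create_lock = threading.Lock()

        def __init__(self):
            self._call_lock = threading.Lock()       # serialise callers, like a non-reentrant VI
            self._store = []                         # the single shared data store

        @classmethod
        def instance(cls):
            with cls._create_lock:                   # guard instantiation by hand
                if cls._instance is None:
                    cls._instance = cls()
            return cls._instance

        def log(self, msg):
            with self._call_lock:                    # each caller waits for the others to finish
                self._store.append(msg)

    Logger.instance().log("boot")                    # every call site reaches the same instance

None of that locking or instance management is needed in classic LabVIEW; dropping the VI on the diagram is enough.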
  10. I use a very simple versioning system. Internally (i.e. not releases) it's Major.Minor.Revision.Branch.Increment. For releases it's Major.Minor.Revision only (SVN is up-issued to the release version x.x.x.0.0 at the same time). The emphasis (from left to right) is "from features to bug fixes". Major = major functional changes/feature additions (adding OS support, lots of new features/changes, well, anything major really ). Minor = small functional changes (bug fixes, changes to existing features and maybe a new feature or two). Revision = bug-fix-only release. The aim is to only have Major and Minor releases Of course, there is a big subjective grey area in deciding how many changes/features etc. warrant a major or minor increment, but roughly an above 20% increase in the code base, or over 40% of the code base affected by any changes, would definitely be considered major.
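As a rough illustration only (this parsing helper is not part of any toolkit mentioned here), the internal format sorts naturally if a release is treated as having Branch and Increment of zero:

    # Minimal sketch: parse Major.Minor.Revision[.Branch.Increment] for comparison.
    def parse_version(s):
        parts = [int(p) for p in s.split(".")]
        return tuple(parts + [0] * (5 - len(parts)))   # pad releases out to x.x.x.0.0

    assert parse_version("2.3.1") == (2, 3, 1, 0, 0)                 # release form
    assert parse_version("2.3.1.2.17") > parse_version("2.3.1")      # internal build sorts later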
  11. That's looking pretty slick You've also given me an idea for another use case for my little SQLite project
  12. Yup. DLLs are a nasty business. I avoid them when I can. I presume it is a DLL written in C or C++?
For a char* (i.e. Name) you can use a "String" type with the "C String Pointer" format. Int will depend on how the DLL was compiled (32-bit or 64-bit). An int compiled for x32 will be an I32 integer. For 64-bit it might be I32 or I64 (it's up to whoever compiled it). Void is just an untyped pointer to something (it could be a byte (U8), 2 bytes (I16), 4 bytes (I32), 8 bytes (I64 or a double) and so on), so you have to know the size of that something. If it is an integer, you need to know if it is 64-bit or 32-bit. If it is a double (floating point) then you need to choose Double (it will probably be 8 bytes - I've never seen a 4-byte one in a C/C++ DLL). Then choose "Pointer To Value".
If you look at the function prototype, it will show you what the C calling format is for your function. By selecting the appropriate drop-downs, replace "int" with int32_t or int64_t (depending on what it was compiled with). So your LabVIEW function prototype might look something like
int32_t xiaSetAcquisitionValue(int32_t detChan, CStr Name, int32_t *Value);
or, if the Value parameter is a floating-point number, then
int32_t xiaSetAcquisitionValue(int32_t detChan, CStr Name, double *Value);
If the Value parameter can be any type depending on the "Name" that you are setting, then you can use "Adapt To Type" and define it in your LabVIEW code for each call by just wiring a U32, DBL or whatever.
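If it helps to see the same pointer-to-value idea outside LabVIEW, here is a minimal Python ctypes sketch; only the xiaSetAcquisitionValue prototype comes from the post above, while the DLL name, the calling convention and the "peaking_time" name string are assumptions:

    import ctypes

    lib = ctypes.cdll.LoadLibrary("handel.dll")      # assumed DLL name and cdecl convention
    lib.xiaSetAcquisitionValue.argtypes = [ctypes.c_int32, ctypes.c_char_p, ctypes.c_void_p]
    lib.xiaSetAcquisitionValue.restype = ctypes.c_int32

    value = ctypes.c_double(2.5)                     # the Value parameter, passed by pointer
    status = lib.xiaSetAcquisitionValue(0, b"peaking_time", ctypes.byref(value))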
  13. Somehow I figure you for the "cuddlier" type rather than cuter
  14. I think you've been fed the hype intravenously. That is certainly the OOP mantra. But no evidence has ever been proffered to support this kind of statement (you have a link?). Sure. New thread? Ya think people will talk I would start off with explaining why typedefs (in LabVIEW) provide single-point maintenance, and expand into why OOP breaks in-built LabVIEW features and why many of the OOP solutions are to problems of its own making Maybe I should bring my old signature back
  15. This is "traditional" LabVIEW we are talking about. If you are suggesting never using typedefs (because they create dependencies) then I will take you to task on that. But I think "most" traditional LabVIEW programmers would use this method without all the complexity and overhead of conversions to/from variants.
  16. Does that mean you get 3 salaries since you're obviously doing the work of 3 people
  17. I think most people would have used a cluster (usually with a typedef) and just unbundled. That would also work for the OP
  18. I only see a huge CPU jump when not decimating, so it's more like 333 points (or 333 redraws/sec). If you think 20% is bad, wait until you've tried the vision stuff
  19. If you had followed the link on the page that Asbo gave you, you would have found
  20. Nahh. Both demonstrate the same point, but yours is neater
  21. lol. Only 3 mins difference this time. That'll teach me to watch TV whilst posting
  22. If you are asking what I think you are asking then you have missed the fact that shift registers are "growable". You can select a shift register and "drag" out more nodes.
  23. I'd choose an in-door court. But then, I plan for failure. I don't think there are many companies that would expect you to produce code without the tools to do it, which had me wondering why the OP is being put in this position in the first place, or indeed why the question has even arisen. It's like asking a chauffeur to drive you to the ball with no car. But it's not a case of "stealing" or "won't invest". It's a case of not investing until you absolutely have to (different budgets, different invoice dates, cash flow). And unfortunately, it's the bean counters that run companies nowadays, and an accountant doesn't give 2 hoots if you have to make an aeroplane with a fork and a piece of bamboo, just as long as you do and his figures add up at the end of it.
  24. This can backfire quite spectacularly. My experience has been that companies (generally) won't spend any money unless they really, really have to. If there is a legitimate way to use LabVIEW without paying (either long-term or short-term) then it is usually argued by the bean counters that it is not a necessity to buy now, and once you have used up the goodwill, then the evaluation... they will consider it again (you only make it easier for them to delay). I've always found it far better to argue that the project cannot proceed at all and is at risk unless a licence is bought, and that, although an evaluation is available, it has already been used during the feasibility phase. It can put you in an uncomfortable position where the NI rep is pressing you for an order but the accounts dept are dragging their feet (you are, after all, only one of many requirements for monies) and you are piggy in the middle.