Everything posted by ShaunR
-
Ahhhhh. I see what you are getting at now. The light has flickered on (I have a neon one). I must admit, I did the usual: identify the problem and find a quicker way to replicate it (I thought you were banging on about an old "feature" and I knew how to replicate it). That's why I didn't follow your procedure exactly for the vid (I did the first few times to see what the effect was and thought "Ahhh, that old chestnut"). But having done so, I would actually say the class was easier since I didn't even have to have any VIs open. So it really is a corner of a corner case. Have you raised the CAR yet?

But it does demonstrate (as you rightly say) a little-understood effect. I've been skirting around it for so long I'd forgotten it. I didn't understand why it did it (never bothered, like with so many nuances in LV); I only knew it could happen and modified my workflow so it didn't happen to me.

But in terms of effort defending against it... well, how often does it happen? I said before I've never seen it (untrue of course, given what I've said above), so is it something to get our knickers in a twist about? A purist would say yes. An accountant would say "how much does it cost?" Is awareness enough (in the same way I'm fidgety about Windows indexing and always turn it off)? What is the trade-off between detecting that bug and writing lots of defensive code or test cases that may be prone to bugs themselves? I think it's the developer's call. If it affects both LVOOP and traditional LabVIEW then it should be considered a bug, or at the very least have a big red banner in the help.

Still going to use typedefs though.
-
You are probably better off logging to a file since you will have a huge dataset.
-
I'm not sure I would agree that OOP is still evolving (some say it's a mature methodology). But I would agree LVOOP is probably unfinished. The question is, as we are already 10 years behind the others, will it be finished before the next fad? Since I think we are due for another radical change in program design (akin to what text vs. graphical was, or structured vs. OOP), it seems unlikely.

As for a plug-in way of invoking AEs: just dynamically load them. If you make the call something like "Move.Drive Controller" or "Drive Controller.Move" (depending on how you like it), strip the "Move", use it for the action and dynamically load your "Drive Controller.vi". But for me, compile-time safety is a huge plus for using LabVIEW.
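To make the plug-in idea concrete, here is a rough Python sketch of the same pattern (in LabVIEW you would strip the action from the string and dynamically load the named VI; the module, action and function names below are purely illustrative):

```python
# Minimal sketch of name-based dispatch ("Drive Controller.Move" style).
# A plain dictionary registry stands in for the VI search path; all names
# here are invented for illustration only.

def move(distance=0):
    print(f"Drive Controller: moving {distance} mm")

def stop():
    print("Drive Controller: stopping")

# Registry keyed by "<module>.<action>", analogous to "Drive Controller.Move".
REGISTRY = {
    "Drive Controller.Move": move,
    "Drive Controller.Stop": stop,
}

def invoke(call_string, *args, **kwargs):
    """Split 'Module.Action' and call the registered handler dynamically."""
    handler = REGISTRY.get(call_string)
    if handler is None:
        raise KeyError(f"No plug-in registered for {call_string!r}")
    return handler(*args, **kwargs)

invoke("Drive Controller.Move", distance=10)
invoke("Drive Controller.Stop")
```

The point is only that the "Module.Action" string is the whole plug-in interface: adding a new action engine is just adding an entry (or, in LabVIEW, dropping a VI in the search path), but you give up the compile-time safety mentioned above.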
-
No, it doesn't defeat the object of typedefs. Typedef'd clusters (since you are so hung up on just clusters) are typically used to bundle and unbundle controls/indicators into compound/complex controls so we can have nice neat wires to and from VIs. Additionally, they add clarity and an easy method to select individual components of the compound control. The benefit over normal clusters is that a change propagates through the entire application, so there is no need to go to every VI and modify a control/indicator just because you change the cluster. I (personally) have never used typedef'd constants (or ever seen them used the way you are trying to use them) except as a datatype for bundle-by-name. As I said previously, it is a TypeDef, not a DataDef.

Well, I'm not sure what you are seeing. Here is a vid of what happens when I do the same: http://www.screencast.com/users/Imp0st3r/folders/Jing/media/6d552790-5293-4b47-85bc-2fcb1402b085 All the names are John (which I think was the point). Sure, the bundles change, so now the 0th container is labeled "LastName". But it's just a label for the container (it could have been z5ww2qa). Because you are imposing ordered meaning on the data you are supplying, I think you are expecting it to read your intentions and choose an appropriate label to match your artificially imposed meaningful data. You will have noticed that when you change the cluster order (again, something I don't think most people do, but valid), the order within the cluster changed too (LastName is now at the top). So what you have done is change which container the values are stored in. They are all still stored, and they will all be taken out of the container that you stored them in. Only you are now storing the first name (data definition) in the last name (container). If you are thinking this will not happen with your class... then how about this? http://www.screencast.com/users/Imp0st3r/folders/Jing/media/672c5406-a56d-4c7a-a177-ab31a3c0cd15

What problem? I think you are seeing a typedef as more than it really is, and you have probably found an edge case which seems to be an issue for your usage/expectation. It is just a control. It even has a control extension. It's no more an equivalent to a class than it is to a VI. The fact you are using a bundle/unbundle is because you are using a compound control (cluster), and that has little to do with typedefs. Making such a control into a typedef just means we don't have to go to every VI front panel and modify it manually when we change the cluster.

Yup. And if one doesn't exist, I write one (or at least title a document "Design Specification") by interrogating the customer. But mainly our projects are entire systems and you need one to prove that the customer's requirements have been met by the design. Seat-of-yer-pants programming only works with a lot of experience and a small amount of code.

It's nothing to do with in-memory or not (I don't think). What you are seeing is the result of changing the order of the components within the cluster. An enum isn't a compound component, so there is no order associated.

Nope. The source is in SVN. OK, you have to have a copy of the VIs on the machine you are working on, in the same way that you have to have the class VIs present to be able to use them. So I'm not really sure what you are getting at here.

A module that might expose a typedef would be an action engine. I have a rather old drive controller, for example, that has an enumerated typedef with Move In, Move Out, Stop, Pause and Home. If I were to revisit it, I would probably go for a polymorphic VI instead, purely because it would only expose the controls for that particular function (you don't need a distance parm for Home or Stop, for example) rather than just ignoring certain inputs. But it's been fine for 3 years, and "if it ain't broke, don't fix it", I suppose.

But it's not used like that, and I cannot think of a situation where you would want to (what would be the benefit?). It's used either as a control or as a "Type Definition" for a bundle-by-name. It's a bit like laying down a queue reference constant. Sure you can. But why would you? Unless, of course, you want to impose "Type" or cast it.

Maybe I don't. But I do know "by-val" doesn't mean it's "data-flow" any more than using a "class" means "object oriented". Like you said, it's up to the programmer. It's just that the defaults are different. In classic LabVIEW, the default is implicit state with single instances. In LVOOP it's multiple instances with managed state. Either can be made to do the other; it's just the amount of work to turn one into the other. Well, that's how it seems to a heathen like me.
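For anyone unfamiliar with the pattern, here is a rough text-language sketch of that enumerated action engine (the command set is taken from the post above; the branch bodies are invented):

```python
from enum import Enum, auto

# The command set mirrors the enumerated typedef described above; the body of
# each branch is invented purely to show the pattern.
class DriveCmd(Enum):
    MOVE_IN = auto()
    MOVE_OUT = auto()
    STOP = auto()
    PAUSE = auto()
    HOME = auto()

def drive_controller(cmd: DriveCmd, distance: float = 0.0) -> None:
    """Action-engine style dispatch: one entry point, a case per command.
    'distance' is simply ignored for STOP/PAUSE/HOME, which is the wart a
    polymorphic VI (one method per command) would remove."""
    if cmd is DriveCmd.MOVE_IN:
        print(f"moving in {distance}")
    elif cmd is DriveCmd.MOVE_OUT:
        print(f"moving out {distance}")
    elif cmd is DriveCmd.STOP:
        print("stopping")
    elif cmd is DriveCmd.PAUSE:
        print("pausing")
    elif cmd is DriveCmd.HOME:
        print("homing")

drive_controller(DriveCmd.MOVE_IN, distance=12.5)
drive_controller(DriveCmd.HOME)  # distance not needed here
```

A polymorphic VI would replace the single entry point with one method per command, so Home and Stop simply would not have a distance input to ignore.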
-
Does what it says on the tin in LV x64
-
I already have my own VIs that convert using the Windows API calls. I was kinda hoping they were more than that. I originally looked at it all when I wrote PassaMak, but decided to release it without Unicode support (using the API calls) to maintain cross-platform compatibility. Additionally, I was put off by the hassles with special ini settings, the pain of handling standard ASCII and a rather woolly dependency on code pages; it seemed a one-or-the-other choice and not guaranteed to work in all cases. As with most of my stuff, I get to revisit it periodically, and I recently started to look again with a view to using UTF-8, which has the capability of identifying ASCII and Unicode chars (regardless of code pages). That should make it fairly bulletproof and boil down to basically inserting bytes (for the ASCII chars) if the ini is set and not if it isn't. Well, that's the theory at least, and so far, so good. Although I'm not sure what LV will do with 3- and 4-byte chars and therefore what to do about them. That's the next step when I get time.
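As a rough sketch of the UTF-8 idea (in Python rather than LabVIEW, and assuming the ini-enabled Unicode mode wants UTF-16LE, which is what "inserting bytes for the ASCII chars" amounts to; that assumption may well not hold for the 3- and 4-byte cases):

```python
# ASCII bytes are < 0x80, so they can pass through untouched, while multi-byte
# UTF-8 sequences flag themselves by their lead byte. The conversion target is
# assumed to be UTF-16LE; the exact encoding LabVIEW's ini setting expects may differ.

def is_plain_ascii(data: bytes) -> bool:
    """True if every byte is a 7-bit ASCII character (no UTF-8 lead bytes)."""
    return all(b < 0x80 for b in data)

def to_lv_unicode(data: bytes) -> bytes:
    """Decode UTF-8 and re-encode as UTF-16LE; pure-ASCII text simply gains
    a zero byte after every character, which is the 'inserted bytes' effect."""
    return data.decode("utf-8").encode("utf-16-le")

print(is_plain_ascii("hello".encode("utf-8")))   # True
print(is_plain_ascii("héllo".encode("utf-8")))   # False - é is a two-byte sequence
print(to_lv_unicode(b"hi"))                      # b'h\x00i\x00'
```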
-
How to match the data type when I call a DLL
ShaunR replied to Yanan's topic in Calling External Code
Indeed. I think for the most part it is true, especially for OSs. However, I have met on numerous occasions programmers who prefer to just use "int" in the code (as opposed to int32, int64, long, double word et al.; there are so many) and use a compiler directive when compiling for x64, x32 or even x16 (therefore converting ALL ints to whatever). It seems particularly prevalent when porting. Like you said: programmers are lazy.
-
Hell hath no fury like a woman scorned (or ignored). Yes and no. It depends on what exactly you are talking about. Extending a class by adding methods/properties? Or inheriting from that class and overriding/modifying existing properties and methods? My proposition is that my method is identical to the latter. I assumed you meant the latter since you were speaking in the context of a closed component that would be extended by the user. In that scenario the user only has the inheritance option (there are other differences, but nothing significant, I don't think). Or maybe I just missed what you are trying to say. But it's interesting you say that I can extend the operations. I would have argued (was expecting) the opposite since, in theory, although properties are synonymous with "controls", operations seem fixed. But I'll leave that one for you to think about how I might respond, since I appear to be basically arguing against myself.

Problem? No. It's much more of a problem building an executable with them. There are many more ways to shoot yourself in the foot with OOP. But for the case when you don't have a top-level VI there are a couple of "tricks" from the old boys. Many moons ago there used to be a "VI Tree.vi". You will see it with many drivers and I think it is still part of the requirement for a LV driver (although I haven't checked recently). Is it to show everyone what the VI hierarchy is? Well, yes. But that's not all. It's also the "replacement" application to ensure all VIs are loaded into memory. However, with the advent of "required" and "optional" terminals, its effectiveness was somewhat diminished since you can no longer detect broken VIs without wiring everything up. The other method (which I employ) is to create test harnesses for grouped modules (system tests). You will see many of my offerings on LAVA come with quite a few examples. This is because they are a subset of my test harnesses, so they are no extra effort and help people understand how to use them. Every new module gets added to the test harnesses and the test harnesses get added to a "run test" VI. That is run after every few changes (take a look at the examples in the SQLite API). It's not a full factorial test harness (that's done later), but it ensures that all the VIs are loaded in memory and is a quick way to detect major bugs introduced as you go along. Very often they end up being a sub-system in the actual application.

LabVIEW is well suited to top-down design due to its hierarchical nature. Additionally, top-down design is well suited to design by specification decomposition. Drivers, on the other hand, lend themselves to bottom-up. However, for top-down I find that as you get further down the tree, you get more and more functional replication, where many functions are similar (but not quite identical), and that is not conducive to re-use and modularisation (within the project). I use a "diamond" approach (probably not an official one, but it describes the resulting architecture) which combines top-down AND bottom-up and which (I find) exposes "nodes" (the tips of the diamonds) that are ripe for modularisation and provide segmentation (vertically and horizontally) for inter-process comms.

Is this because you have only made a relationship between typedefs as clusters being synonymous with a class's data control? What about an enumeration as a method?

Ahhhh. IC. Here is your example "fixed". Everything is behaving as it should. But you are assuming that the data you are supplying is linked to the container. It isn't; therefore you are (in fact) supplying it with the wrong data rather than the bundle/unbundle selecting the wrong cluster data. It's no wonder I've never seen it. It's a "type definition", not a "data definition". That's why I use the phrase "automagically".

Disagree. There has to be much, much more to justify the complexity switch and the ditching of data-flow. I do know when a change will impact other areas, because my designs are generally modularised and therefore contained within a specific segment, rather than passed (transparently) through umpteen objects that may or may not exist at any one point in time.

Ermm. Nope. You don't have multiple instances of a cluster. We are in the "data-driven" world here. A cluster is just a way of viewing or segmenting the data. It's the data that's important, not the container or the access method. Yes, a class has a cluster as the data member, but that's more to do with realising OOP in LabVIEW than anything else. If anything, the similarity is between a data member and a local variable that is protected by accessors.

Ahh. I'm with you now. Sounds complicated. I prefer files. That reminds me of the "Programmer's Quick Guide to the Languages" entry for C++ (I've posted it on here before, I'm sure).
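Going back to the "run test" VI mentioned above, the equivalent habit in a text language would be a single aggregator that discovers and runs every module test in one go (a minimal sketch; the directory layout and test names are hypothetical):

```python
# A tiny "run test" aggregator: discover every module test under one folder
# and run the lot, so a quick smoke run loads everything and flags broken
# pieces early, much like running the test-harness VIs after a few changes.
import unittest

def run_all(start_dir: str = "tests") -> bool:
    """Discover every test_*.py under start_dir, run it, return pass/fail."""
    suite = unittest.defaultTestLoader.discover(start_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    raise SystemExit(0 if run_all() else 1)
```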
-
Ooooh. Where are they? Indeed. I think most people (including myself) generally think that Unicode support = any-language support, although it's a bit of a leap. If the goal is simply to make multilingual LabVIEW interfaces, then Unicode can be ignored completely in favour of UTF-8, which isn't code-page dependent (I've been playing with this recently and wrote my own routines to detect and convert to the LV Unicode format so you don't get all the spaces). This would mean the old programs would still function correctly (in theory, I think, but still playing).
-
Here ya go
-
Just a note: this property node only tells you if the VI is in front of other VIs. If you click on (say) your web browser, it will still say it is frontmost.
-
Lots of excellent points here. I'll break them up into different posts since it's getting rather tedious reading my OWN posts in one go.

21 again? Happy b'day, Mrs Daklu.

I don't think this is so. To extend a component that uses a typedef, it is just a matter of selecting "Create SubVI" and then "Create Constant" or "Create Control/Indicator". Then the new VI inherits all the original component's functionality and you are free to add more if you desire (or hide some).

Well, you do. In classical LabVIEW, loading the top-level application loads ALL VIs into memory (unless one is dynamically loaded).

A number of points here:

1. Adding data (a control?) to a typedef cluster won't break anything (only clusters use bundle and unbundle). All previous functionality is preserved, but the new data will not be used until you write some code to do so. The proviso here (as you say) is to use "bundle/unbundle by name" (see point 3) and not straight bundling or array-to-cluster functions (which have a fixed number of outputs). The classic use, however, is a typedef'd enumerated control which can be used by various case structures to switch operations and is impervious to re-ordering or renaming of the enum contents.

2. Renaming may or may not break things (as you state). If it's a renamed enumeration, string, boolean etc. (or base type, as I call them), then nothing changes. If it's an element in a cluster, then it will.

3. I've never seen a case (nor can I see how) where an "unbundle/bundle by name" has ever chosen the wrong element in a typedef'd cluster, or indeed a normal cluster (I presume you are talking about clusters, because any control can be typedef'd). A straight unbundle/bundle I can understand (they are index based), but that's nothing to do with typedefs (I never use them, a) because of this and b) by-name improves readability). An example perhaps? (See the sketch below.)

I think a class is a bit more than just a "super" typedef. In fact, I don't see them as the same at all. A typedef is just a control that has this special ability to propagate its changes application-wide. A class is a "template" (it doesn't exist until it is instantiated) for a module (a nugget of code, if you like). If you do see classes and typedefs as synonymous, then that's actually a lot of work for very little gain. Each new addition to a cluster (a class's data member?) would require 2 new VIs (methods): 10 elements, 20 VIs. Contrast this with adding a new element to a typedef'd cluster: no new VIs. 10 elements, 1 control (remember my single-point maintenance comment?).

Immutable objects? You mean a "constant", right?

It'd probably be better than my internet connection recently. The other night I had 20 disconnects.

More to follow....
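Here is the sketch promised in point 3: a rough text-language analogy for why by-name access is impervious to re-ordering while straight (index-based) bundling is not. The field names are invented:

```python
# "By name" access survives a re-ordering of the cluster, while positional
# (straight bundle/index) access silently picks up whatever now lives in that
# slot. Field names are invented for illustration.
from dataclasses import dataclass, astuple

@dataclass
class PersonV1:          # original "cluster" order
    first_name: str
    last_name: str

@dataclass
class PersonV2:          # someone re-ordered the elements
    last_name: str
    first_name: str

p1 = PersonV1("John", "Smith")
p2 = PersonV2(last_name="Smith", first_name="John")

print(p1.first_name, p2.first_name)    # by name:  John John   (still correct)
print(astuple(p1)[0], astuple(p2)[0])  # by index: John Smith  (slot 0 changed meaning)
```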
-
Let me know how you get on. I've been meaning to revisit it, but it does everything I need it to at the moment, so I couldn't find an excuse.
-
Why are they password protected? Will we be seeing proper Unicode support soon?
-
This is the sort of thing Dispatcher was designed for. Each cell would have a dispatcher and the manager simply subscribes to the test cell that he wishes to see. You can either just send an image of the FP for a single cell, and/or write a little bit of code to amalgamate the bits they are interested in from multiple cells to make process overviews. Of course, it would require the LV run-time on the manager's machine.
-
It works quite well for FP stuff, but you can't localise dialogues (for any app) or error messages.
-
I would spend more time on the VI functionality if I were you. Pretty pictures won't make it work any better.
-
You are better off having separate files for each language. Then you can have an ASCII key (which you can search on) and place the real string into the control. It also enables easy switching just by specifying a file name, e.g.

msg_1=種類の音を与えます

You could take a look at Passa Mak. It doesn't support Unicode, but it solves a lot of the problems you will come across. Alternatively, you could use a database so that you can use SQL queries instead of string functions to get your strings (it's LabVIEW's string functions/parsing that cause most of the problems).
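A minimal sketch of the separate-file-per-language idea (Python for illustration; the file name and key are made up, and the file is assumed to be saved as UTF-8):

```python
# Each language file holds ascii keys mapped to translated strings; switching
# language is just switching the file name passed to the loader.
def load_language(path: str) -> dict[str, str]:
    """Read key=value lines (UTF-8) into a lookup table."""
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                table[key.strip()] = value.strip()
    return table

# strings_ja.txt might contain:  msg_1=種類の音を与えます
strings = load_language("strings_ja.txt")
print(strings.get("msg_1", "<missing>"))   # set this text into the control
```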
-
The config file VIs do not support Unicode. They use ASCII operations internally for comparison (no string operations in LV currently support Unicode). You will have to read it as a standard text file, then convert it with the tools above back to ASCII.
-
What would be nice is for the LV project manager to be able to handle (nest?) multiple projects (the way many other IDEs do). Then we could make a project for each target and add them to a main project. (I know we sort of have this with lvlibs, but it's not quite the same thing.) Once we had that, each sub-project would just be a branch off the main SVN (or Mercurial, if you like) trunk (main project) and we could work on each target in isolation if we wanted to. Unless, of course, we already can and I haven't figured it out yet.
-
The Implode 1D Array implodes (concatenates rather than separates) each value in the array into a quoted string. The value is, well, the value (e.g. 3.1); the field is the field name (e.g. Time). If you want to set an affinity for a field (REAL, INTEGER, TEXT et al.), that is achieved when you create the fields with "Create Table". Anything can be written to any field type and SQLite will convert it to the defined affinity for storage. The API always writes and reads as a string (it uses a string basically like a variant), but SQLite converts it automagically.
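To see the affinity behaviour in action outside LabVIEW, here is a small Python sqlite3 sketch (table and field names invented): the column is declared REAL, the value is bound as a quoted string, and SQLite converts it on storage:

```python
# Demonstration of SQLite type affinity: write a quoted string into a REAL
# column and read back a float. Uses Python's sqlite3 rather than the LabVIEW API.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (Time TEXT, Value REAL)")

# Write everything as quoted strings, exactly like the implode-to-string idea.
con.execute("INSERT INTO log VALUES ('12:00:01', '3.1')")

row = con.execute("SELECT Value, typeof(Value) FROM log").fetchone()
print(row)   # (3.1, 'real') - the string '3.1' was stored with REAL affinity
```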
-
This is what I consider "most" LabVIEW programmers (regardless of traditional or not) to be, and the analogy I've used before is between "pure" mathematicians and "applied" mathematicians. Pure mathematicians are more interested in the elegance of the maths and its conceptual aspects, as opposed to "applied" mathematicians, who are more interested in how it relates to real-world application. Is one a better mathematician than the other? I think not. It's purely an emphasis. Both need to have an intrinsic understanding of the maths. I think most LabVIEW programmers, by the very nature of the programs they write and the suitability of the language to those programs, are "applied" programmers, but that doesn't mean they don't have an intrinsic understanding of programming or indeed how to architect it.

Nice, pragmatic and modest post. I think many people are coming to this sort of conclusion in the wake of the original hype, as indeed happened to OOP in C++ more than 10 years ago. It's actually very rare to see a pure OOP application in any language. Most people (from my experience) go for encapsulation, then use structured techniques.

You've actually hit on the one thing "traditional" LabVIEW cannot do (or simulate): run-time polymorphism (enabled by dynamic dispatch). However, there are very few cases where it is required (or desirable) unless, of course, you are using LVOOP. Then it's a must-have to circumvent LabVIEW's requirement for design-time strict typing (another example of breaking LabVIEW's in-built features to enable LVOOP). Well, that's how it seems to me at least. There may be some other reason, but in other languages you don't have "dynamic dispatch" type function arguments.

But aside from that, I never use waterfall (for software at least). I find an iterative approach (or "agile" as it is called nowadays) much more useful and manageable. Sure, the whole project (including hardware, mechanics, etc.) will be waterfall (it's easier for management to see progress and you have to wait for materials), but within that, at the macro level, the software will be iterative with release milestones in the waterfall. As a result, the software is much quicker to react to changes than the other disciplines, which means that the software is never a critical path until the end-of-project test phase (systems testing: you can't test against all the hardware until they've actually built it). At that point, however, the iterative cycle just gets faster with smaller changes, since by that time you are (should be) reacting to purely hardware integration problems, so it's fairly straightforward to predict.
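For readers who haven't met the term, a bare-bones illustration of run-time polymorphism via dynamic dispatch (Python; the instrument classes are invented): which measure runs is decided by the object's type at run time, not at edit time:

```python
# Dynamic dispatch: the caller is written against the base class and the
# concrete behaviour is chosen per object at run time.
class Instrument:
    def measure(self) -> str:
        raise NotImplementedError

class Voltmeter(Instrument):
    def measure(self) -> str:
        return "1.23 V"

class Thermometer(Instrument):
    def measure(self) -> str:
        return "21.5 C"

def read_all(instruments: list[Instrument]) -> None:
    for inst in instruments:          # dispatch happens here, per object
        print(inst.measure())

read_all([Voltmeter(), Thermometer()])
```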
-
Does having to keep putting a True boolean down annoy you as much as it does me?
-
Does anyone use RowID a lot? I'm finding that the more I use the API, the more annoyed I am at having to always put a True boolean for No RowID (it might just be my use cases). I'm thinking about inverting it so that the default is no RowID. Would this cause lots of issues for people already using it (if any... lol)?
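In text-API terms, the proposed change is just flipping a default argument: callers that already pass the flag explicitly are unaffected, and only callers relying on the old default change behaviour (a sketch with invented function and flag names):

```python
# Illustration of flipping a default value while keeping explicit callers intact.

def insert_row_old(values, no_rowid: bool = False):
    """Old signature: callers had to pass no_rowid=True nearly every time."""
    return f"insert {values} (rowid {'off' if no_rowid else 'on'})"

def insert_row_new(values, no_rowid: bool = True):
    """Proposed signature: 'no RowID' becomes the default."""
    return f"insert {values} (rowid {'off' if no_rowid else 'on'})"

print(insert_row_old([1, 2]))                  # rowid on  (old default)
print(insert_row_new([1, 2]))                  # rowid off (new default)
print(insert_row_new([1, 2], no_rowid=False))  # explicit callers unaffected
```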