Posts posted by ShaunR
-
There are a number of ways you can go about it. It depends on how you want to organise the data and what you want to do with it on screen.
If you are only going to show the last five minutes, then you can use a history chart. A 1 kHz sample rate means about 300,000 samples (plot points) per channel over five minutes, which is a lot, so you will probably have to decimate (plot every n points). However, it's worth bearing in mind that you probably won't have 300,000 pixels in your graph anyway, so plotting them all is only really useful if you are going to allow them to zoom in. There are other ways (JGCode's suggestion is one, queues and a database are another), but that's the easiest and most hassle-free way with minimum coding.
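To make the "plot every n points" idea concrete, here is a minimal C sketch of the decimation step (function and variable names are hypothetical; in LabVIEW this would just be a loop indexing every n-th element before wiring the result to the graph):

    /* Minimal sketch of "plot every n points" decimation, assuming samples
     * arrive as an array of doubles. Names are illustrative only. */
    #include <stddef.h>

    /* Copies every 'factor'-th sample of 'src' into 'dst' and returns the
     * number of points written. 'dst' must hold at least len/factor + 1 values. */
    size_t decimate(const double *src, size_t len, size_t factor, double *dst)
    {
        size_t out = 0;
        if (factor == 0)
            factor = 1;                 /* guard against a zero divisor */
        for (size_t i = 0; i < len; i += factor)
            dst[out++] = src[i];
        return out;
    }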
Ideally you want to stream the data into a nice text file - just as you would see it in an array (I use comma or tab delimited when I can). Then you can load it up in a text editor or spreadsheet and it will make sense, and you won't need to write code to interpret it just to read it. You can always add that later if it's taking too long to load in the editor. If the messages are coming in 1,2,3,4, 1,2,3,4 etc. then that's not a problem. However, it becomes a little more difficult if they are coming in ad hoc, and you will need to find a way of re-organising them before saving so that your text file table lines up. Hope you have a big hard disk.
Oh. And one final thought. Kill the Windows Indexing service (if you are using Windows, that is). You don't want to get 4 hours in and suddenly get a "file in use" error.
-
While the nodes I spoke about probably call the Windows API functions under Windows, they are native nodes (light yellow) and supposedly call the corresponding platform API on other platforms for converting Unicode (UTF-8, I believe) to ANSI and vice versa. The only platforms where I'm pretty sure they either won't even load, or if they do will likely be NOPs, are some of the RT and embedded platforms.
Possible fun can arise from the fact that the Unicode tables used on Windows are not exactly the same as on other platforms, since Windows has slightly diverged from the current Unicode tables. This is mostly apparent in collation, which influences things like the sort order of characters, but it might not be a problem in the pure conversion. This, however, makes one more difficulty with full LabVIEW support visible. It's not just about displaying and storing Unicode strings, UTF-8 or otherwise, but also about many internal functions such as sort, search etc., which would have to have proper Unicode support too. Because of the differences in Unicode tables they would either end up with slightly different behaviour on different platforms, or NI would need to incorporate full-blown Unicode support into LabVIEW, such as the ICU library, to make sure all LabVIEW versions behave the same; but then they would behave differently on some systems to the native libraries.
Indeed (to all of it). But it's rather a must now as opposed to, say, 5 years ago. Most other high-level languages now have full support (even Delphi finally...lol). I haven't been critical about this so far because NI came out with x64. Given a choice of x64 or Unicode, my preference was the former, and I appreciate the huge amount of effort that must have been. But I'd really like to at least see something on the roadmap.
Are these the VIs you are talking about?
These I've tried. They are good for getting things in and out of LabVIEW (e.g. files or the internet) but no good for display on the UI. For that, the ASCII needs to be converted to UCS-2 BE and the Unicode needs to remain as it is (UTF-8 doesn't cater for that). And that must only happen if the INI switch is set; otherwise it must be straight UTF-8.
The beauty of UTF-8 is that it's transparent for ASCII, therefore the inbuilt LV functions work fine. I use a key as a lookup for the display string, which is OK as long as it is an ASCII string. I can live with that.
The real problem is that once the INI setting is set (or a control is set to Force Unicode after it is set), it cannot be switched back without exiting LabVIEW or recreating the control. So on-the-fly switching is only viable if, when it is set, ASCII can be converted. Unless you can think of a better way?
-
<snip>
Ahhhhh. I see what you are getting at now.
The light has flickered on (I have a neon one).
I must admit, I did the usual: identify the problem and find a quicker way to replicate it (I thought you were banging on about an old "feature" and I knew how to replicate it). That's why I didn't follow your procedure exactly for the vid (I did the first few times to see what the effect was and thought "Ahhh, that old chestnut"). But having done so, I would actually say the class was easier since I didn't even have to have any VIs open.
So it really is a corner of a corner case. Have you raised the CAR yet?
But it does demonstrate (as you rightly say) a little understood effect. I've been skirting around it for so long I'd forgotten it. I didn't understand why it did it (never bothered, like with so many nuances in LV); I only knew it could happen and modified my workflow so it didn't happen to me.
But in terms of effort defending against it. Well. How often does it happen? I said before, I've never seen it (untrue of course, given what I've said before), so is it something to get our knickers in a twist about? A purist would say yes. An accountant would say "how much does it cost"?
Is awareness enough (in the same way I'm fidgety about Windows indexing and always turn it off)? What is the trade-off between detecting that bug and writing lots of defensive code or test cases that may be prone to bugs themselves? I think it's the developer's call. If it affects both LVOOP and traditional LabVIEW then it should be considered a bug, or at the very least have a big red banner in the help.
Still going to use typedefs though
-
Thanks jgcode & Matt,
I'm going to monitor the suspension of a car with 3 linear potentiometers. The most important thing is that I have sample rates of 1 kHz. Higher is even better.
So the array will be quite huge. The LabVIEW program has to run the whole day on my laptop.
I'm getting my info through UDP. There is a CAN network, and with an Analog2CAN converter I'm reading my potentiometers.
I get about 200 IDs in my program and I'm putting them in several arrays, but is this the best, least failure-prone, fastest way?
Is a waveform then still better?
Looking forward to your answers,
____________
Michael ten Den
You are probably better off logging to a file since you will have a huge dataset.
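In case it helps, here is a rough sketch (in C, just to show the idea; the file name and format are illustrative assumptions) of streaming each sample to a tab-delimited text file as it arrives instead of accumulating a day's worth of data in an array:

    /* Minimal sketch of streaming samples to a tab-delimited text file as they
     * arrive, rather than growing an in-memory array all day. File name and
     * channel count are illustrative assumptions. */
    #include <stdio.h>

    int log_sample(FILE *fp, double t, const double *channels, int n_channels)
    {
        if (fprintf(fp, "%.6f", t) < 0)
            return -1;
        for (int i = 0; i < n_channels; i++)
            if (fprintf(fp, "\t%.6f", channels[i]) < 0)
                return -1;
        return fputc('\n', fp) == EOF ? -1 : 0;
    }

    int main(void)
    {
        FILE *fp = fopen("suspension_log.txt", "a");
        if (!fp)
            return 1;
        double pots[3] = { 1.23, 4.56, 7.89 };   /* example potentiometer readings */
        log_sample(fp, 0.001, pots, 3);          /* one row per 1 kHz sample       */
        fclose(fp);
        return 0;
    }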
-
Hi,
I'm a theoretical OOPer. My problem is that I'm stuck in 7.1. and really like uml...
So here are my theoretical 'insights'.
There is a line of development in other languages as well, from the cluster (record, struct) to the class. In this development of programming languages, different features were added, merged or dismissed as evolution played out. Using a type def, you already introduce the class(=*.ctl)/object(=wire) abstraction. With the Action Engine, we got encapsulation (all data is private) and methods (including accessors). With LVOOP we have inheritance. Still, LVOOP doesn't support things that other languages have had for ages (interfaces, abstract classes and methods). But on the other hand it allows for by-val implementation (objects that don't have an identity) as well as by-ref.
I seriously consider LVOOP unfinished, because it doesn't allow you to draw code the same way as you do non-LVOOP code, with wires and nodes. It's mainly some trees and config windows.
But I also don't think the evolution of other OOP languages is finished yet. See UML, where you only partially describe the system graphically, which means you can never create compilable code (partially undefined behaviour). Also, UML still has a lot of text statements (operations and properties are pure BNF text statements).
So the merging towards a graphical OOP code is still work in progress.
Let's get practical. On my private project I have to deal with OOP designs (uml/xmi and LV-GObjects). One issue that isn't possible to handle with type def/AE is to represent the inheritance. Let's say I want to deal with control (parent), numeric control (child) and string control (child) and have some methods to serialize them to disk.
For a generic approach I started using variants. The classID (string) is changed to a variant. All properties I read from the property nodes are placed as variant attributes. This can even be nested, e.g. for dealing with labels (they get serialized as decoration.text as an object of their own, and set as an attribute). Wow, I have compositions!
Well, I lose all compile time safety. But I wonder what I'd get when combining it with AEs and some 'plugin' way to get dynamic dispatch.
Ahh, wasn't C++ written with C?
Felix
I'm not sure I would agree that OOP is still evolving (some say it's a mature methodology). But I would agree LVOOP is probably unfinished. The question is, as we are already 10 years behind the others, will it be finished before the next fad?
Since I think that we are due for another radical change in program design (akin to what text vs graphical was, or structured vs OOP), it seems unlikely.
As for a plug-in way of invoking AEs: just dynamically load them. If you make the call something like "Move.Drive Controller" or "Drive Controller.Move" (depending on how you like it), strip the "Move", use it for the action, and load your "Drive Controller.vi". But for me, compile-time safety is a huge plus for using LabVIEW.
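As a rough illustration of that naming trick, here is a small C sketch (names and buffer sizes are hypothetical) that splits a call string like "Drive Controller.Move" into the VI to load dynamically and the action string to pass to it:

    #include <stdio.h>
    #include <string.h>

    /* Splits "Drive Controller.Move" into "Drive Controller.vi" and "Move".
     * The convention is the one suggested above; the dispatch step itself is
     * left out. Returns 0 on success, -1 on a malformed call string. */
    int split_call(const char *call, char *vi_name, size_t vi_len,
                   char *action, size_t act_len)
    {
        const char *dot = strrchr(call, '.');   /* last '.' separates module and action */
        if (!dot || dot == call || *(dot + 1) == '\0')
            return -1;
        size_t mod_len = (size_t)(dot - call);
        if (mod_len + 4 >= vi_len || strlen(dot + 1) >= act_len)
            return -1;
        memcpy(vi_name, call, mod_len);
        strcpy(vi_name + mod_len, ".vi");       /* "Drive Controller" -> "Drive Controller.vi" */
        strcpy(action, dot + 1);                /* action string for the AE                    */
        return 0;
    }

    int main(void)
    {
        char vi[64], action[32];
        if (split_call("Drive Controller.Move", vi, sizeof vi, action, sizeof action) == 0)
            printf("load %s dynamically, pass action %s\n", vi, action);
        return 0;
    }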
-
Uhh... disconnecting the constants from the typedef doesn't fix the problem. The only output that changed is code path 2, which now outputs the correct value instead of an incorrect value at the cost of code clarity. I can easily imagine future programmers thinking, "why is this unbundling 's1' instead of using the typedef and unbundling 'FirstName?'" And doesn't disconnecting the constant from the typedef defeat the purpose of typedeffing clusters in the first place? You're going to go manually update each disconnected constant when you change the typedef? What happened to single point maintenance?
No it doesn't defeat the object of typedefs.
Typedef'd clusters (since you are so hung up on just clusters) are typically used to bundle and unbundle controls/indicators of compound/complex controls so we can have nice neat wires to and from VIs. Additionally, they add clarity and an easy method to select individual components of the compound control.
The benefit as opposed to normal clusters is that a change propagates through the entire application, so there is no need to go to every VI and modify a control/indicator just because you change the cluster. I (personally) have never used typedef'd constants (or ever seen them used the way you are trying to use them) except as a datatype for bundle by name. As I said previously, it is a TypeDef not a Datadef.
Regardless, the constants was something of a sideshow anyway... like I said, I just discovered it today. The main point is what happens to the bundle/unbundle nodes wired to and from the conpane controls. (Paths 1, 3, 5, and 6.) Your fix didn't change those at all.
Results from Typedef Heaven:
<snip>
Well. I'm not sure what you are seeing. Here is a vid of what happens when I do the same.
http://www.screencast.com/users/Imp0st3r/folders/Jing/media/6d552790-5293-4b47-85bc-2fcb1402b085
All the names are John (which I think was the point). Sure the bundles change, so now the 0th container is labeled "LastName". But it's just a label for the container (could have been z5ww2qa). But because you are imposing ordered meaning on the data you are supplying, I think you are expecting it to read your intentions and choose an appropriate label to match your artificially imposed meaningful data. You will have noticed that when you change the cluster order (again something I don't think most people do - but valid), the order within the cluster changed too (LastName is now at the top). So what you have done is change into which container the values are stored. They are both still stored. They will all be taken out of the container that you stored them in. Only you are now storing the first name (data definition) in the last name (container).
If you are thinking this will not happen with your class....then how about this?
http://www.screencast.com/users/Imp0st3r/folders/Jing/media/672c5406-a56d-4c7a-a177-ab31a3c0cd15
I see your point with respect to the cluster constants, though as I mentioned above I'm not convinced disconnecting the constant from the typedef is a good general solution to that problem.
What problem?
I think you are seeing a typedef as more than it really is, and you have probably found an edge case which seems to be an issue for your usage/expectation. It is just a control. It even has a control extension. It's no more an equivalent to a class than it is to a VI. The fact you are using a bundle/unbundle is because you are using a compound control (cluster) and it has little to do with typedefs. Making such a control into a typedef just means we don't have to go to every VI front panel and modify it manually when we change the cluster.
Specification? You get a software spec? And here I thought a "spec document" was some queer form of modern mythology. (I'm only half joking. We've tried using spec documents. They're outdated before the printer is done warming up.)
Yup. And if one doesn't exist, I write one (or at least title a document "Design Specification") by interrogating the customer. But mainly our projects are entire systems and you need one to prove that the customer's requirements have been met by the design. Seat-of-yer-pants programming only works with a lot of experience and a small amount of code.
It's past the stupid hour in my timezone... I don't understand what you're asking.
My concern with typedeffed enums is the same concern I have with typedeffed clusters. What happens to a preset enum constant or control on an unloaded block diagram when I make different kinds of changes to the typedef itself? (More precisely, what happens to the enum when I reload the vi after making the edits?)
It's nothing to do with in memory or not (I don't think). What you are seeing is the result of changing the order of the components within the cluster. An enum isn't a compound component so there is no order associated.
Using a class as a protected cluster is neither complex nor disposes of data flow. There are OO design patterns that are fairly complex, but it is not an inherent requirement of OOP.
So your modules either do not expose typedefs as part of their public interface or you reuse them in other projects via copy and paste (and end up with many copies of nearly identical source code,) right?
Nope. The source is in SVN. OK, you have to have a copy of the VIs on the machine you are working on, in the same way that you have to have the class VIs present to be able to use them. So I'm not really sure what you are getting at here.
A module that might expose a typedef would be an action engine. I have a rather old drive controller, for example, that has an enumerated typedef with Move In, Move Out, Stop, Pause, Home. If I were to revisit it then I would probably go for a polymorphic VI instead, purely because it would only expose the controls for that particular function (you don't need a distance parm for Home or Stop, for example) rather than just ignoring certain inputs. But it's been fine for 3 years and if it "ain't broke, don't fix it".
My fault for not being clear. I meant multiple instances of a typedeffed cluster. I was freely (and confusingly) using the terms interchangeably. Dropping two instances of the same class cube on the block diagram is essentially equivalent to dropping two instances of a typedeffed cluster on the block diagram. Each of the four instances on the block diagram has its own data space that can be changed independently of the other three.
I suppose. But it's not used like that and I cannot think of a situation where you would want to (what would be the benefit?). It's used either as a control, or as a "Type Definition" for a bundle-by-name. It's a bit like laying down a queue reference constant. Sure you can. But why would you? Unless of course you want to impose "Type" or cast it.
No. Based on this and a couple other comments you've made, it appears you have a fundamental misunderstanding of LVOOP. Labview classes are not inherently by-ref. You can create by-ref or singleton classes using LVOOP, but data does not automatically become by-ref just because you've put it in a class. Most of the classes I create are, in fact, by-val and follow all the typical rules of traditional sequential dataflow. By-ref and singleton functionality are added bonuses available for when they are needed to meet the project's requirements.
Maybe I don't.
But I do know "by-val" doesn't mean it's "data-flow" any more than using a "class" means "object oriented". Like you said, it's up to the programmer. It's just that the defaults are different. In classic LabVIEW, the default is implicit state with single instances. In LVOOP it's multiple instances with managed state. Either can be made to do the other. It's just the amount of work to turn one into the other. Well, that's how it seems to a heathen like me
-
Not sure how this will translate if it's plopped in a 64-bit application, if someone cares to test it that'd be great (my guess is WoW64 will take care of everything, but you never know...):
Does what it says on the tin in LV x64
-
There used to be a library somewhere on the dark side that contained them. It was very much like my unicode.llb that I posted years ago, which called the Windows WideCharToMultiByte and friends APIs to do the conversion, but it also had extra VIs that were using those nodes. And for some reason there was no password, even though they usually protect such undocumented functions strictly.
I'll try to see if I can find something either on the fora or somewhere on my HD.
Otherwise, using Scripting possibly together with one of the secret INI keys allows one to create LabVIEW nodes too, and in the list of nodes these two show up too.
I already have my own VIs that convert using the Windows API calls. I was kinda hoping they were more than that. I originally looked at it all when I wrote PassaMak, but decided to release it without Unicode support (using the API calls) to maintain cross-platform compatibility. Additionally I was put off by the hassles with special INI settings, the pain of handling standard ASCII and a rather woolly dependency on code pages - it seemed a one-or-the-other choice and not guaranteed to work in all cases.
As with most of my stuff, I get to re-visit periodically, and I recently started to look again with a view to using UTF-8, which has the capability of identifying ASCII and Unicode chars (regardless of code pages). That should make it fairly bulletproof and boil down to basically inserting bytes (for the ASCII chars) if the INI switch is set and not if it isn't. Well, that's the theory at least, and so far, so good. Although I'm not sure what LV will do with 3- and 4-byte chars and therefore what to do about it. That's the next step when I get time.
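For reference, this is roughly the kind of Windows API round trip those conversion VIs wrap; a minimal, Windows-only C sketch assuming UTF-8 in and out (error handling trimmed):

    /* UTF-8 bytes -> UTF-16 (what the Windows side needs for Unicode display)
     * and back again, via the Windows API calls mentioned above. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const char *utf8 = "caf\xC3\xA9";           /* "café" encoded as UTF-8 */
        wchar_t wide[64];
        char back[64];

        /* UTF-8 -> UTF-16 (the "multibyte to wide" direction) */
        int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, 64);
        if (n == 0)
            return 1;

        /* UTF-16 -> UTF-8 (the "wide to multibyte" direction) */
        n = WideCharToMultiByte(CP_UTF8, 0, wide, -1, back, 64, NULL, NULL);
        if (n == 0)
            return 1;

        printf("round trip: %s\n", back);
        return 0;
    }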
-
Actually it is a bit more complicated (or not) than that. On all 16 bit systems int used to be 16 bit and on 32 bit systems it is 32 bit. So far so good.
For 64 bits things get a bit messy. int here is still always 32 bit (well, for the majority of systems; some more exotic systems actually use 64-bit ints) as detailed here (Specific C-language data models).
The most interesting part, however, is with longs, where 64-bit Linux uses 64 bits while Microsoft Windows chose to use 32-bit longs. Linux is more forgiving of code that casts pointers into longs, while Windows is more forgiving of code that assumes sizeof(long) == sizeof(int). Of course, neither assumption has any place in modern software, but many programmers can sometimes be a bit lazy.
Indeed. I think for the most part it is true, especially for OSs. However, I have met on numerous occasions programmers who prefer to just use "int" in the code (as opposed to int32, int64, long, double word et al. - there are so many) and use a compiler directive when compiling for x64 or x32 or even x16 (therefore converting ALL ints to whatever). It seems particularly prevalent when porting. Like you said, programmers are lazy
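A quick C sketch of the point (the exact sizes printed depend on the data model, which is the point; fixed-width types from stdint.h sidestep the whole argument):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        printf("int      : %zu bytes\n", sizeof(int));
        printf("long     : %zu bytes\n", sizeof(long));      /* 4 on LLP64 Windows, 8 on LP64 Linux */
        printf("void*    : %zu bytes\n", sizeof(void *));
        printf("int32_t  : %zu bytes\n", sizeof(int32_t));   /* always 4 */
        printf("int64_t  : %zu bytes\n", sizeof(int64_t));   /* always 8 */
        printf("intptr_t : %zu bytes\n", sizeof(intptr_t));  /* safe home for a pointer */
        return 0;
    }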
-
You suck up.
Hell hath no fury like a woman scorned (or ignored).
You're limited to extending what operations are done on the data. Using classes I can extend what operations are done on the data AND extend it to different kinds of data.
Yes and no. It depends on what exactly you are talking about. Extending a class by adding methods/properties? Or inheriting from that class and overriding/modifying existing properties and methods? My proposition is that my method is identical to the latter. I assumed you meant the latter since you were speaking in the context of a closed component that would be extended by the user. In that scenario the user only has the inheritance option (there are other differences, but nothing significant I don't think). Or maybe I just missed what you are trying to say. But it's interesting you say that I can extend the operations. I would have argued (was expecting) the opposite, since in theory, although properties are synonymous with "controls", operations seem fixed. But I'll leave that one for you to think about how I might respond, since I appear to be basically arguing against myself
Yep... dynamically loaded vis present a problem. You also have a problem if you have a sub vi that isn't part of the dependency chain--perhaps you temporarily removed it or are creating separate conceptual top-level code that uses the same low level functions. There are lots of ways to shoot yourself in the foot when your workflow depends on vis being loaded into memory. (I have two feet full of holes to prove it.)
Problem? No. It's much more of a problem building an executable with them. There are many more ways to shoot yourself in the foot with OOP. But for the case when you don't have a top level VI there are a couple of "tricks" from the old boys......
Many moons ago there used to be a "VI Tree.vi". You will see it with many drivers and I think it is still part of the requirement for a LV driver (although I haven't checked recently). Is it to show everyone what the VI hierarchy is? Well. Yes. But that's not all. It's also the "replacement" application to ensure all VIs are loaded into memory. However, with the advent of "required" and "optional" terminals, its effectiveness was somewhat diminished since you can no longer detect broken VIs without wiring everything up.
The other method (which I employ) is to create test harnesses for grouped modules (system tests). You will see many of my profferings on LAVA come with quite a few examples. This is because they are a subset of my test harnesses, so they are no extra effort and help people understand how to use them. Every new module gets added to the test harnesses and the test harnesses get added to a "run test" VI. That is run after every few changes (take a look at the examples in the SQLite API). It's not a full factorial test harness (that's done later), but it ensures that all the VIs are loaded in memory and is a quick way to detect major bugs introduced as you go along. Very often they end up being a sub-system in the actual application.
Actually your comment reveals a fundamental difference between our approaches. "Loading the top level application" implies a top-down approach. After all, you can't load the top level vi during development if you don't have one. I tried that for a while but, for many reasons, abandoned it. I have had much more success building functional modules with fairly generic interfaces from the bottom up and assembling the top level application from the components.
LabVIEW is well suited to top-down design due to its hierarchical nature. Additionally, top-down design is well suited to design by specification decomposition. Drivers, on the other hand, lend themselves to bottom-up. However, for top-down I find that as you get further down the tree, you get more and more functional replication where many functions are similar (but not quite identical), and that is not conducive to re-use and modularisation (within the project). I use a "diamond" approach (probably not an official one, but it describes the resulting architecture) which combines top-down AND bottom-up, which (I find) exposes "nodes" (the tips of the diamonds) that are ripe for modularisation and provide segmentation (vertically and horizontally) for inter-process comms.
My comments are directed at typedeffed clusters. I'm still on the fence with typedeffed enums in a public interface. I can see holes where changes might cause things to break, but I haven't explored them enough yet.
Is this because you have only drawn a relationship between typedef'd clusters and a class's data control? What about an enumeration as a method?
Sure thing. Grab the attached project (LV2010) and follow the steps in Instructions.txt. The example should only take a couple minutes to work through. Once you do that come back and continue reading.
Ahhhh. IC. Here is your example "fixed"
Everything is behaving as it should. But you are assuming that the data you are supplying is linked to the container. It isn't; therefore you are (in fact) supplying it with the wrong data rather than the bundle/unbundle selecting the wrong cluster data. It's no wonder I've never seen it. It's a "type definition" not a "data definition".
"Single point maintenance" of typedeffed clusters is an illusion.
That's why I use the phrase "automagically"
Absolutely it is. My point was that even if a class was nothing more than a protected typedef, there are enough advantages just in that aspect of it to ditch typedeffed clusters and use classes instead. Don't underestimate the value of *knowing* a change won't negatively impact other areas of code. Some may consider adequate testing the proper way to deal with the bugs my example illustrates. I prefer to design my code so the bugs don't get into the code in the first place. (I call it 'debugging by design,' or alternatively, 'prebugging.')
Disagree.
There has to be much, much more to justify the complexity switch and the ditching of data-flow. I do know when a change will impact other areas, because my designs are generally modularised and therefore contained within a specific segment rather than passed (transparently) through umpteen objects that may or may not exist at any one point in time.
The object is instantiated with default values as soon as you drop the class cube on the block diagram, just like a cluster. What do you do if you want multiple instances of a cluster? Drop another one. What do you do if you want multiple instances of a class? Drop another one. What does a class have inside its private ctl? A cluster. How do you access private data in a class method? Using the bundle/unbundle prims. At its core a class is a cluster with some additional protection (restricted access to data) and features (dynamic dispatching) added to it.
Ermm. Nope. You don't have multiple instances of a cluster. We are in the "data-driven" world here. A cluster is just a way of viewing or segmenting the data. It's the data that's important, not the container or the access method. Yes, a class has a cluster as the data member. But that's more to do with realising OOP in LabVIEW than anything else. If anything, the similarity is between a data member and a local variable that is protected by accessors.
Nope, I mean immutable objects. Constant values are defined at edit-time. The values of immutable objects are defined at run-time. I might have many instances of the same class, each with different values, each of which, once instantiated by the RTE is forever after immutable and cannot be changed.
Ahh. I'm with you now. Sounds complicated
I prefer files
That reminds me of the "Programmers Quick Guide To the Languages'" entry for C++ (I've posted it on here before)
YOUR PROGRAMMING TASK: To shoot yourself in the foot.
C++: You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying, "That's me, over there."
I can't wait...
I'm sure
-
Likely because they make use of the undocumented UTF-16 nodes that have been in LabVIEW since about 8.6. And these nodes are likely undocumented because NI is still trying to figure out how to expose that functionality to the LabVIEW programmer without bothering him with underlying Unicode difficulties, including but certainly not limited to UTF-16 on Windows vs. UTF-32 on anything else (except those platforms like embedded RT targets where UTF support usually is not even present, which is an extra stumbling block to making generic UTF LabVIEW nodes). Of course they could include the IBM ICU library or something along that line, but that is a noticeable extra size for an embedded system.
Ooooh. where are they?
It all depends what you consider as "proper". Those nodes will likely make it into one of the next LabVIEW versions. However to support Unicode in every place including the user interface (note LabVIEW supports proper multibyte encoding already there) will be likely an exercise with many pitfalls, resulting in an experience that will not work right in the first few versions, and might even cause troubles in non unicode use cases (which is likely the main reason they haven't really pushed for it yet). Imagine your normal UI's suddenly starting to misbehave because the unicode support messed something up, and yes that is a likely scenario, since international character encoding with multibyte and unicode is such a messy thing.
Indeed. I think most people (including myself) generally think that Unicode support = any language support, although it's a bit of a leap. If the goal is simply to make multilingual LabVIEW interfaces then Unicode can be ignored completely in favour of UTF-8, which isn't code-page dependent (I've been playing with this recently and wrote my own to detect and convert to the LV Unicode so you don't get all the spaces). This would mean the old programs would still function correctly (in theory I think, but still playing).
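For what it's worth, the detection part boils down to something like this hedged C sketch: classify a byte string as pure ASCII, multi-byte UTF-8 or invalid, and only convert for display when it actually contains multi-byte sequences (overlong encodings are not checked here):

    #include <stdbool.h>
    #include <stddef.h>

    typedef enum { STR_ASCII, STR_UTF8, STR_INVALID } str_kind;

    str_kind classify_utf8(const unsigned char *s, size_t len)
    {
        bool multibyte = false;
        size_t i = 0;
        while (i < len) {
            unsigned char c = s[i];
            size_t extra;
            if (c < 0x80) { i++; continue; }                   /* plain ASCII byte   */
            else if ((c & 0xE0) == 0xC0) extra = 1;            /* 2-byte sequence    */
            else if ((c & 0xF0) == 0xE0) extra = 2;            /* 3-byte sequence    */
            else if ((c & 0xF8) == 0xF0) extra = 3;            /* 4-byte sequence    */
            else return STR_INVALID;
            if (i + extra >= len) return STR_INVALID;          /* truncated sequence */
            for (size_t j = 1; j <= extra; j++)
                if ((s[i + j] & 0xC0) != 0x80) return STR_INVALID; /* bad continuation */
            multibyte = true;
            i += extra + 1;
        }
        return multibyte ? STR_UTF8 : STR_ASCII;
    }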
-
Does anybody have another idea ???
Here ya go
-
There is a property node to check the current VI:
Just a note.
This property node only tells you if the VI is in front of other VIs.
If you click on (say) your web browser, it will still say it is front most.
-
Lots of excellent points here. I'll break them up into different posts since it's getting rather tedious reading my OWN posts in 1 go
Wow... lots of comments and limited time. (It's my wife's bday today; can't ignore her and surf Lava too much.)
21 again?
Happy b'day Mrs Daklu
It doesn't make it not reusable. Rather, it limits your ability to reuse it. A good reusable component doesn't just wrap up a bit of functionality for developers. It gives them extension points to add their own customizations without requiring them to edit the component's source code. When a component exposes a typedef as part of its public api it closes off a potentially beneficial extension point.
I don't think this is so. To extend a component that uses a typedef, it is just a matter of selecting "create sub-vi" and then "create constant" or "create control/indicator". Then the new VI inherits all the original component's functionality and you are free to add more if you desire (or hide it).
My bigger issue with typedefs is that it makes it harder to refactor code during development, which I do a lot. I know typedefs are the golden boy of traditional Labview programmers, due (I think) to their ability to propagate changes through the project. Here's the rub... changes propagate only if the VIs that use the typedef are loaded into memory. How do you know if everything that depends on that typedef is loaded in memory? Unless you are restricting where the typedef can be used (by making it a private member of a library, for instance) you don't.
Well. You do. In classical LabVIEW, loading the top-level application loads ALL VIs into memory (unless they are dynamically loaded).
"But," you say, "the next time a vi that depends on the typedef is loaded it will link to the typedef and all will be well." Maybe it will, maybe it won't. Have you added new data to the typedef? Renamed or reordered elements to improve clarity? Sometimes your edits will cause the bundle/unbundle nodes to access the wrong element type, which results in a broken wire. So far so good. However, sometimes the bundle/unbundle node will access the wrong element of the same type as the original, in which case there's nothing to indicate to you, the developer, that this has happened. (Yes, this does happen even with bundle/unbundle by name.) You have to verify that it didn't happen by testing or by inspection.
A number of points here:
1. Adding data (a control?) to a typedef cluster won't break anything (only clusters use bundle and unbundle). All previous functionality is preserved, but the new data will not be used until you write some code to do it. The proviso here (as you say) is to use "bundle/unbundle by name" (see point #3) and not straight bundling or array-to-cluster functions (which have a fixed number of outputs). The classic use, however, is a typedef'd enumerated control which can be used by various case structures to switch operations and is impervious to re-ordering or renaming of the enum contents.
2. Renaming may or may not break things (as you state). If it's a renamed enumeration, string, boolean etc. (or base type as I call them), then nothing changes. If it's an element in a cluster, then it will.
3. I've never seen a case (nor can I see how) where an "unbundle/bundle by name" has ever chosen the wrong element in a typedef'd cluster or indeed a normal cluster (I presume you are talking about clusters, because any control can be typedef'd). A straight unbundle/bundle I can understand (they are index based), but that's nothing to do with typedefs (I never use them, a) because of this and b) because by-name improves readability; see the analogy sketched below). An example perhaps?
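Here is the loose C analogy mentioned in point 3 (it is only an analogy; LabVIEW clusters are not C structs): positional initialisation breaks silently if the member order changes, just like a straight index-based bundle, whereas designated initialisers survive reordering, just like bundle-by-name.

    #include <stdio.h>

    struct person {
        const char *first_name;   /* swap these two members and the positional */
        const char *last_name;    /* initialiser below silently swaps the data */
    };

    int main(void)
    {
        struct person by_index = { "John", "Smith" };                    /* "bundle"         */
        struct person by_name  = { .first_name = "John",
                                   .last_name  = "Smith" };              /* "bundle by name" */

        printf("%s %s\n", by_index.first_name, by_index.last_name);
        printf("%s %s\n", by_name.first_name,  by_name.last_name);
        return 0;
    }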
Classes are superior to typedefs for this reason alone. If I rename an accessor method (the equivalent to renaming a typedef element) I don't have to worry that somewhere in my code Labview might substitute a different accessor method with the same type output. If it can't find exactly what it's looking for I get a missing vi error. It might take me 2 minutes to create a class that essentially wraps a typedeffed cluster, but I save loads of time not having to verify all the bundle/unbundle nodes are still correct.
I think a class is a bit more than just a "super" typedef. In fact, I don't see them as the same at all. A typedef is just a control that has this special ability to propagate its changes application-wide. A class is a "template" (it doesn't exist until it is instantiated) for a module (a nugget of code, if you like). If you do see classes and typedefs as synonymous, then that's actually a lot of work for very little gain. Each new addition to a cluster (class's data member?) would require 2 new VIs (methods). 10 elements, 20 VIs.
Contrast this with adding a new element to a type-def'd cluster. No new VIs. 10 elements, 1 control (remember my single-point maintenance comment?).
Another thing I've been doing lately is using classes to create immutable objects. I give the class a Create method with input terminals for all the data and appropriate Get accessors. There are no Set accessors. Once the object is created I can pass it around freely without worrying that some other process might change a value. This saves me time because I never even have to wonder what happens to the data, much less trace through the code to see who does what to it.
In short, using classes instead of typedefs gives me, as a developer, far, far, more confidence that changes I'm making aren't having negative effects elsewhere in my code. That translates directly into less time analyzing code before making a change and testing code after the change.
Immutable objects? You mean a "constant" right?
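For anyone following along, the "immutable object" idea being described is roughly this (a hedged C sketch with hypothetical names): one create function sets all the data, getters read it back, and there are no setters, so nothing downstream can change the values.

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char   name[32];
        double limit;
    } Settings;                       /* definition hidden behind the API in practice */

    Settings *settings_create(const char *name, double limit)
    {
        Settings *s = malloc(sizeof *s);
        if (!s)
            return NULL;
        strncpy(s->name, name, sizeof s->name - 1);
        s->name[sizeof s->name - 1] = '\0';
        s->limit = limit;
        return s;                     /* values fixed for the lifetime of the object */
    }

    const char *settings_name(const Settings *s)  { return s->name;  }
    double      settings_limit(const Settings *s) { return s->limit; }
    void        settings_destroy(Settings *s)     { free(s); }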
Doesn't mean there aren't better alternatives. Sending messages by telegraph was a well-established technique for many years. Can I expect your response to come via Western Union?
It'd probably be better than my internet connection recently.
The other night I had 20 disconnects
More to follow....
-
Looks exactly like the kind of thing I'm looking for. I'll give it a whirl.
Let me know how you get on. I've been meaning to re-visit it, but it does everything I need it to at the moment so I couldn't find an excuse.
-
I've done something similar in the past. I treat the file as a whole as ASCII, but the value of each of the keys may be ASCII or Unicode. Then in LV I interpret the key values accordingly, converting them from a string to a Unicode string if necessary. I've posted a bunch of Unicode tools including file I/O examples on NI Community, that may help you out.
Why are they password protected?
Will we be seeing proper Unicode support soon?
-
My main application controls various tests in 18 different test cells. From my "Main" vi I call up a test cell, test type, parameters, etc. The test grabs a template vi, loads the parameters and this template copy becomes the display and control of that particular test. To reduce screen clutter users can close the running test front panel but can re-open it if they want to check the status of the test later.
Problem: a couple of the managers want to monitor these tests from their desks. That does not sound like much of a problem except I want the managers to see the status only and not have any of the functionality of the test interface.
What I want: I thought you could call a VI by its reference and look at its front panel remotely. No functioning, just display the front panel. But I can't seem to find exactly how to do this without getting into remote front panels, which seem like overkill for what I am trying to do.
Suggestions?
This is the sort of thing Dispatcher was designed for.
Each cell would have a dispatcher and the manager simply subscribes to the test cell that he wishes to see. You can either just send an image of the FP for a single cell, and/or write a little bit of code to amalgamate the bits they are interested in from multiple cells to make process overviews. Of course, it would require the LV run-time on the manager's machine.
-
I haven't tried it but there might be another way of going about it with the import and export string commands. There's an app reference invoke node that can batch all the different VIs together into one text file.
It works quite well for FP stuff. But you can't localise dialogues (for any app and error messages).
-
ok, i'd been working my ass brain out since like 2 weeks ago to build this .VI.... and... welll.... i don't like it...
:(
do you know where can i find nicer pictures
...
(even if they are not made by me) ... to build this kind of animation...
i used Corel Draw... but that's supposed to be a pump... and it seems like an "escafandra" (a diving helmet)
I would spend more time on the VI functionality if I were you. Pretty pictures won't make it work any better.
-
Mmm, thanks for that. I was hoping to have an ini file with the required contents of string controls in different languages, like this:
[ctrlOne]
jp=種類の音を与えます
en=hello
It becomes quite tricky it seems... I'll keep trying & post back if I figure it out. I do have the modification to the labview .ini file that lets me view unicode characters in string controls so it is searching/extracting from the file that is the tricky part now.
Where there's a will there's a way and all that...
You are better off having separate files for each language. Then you can have an ASCII key (which you can search for) and place the real string into the control. It also enables easy switching just by specifying a file name.
eg.
msg_1=種類の音を与えます
You could take a look at Passa Mak. It doesn't support Unicode, but it solves a lot of the problems you will come across.
Alternatively you could use a database so that you can use SQL queries instead of string functions to get your strings (it's LabVIEW's string functions/parsing that cause most of the problems).
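A minimal sketch of the per-language file lookup suggested above, written in C with hypothetical names (the file format assumed is one "msg_n=value" entry per line; real code would also need to deal with the file's encoding/BOM for the Unicode values):

    #include <stdio.h>
    #include <string.h>

    /* Finds 'key' in a "key=value" file and copies the value into 'out'.
     * Returns 0 on success, -1 if the key is missing or the file won't open. */
    int lookup_string(const char *lang_file, const char *key, char *out, size_t out_len)
    {
        FILE *fp = fopen(lang_file, "rb");
        if (!fp)
            return -1;

        char line[512];
        size_t key_len = strlen(key);
        int found = -1;
        while (fgets(line, sizeof line, fp)) {
            if (strncmp(line, key, key_len) == 0 && line[key_len] == '=') {
                strncpy(out, line + key_len + 1, out_len - 1);
                out[out_len - 1] = '\0';
                out[strcspn(out, "\r\n")] = '\0';   /* strip the line ending */
                found = 0;
                break;
            }
        }
        fclose(fp);
        return found;
    }

    /* Usage: lookup_string("strings_jp.txt", "msg_1", buf, sizeof buf); */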
-
Hi there,
I'm having exactly the same problem as this - I have created a .ini file and saved it in notepad as unicode format.
When I try and read the file using the config file vis the section and key names aren't found. I've tried with and without the use of the str-utf16 vis that are included here. I can read a 'normal' ascii ini file fine.
Has anyone experienced anything similar?
Thanks,
Martin
The config file VIs do not support Unicode. They use ASCII operations internally for comparison (no string operations in LV currently support Unicode).
You will have to read it as a standard text file, then convert it with the tools above back to ASCII.
-
What would be nice is for the LV project manager to be able to handle (nest?) multiple projects (the way many other IDEs do). Then we could make a project for each target and add them to a main project. (I know we sort of have this with lvlibs, but it's not quite the same thing.) Once we had that, each sub-project would just be a branch off the main SVN (or Mercurial if you like) trunk (main project) and we could work on each target in isolation if we wanted to.
Unless of course we already can and I haven't figured it out yet.
-
The Implode 1D Array separates the field/value pairs. The value would be the datatype, isn't it? For example, integer, real etc.
So, how should I input these values? If I want the datatype to be real and the field name time, should I say "time/real" or "time real"?
Please advise.
Thanks,
Subhasis
The Implode 1D Array implodes (or concatenates, rather than separates) each value in the array to a quoted string. The value is, well, the value (e.g. 3.1). The field is the field name (e.g. Time).
If you want to set an affinity for a field (REAL, INTEGER, TEXT et al.), that is achieved when you create the fields with "Create Table". Anything can be written to any field type and SQLite will convert it to the defined affinity for storage. The API always writes and reads as a string (it uses a string basically like a variant), but SQLite converts it automagically.
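To illustrate the affinity behaviour outside LabVIEW, here is a small sketch using the SQLite C API directly (table and column names are illustrative): everything is bound as text, but the REAL column affinity declared in CREATE TABLE means the value is stored and read back as a number.

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK)
            return 1;

        /* Affinity is declared here (TEXT for Field, REAL for Value). */
        sqlite3_exec(db, "CREATE TABLE log (Field TEXT, Value REAL);", NULL, NULL, NULL);

        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "INSERT INTO log VALUES (?, ?);", -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, "Time", -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, "3.1", -1, SQLITE_STATIC);   /* bound as a string... */
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);

        /* ...but stored (and read back) as a REAL because of the column affinity. */
        sqlite3_prepare_v2(db, "SELECT Value, typeof(Value) FROM log;", -1, &stmt, NULL);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("value=%f stored as %s\n",
                   sqlite3_column_double(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }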
-
OK, you clearly don't work where I work
We've got no end of people around here that use all of those languages as well as LabVIEW and consider themselves programmers - this is especially true of the researchers (PhDs in many scientific disciplines). But many have no idea how to architect code (notice I avoid saying most, since I can't provide hard data) and no matter the language, they write spaghetti code. And there are more than a few around here whose job is to architect and develop code in LabVIEW - and we're trained in many disciplines but we all have comp sci education as well. But in the end, what matters to our customers is "do our test and measurement systems work", and that's why we have to recruit people for our team with varied backgrounds (heck, my undergrad was in ME), because it's not enough to understand code development, you have to understand the problem you're trying to solve.
Mark
This is what I consider "most" LabVIEW programmers (regardless of traditional or not) to be, and the analogy I've used before is between "pure" mathematicians and "applied" mathematicians. Pure mathematicians are more interested in the elegance of the maths and its conceptual aspects, as opposed to "applied" mathematicians who are more interested in how it relates to real-world application. Is one a better mathematician than the other? I think not. It's purely an emphasis. Both need to have an intrinsic understanding of the maths. I think most LabVIEW programmers, by the very nature of the programs they write and the suitability of the language to those programs, are "applied" programmers, but that doesn't mean they don't have an intrinsic understanding of programming or indeed how to architect it.
Like most LabVIEWers I started out in the world using Traditional LabVIEW techniques and design patterns, e.g. as taught in NI courses etc... Of course, I implemented these rather poorly, and had a limited understanding at the time (hey - I was learning after all!). After a while I discovered LVOOP, and above all, encapsulation saved my apps (I cannot overstate this enough). I then threw myself into the challenge of using LVOOP exclusively, without fail, on every project - for every implementation. This was great in terms of a short learning curve, but what I discovered was that I was creating very complex interactions for every program.
(Whilst I quickly admit I am not full bottle on OOP design patterns) I found these implementations were very time consuming. I also saw colleagues put together projects much faster than I could Traditionally, and they were achieving similar results (although IMHO using LVOOP is much easier to make simple changes and test), but I wanted to weigh up the time involved and answer the question ...could I do it better?
Pre-8.2 (aside from RT, where we could only start using classes in 2009), people (some very smart ones at that - who have been around for ages in the LabVIEW community) were solving problems without LVOOP, successfully. This led me to recently undergo a reassessment of my approach. My aim was to look at the Traditional techniques, now having a better understanding of them (and LabVIEW in general), and reintegrate them with what I was doing in LVOOP etc... - and I am having success (and more importantly fun!).
Damn, I have even started to find I like globals.
Anyways, at the end of the day I find using what works and not trying to make something fit is the best and the most flexible approach. With the aim of becoming a better programmer, I hope I continue this iterative approach to my learning (and of course these means I want to keep learning about LVOOP and OOP too as part of this too).
JG says enjoy the best of both worlds!
Nice, pragmatic and modest post.
I think many people are coming to this sort of conclusion in the wake of the original hype. As indeed happened to OOP in C++ more than 10 years ago. It's actually very rare to see a pure OOP application in any language. Most people (from my experience) go for encapsulation then use structured techniques.
I think the discussion has been interesting. My opinion is that encapsulation is what matters most in increasing maintainability and reuse and reducing bugs. Keeping as many routines private as you can, and minimizing the interface (the number and complexity of the public routines) is the goal.
The LV library (lvlib), and its cousin the lvclass, have been a big help to the language in this regard, despite other annoyances. I think Shaun has some valid criticisms about the need to maintain more state when using an lvclass, so I only use them when I need dynamic dispatching. Another problem with classes (objects) is the difficulty of adding a new dynamic dispatch method to a bunch of existing classes.
I find I get more work done if I can rapidly prototype and iterate to find a good design, but lvclasses encourage more upfront planning and careful architecture because reworking is pretty painful. This need to plan ahead encourages the waterfall development model, which everyone loves to hate.
Jason
You've actually hit on the one thing "traditional" LabVIEW cannot do (or simulate): run-time polymorphism (enabled by dynamic dispatch). However, there are very few cases where it is required (or desirable) unless, of course, you are using LVOOP. Then it's a must-have to circumvent LabVIEW's requirement for design-time strict typing (another example of breaking LabVIEW's in-built features to enable LVOOP). Well, that's how it seems to me at least. There may be some other reason, but in other languages you don't have "dynamic dispatch" type function arguments.
But aside from that, I never use waterfall (for software at least). I find an iterative approach (or "agile" as it is called nowadays) much more useful and manageable. Sure, the whole project (including hardware, mechanics, etc.) will be waterfall (it's easier for management to see progress and you have to wait for materials), but within that, at the macro level, the software will be iterative with release milestones in the waterfall. As a result, the software is much quicker to react to changes than the other disciplines, which means that the software is never a critical path until the end-of-project test phase (systems testing - you can't test against all the hardware until they've actually built it). At that point, however, the iterative cycle just gets faster with smaller changes since by that time you are (should be) reacting to purely hardware integration problems, so it's fairly straightforward to predict.
-
Indeed, they "behave" correctly, as indeed my procedure yielded, for the reasons I argued previously about containers. But they aren't immune, as I think you are suggesting here (remember class hell?).
Guess again
Yes.
That is effectively what you are doing when you use a Tree.vi. In fact, I would prefer that all VIs (and dependents) included in a project are loaded when the project is loaded (I don't really see the difference between the "class" editor and the "project" editor, and the class editor loads everything I think...maybe wrong). Of course this would be a lot less painful for many if you could "nest" projects.
Intent is irrelevant if the behaviour is consistent (as I was saying before about containers). Although I hadn't spotted the particular scenario in the example, treating a typedef'd cluster as just a container will yield the correct behaviour (note I'm saying behaviour here since both classes and typedef'd clusters can yield incorrect diagrams) as long as either
1. ALL VIs are in memory.
OR
2. ALL VIs are not in memory.
It's only that in your procedure some are and some aren't that you get a mismatch.
Well, there is already a suggestion on the NI black hole site. To drop the simplicity of typedefs for a different paradigm I think is a bit severe, and in these sorts of issues I like to take the stance of my customers (it's an issue....fix it). But even that suggestion isn't bullet-proof. What happens if you rename a class's VI?
I think it is probably due to statements where you appear to assume that classic LabVIEW is highly coupled just because it's not OOP (I too was going to make a comment about this, but got bogged down in the typedef details).
I don't think he's against anyone. Just picking up on the classic LabVIEW = highly coupled comments.
One thing I've noticed with comments from other people (I'm impressed at their stamina) is that most aren't writing OOP applications. I've already commented on encapsulation several times, and this seems to be its main use. If that is all it's used for, then it's a bit of a waste (they could have upgraded the event structure instead). I wonder if we could do a poll?
I'm right behind you on this one. One thing about software is that pretty much anything is possible given enough time and resource. But to give NI their due, perhaps the "old timers" (like me) just haven't been as vocal as the OOP community. Couple that with (I believe) some NI internal heavyweights bludgeoning OOP forward, and I think a few people are feeling a little bit "left out". Maybe it's time to allocate a bit more resource back into core LabVIEW features that everyone uses.