
Posts posted by ShaunR

  1. Great stuff. thumbup1.gif Let's see if Daklu is prepared to expand a little on it now you've shown the way wink.gif

    D'ya know what? I agree. Of course, some people seemed to think that's not necessarily a good thing. ;)

    I know biggrin.gif You never know. Maybe I'll get my arse kicked and version 2 will be a class with you named as a major contributor laugh.gif

    So, here you go. I did some one-handed bed coding and here's a basic mod of the transport library into LVOOP (2009).

    Some relevant comments:

    1. The transport example shows off the OOP advantage mainly through the inheritance property of OOP. This does not seem to be what your discussion was about, although the example does also reflect on your discussion.

    The discussion was originally going to be much broader. But we got bogged down on a specific facet.

    2. I didn't follow your API exactly. For example, your API leaked the TCP read mode enum out to the outer layer, where it's irrelevant. I didn't create a relevant equivalent in the API, since you didn't use it, but an accessor could be created for it and called before calling the Read VI.

    The mode is relevant since the CRLF and buffered modes are very useful and vastly affect the behaviour. But for the purpose of this discussion it's irrelevant.

    3. I only implemented TCP and UDP.

    OK. They are 2 of the most popular.

    4. I recreated the server and client examples (they appear under the classes in the project).

    Yup. They work fine.

    5. You'll note that the inputs are now safer and easier to use (e.g. local port in the Open VI is now a U16 and does not mention BT).

    Yup.

    6. I changed the access scope on your VIs so I could use them without having to make a copy.

    Naturally.

    7. The VIs themselves are also simpler (see the UDP Write VI, for instance. In your VI, it takes wires from all kinds of places. In my VI, it's cleaner and clearly labeled).

    And some are just wrappers around my VIs wink.gif I, of course, have to make a decision: do I put certain "messy" ones in a sub-VI just to make the diagram cleaner or not (maybe I should have)? You don't have that decision, since you have to create a new VI anyway.

    8. Whenever you make a change (e.g. add a protocol), you have to recompile the code which is called by your various programs (assuming you reuse it) and you have no way of guaranteeing that you didn't break anything. With the classes version, you don't have to touch the existing classes. Code which is not recompiled does not need to be verified again.

    Good point.

    9. I didn't like all the choices you made (such as using a string to represent either a port or a service), but I kept some of them because I was not planning on doing a whole refactoring.

    What would you have done instead?

    10. Also, you should note that my implementation is far from complete. Ideally, each class would also have more private data (such as which ports were used) and accessors, and do things like input validation and some error handling, but I only created the most basic structure.

    Of course. I'm not expecting a complete refactor. In fact, it's probably to our advantage that there is only a partial refactor, since it mimics a seat-of-yer-pants project. That way, as the new design evolves we will be able to see what issues come up, what decisions we make to overcome them and, indeed, what sacrifices we make (you have already made one wink.gif).

    To expand a bit on points 8 and 9 - You mentioned adding serial. That's a good example. What would happen if you now need to add serial, USB and custom-DLL-X to your protocols? You would have to touch all the existing VIs. You will be asked to recompile every single caller of the library (although with 2010 this is finally less of an issue). You would need to overload your string inputs to include even more meanings than they already do, etc. Contrast that with creating another child class and creating overrides for the relevant VIs - it becomes simpler. Also, with classes you can guarantee that no one will change the existing functionality by locking it with a password.

    Serial was mentioned for a very good reason. Can you think of why it isn't included? After all, it covers most other bases.
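    To put the inheritance argument in text-language terms, here is a rough C++ sketch (the names Transport, TcpTransport and so on are invented for illustration and are not the actual classes in either library): adding a protocol later means adding a new child class rather than editing anything that already works.

        #include <cstddef>
        #include <cstdio>
        #include <string>

        // Hypothetical parent class: the equivalent of the Transport parent.
        class Transport {
        public:
            virtual ~Transport() = default;
            virtual void Write(const std::string& data) = 0;
            virtual std::string Read(std::size_t bytes) = 0;
        };

        // Existing children - these never need to be touched again.
        class TcpTransport : public Transport {
        public:
            void Write(const std::string& data) override { std::printf("TCP write: %s\n", data.c_str()); }
            std::string Read(std::size_t) override { return "tcp data"; }
        };

        class UdpTransport : public Transport {
        public:
            void Write(const std::string& data) override { std::printf("UDP write: %s\n", data.c_str()); }
            std::string Read(std::size_t) override { return "udp data"; }
        };

        // Adding serial later is a new class in a new file; existing code is untouched.
        class SerialTransport : public Transport {
        public:
            void Write(const std::string& data) override { std::printf("Serial write: %s\n", data.c_str()); }
            std::string Read(std::size_t) override { return "serial data"; }
        };

        int main() {
            SerialTransport serial;
            Transport& t = serial;   // callers only know the parent interface
            t.Write("hello");
        }

    Callers written against the parent interface don't get recompiled (or re-verified) when a new child appears, which is point 8 in a nutshell.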

    For point 9, I would have preferred a poly VI - one for service name and one for port. Internally, they can call a single utility VI to minimize code duplication, but for the API, you should have it as easy as possible. Currently, you place the burden of formatting the data correctly on the user of the library instead of constraining them where applicable. One example where your code will simply fail is if there's a service name which begins with a number. I have no idea if that's even possible, but if it is, your code will assume it's a port number and fail. This isn't actually an advantage of LVOOP (you could have created a poly VI using your library as well), but it would be easier in LVOOP.

    Yup. I was a bit lazy on that. I could have checked a bit harder. We'll call that a bug laugh.gif
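    For the curious, the failure mode Yair describes is easy to reproduce in any language. A minimal sketch (C++, with a hypothetical ParsePortOrService helper that is not taken from the Transport library): a service name that happens to start with a digit, such as "3com-tsmux", gets swallowed by a "starts with a digit means port" rule.

        #include <cctype>
        #include <iostream>
        #include <string>

        // Naive rule: "if it starts with a digit, treat it as a port number".
        // A service name such as "3com-tsmux" would be misread as port 3.
        bool ParsePortOrService(const std::string& in, int& port, std::string& service) {
            if (!in.empty() && std::isdigit(static_cast<unsigned char>(in[0]))) {
                port = std::stoi(in);   // stops at the first non-digit: "3com-tsmux" -> 3
                return true;            // the caller thinks it got a valid port
            }
            service = in;
            return false;
        }

        int main() {
            int port = 0;
            std::string service;
            bool isPort = ParsePortOrService("3com-tsmux", port, service);
            std::cout << (isPort ? "port " + std::to_string(port) : "service " + service) << '\n';
        }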

    As you said, you could probably now compare the two by setting a series of tasks, such as adding more protocols or adding features, such as adding logging to every write and read.

    Actually, logging is a pretty good example. For you, adding logging is fairly simple - you add a path input to R/W VIs or you add it into the cluster. In the LVOOP code, adding an input to the R/W VIs would require touching all the classes, so I would probably do this by creating an accessor in the Transport class and simply calling it before the R/W VIs. This is one specific example where making a change to a class can be a bit more cumbersome, but it's probably worth it to avoid the other headaches.
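    Yair's accessor idea translates roughly to this (again a C++ sketch with made-up names, not the actual LapDog or Transport code): the log path lives in the parent class, so adding logging never touches the protocol children.

        #include <fstream>
        #include <string>

        class Transport {
        public:
            virtual ~Transport() = default;

            // New accessor added to the parent only; the child classes are untouched.
            void SetLogPath(const std::string& path) { logPath_ = path; }

            void Write(const std::string& data) {
                Log("WRITE", data);
                DoWrite(data);          // protocol-specific behaviour stays in the child
            }

        protected:
            virtual void DoWrite(const std::string& data) = 0;

        private:
            void Log(const std::string& tag, const std::string& data) {
                if (logPath_.empty()) return;
                std::ofstream log(logPath_, std::ios::app);
                log << tag << '\t' << data << '\n';
            }
            std::string logPath_;
        };

        class UdpTransport : public Transport {
        protected:
            void DoWrite(const std::string&) override { /* send the datagram here */ }
        };

        int main() {
            UdpTransport t;
            t.SetLogPath("transport.log");
            t.Write("hello");
        }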

    You actually have some advantage in this challenge in that you know and use the code, whereas I simply did some quick cowboy coding to adapt it, as opposed to planning the API beforehand. I also didn't do some needed stuff (see point 10). That said, the OOP version should still probably hold up, although I wouldn't unleash it as-is. Also, I doubt I will have more time to do actual coding, but I think this base is enough for some thought experiments.

    I should also point out that I'm more in your camp regarding the kinds of projects I work on, but that I generally agree more with Daklu in this thread.

    OK. I won't comment on your examples just now. Let's wait and see if Daklu is prepared to put some effort in.

    Indeed. I probably do know a little bit more since I've been through the design from start to finish. However, it was only 2 days from cigarette packet to release candidate, so probably not much more. The only difficulty was UDP; everything else is pretty much a wrapper around in-built LV functions.

  2. I can't commit to anything right now. It's the busy season at work, Christmas is upon us, my wife is recovering from major knee surgery, LapDog is behind schedule, I have presentations to prepare for the local users group, etc. Besides, I honestly do not see the point. Classes clearly provide me with a lot of added value while you get little benefit from them. What would be the goal of the exercise?

    Granted. It is a difficult time of year. The new year would be fine when things are less hectic.

    The goal? To highlight the advantages and disadvantages of one paradigm over the other with a real-world practical example instead of esoteric rhetoric. Your Father may be bigger than my Father. Let's get the tape measure out yes.gif

    Additionally, comparing procedural code to OO code using a pre-defined, fixed specification will always make the procedural code look better. The abstraction of classes is a needless complication. The benefit of good OO code is in its ability to respond to changing requirements with less effort.

    This I don't understand. You should always have a spec (otherwise how do you know what to create?). It's not fixed (what about adding serial?), only the interface is fixed (which you should understand since you are creating re-use libraries).

    In fact I chose it because it is particularly suitable for classes and IS a re-use component. It is very simple, well defined, obviously possible (since it already exists) and if it takes you more than a day, I'd be very surprised. You talked previously about HW abstraction. Well, here it is. You talk about re-use; it's used in Dispatcher and OPP Push File. It ticks all the boxes that you say OOP is good at, so I think it would be a good candidate for comparison.

    At the end, your LapDog thingy should work over TCP/IP, UDP, IR and Bluetooth as well. Wouldn't that be nice?

    If you think OOP is just for changing requirements, then you have clearly not understood it. wink.gif

  3. Renaming a class vi has no impact on mutation history.

    Well, I think from that little lot that it's pretty obvious we've reached an impasse on that topic and perhaps it's time to expand the scope of the thread so at least it has some technical content again. But I will finish off by drawing your attention to the LV help, because it was obviously something you were not aware of.

    If you rename a class, LabVIEW considers it a new class, deletes the mutation history of the class, and resets the version number to 1.0.0.0.

    So. OOP is great. It fixes all the ills in the world. It increases re-use so you only need 1 program to do anything anyone will ever want. So might I make a practical suggestion?

    There is a very simple library in the Code Repository that would be very easy to convert to OOP and actually (I think) is ideally suited to it (we don't want anything too hard, eh? tongue.gif ). Why not re-write it OOP stylee and we can use it as a basis for comparison between OOP and "Traditional" LabVIEW? Then we can plug it in to the other things in the CR that also use it and see what (if any) issues we come across in integration.

    Does that sound like something you can commit to?

  4. I realize you maybe were being smug owing to the window title, but that does indeed look like it could be LabVIEW 3, back when it was Mac only. Things were...different back then.

    Indeed. Sorry. Couldn't resist rolleyes.gif

    I could never figure out why people went for these crazy colour schemes. Then I worked for a defence contractor where there was a specification for software user interface colours. When I pointed out that it was a colour scheme defined for "cockpit" software, because of the way colours were perceived through tinted visors, and that VDU operators were unlikely to be using visors, they said "Oh yeah" and carried on regardless biggrin.gif

  5. I'm not a fan of the Image type for the vision stuff either. I keep getting caught out even though I know how it works.

    A lot of the vision stuff I do requires acquiring an image, then creating various masks and applying them (often one after the other or in various combinations). The UI, though, normally requires showing the original images and the results of the various stages of mask application, so you end up copying everywhere so as not to overwrite the originals or the intermediate results of a mask. It gets very messy wacko.gif

    But my pet hate is that you cannot wire a VISA refnum to an event case like you can with DAQmx. But more generally, the "probe window" introduced with LV2009.

  6. No, they aren't immune to mistakes, and I didn't mean to imply they were. But they are more robust to common mistakes than typedeffed clusters. They correctly handle a larger set of editing scenarios than clusters.

    Marginally :P And on a very particular edge case issue that no-one else seems particularly bothered by tongue.gif

    No it isn't. I believe you're focusing on how it would fit into your specific workflow, not how it would work in the general case.

    Indeed. And I could probably level the same argument at you, since I do not consider my work flow atypical.

    Using Tree.vi loads only those vis you want loaded. Usually it's all the vis in the project. What happens if vis not in the project also depend on the cluster? What happens if people aren't even using projects? Should those non-project vis be loaded too? Fixing this issue using an auto-loading system requires those vis be loaded.

    What if you've deployed reusable code to user.lib and want to update a typedeffed cluster? You'll have to dig out *every* project that has used that code and put it on your machine before you can update the typedef. No thanks. How should LV react if it can't find some of the dependent vis? Disallow editing? Disconnect the typedef? Two-way dependencies between the typedef and the vi simplify a certain subset of actions, but they create far more problems than they solve.

    Lots of what-ifs in there laugh.gif. Projects haven't always existed and (quite often) I do a lot of editing without loading one. But that's just an old habit from before projects were around, and I'm just as comfortable with or without. Perhaps that's the reason I don't see many of the issues that others see, since I'm less reliant on config dialogues, wizards and all the bells and whistles (sure, I use them, but it's not necessary).

    User lib? Don't use it; I'm not a tool-writer. I don't have any problems re-using my re-usable stuff, never have. To me it's a bit of a storm in a tea-cup tongue.gif

    Not me. That's death for large projects. 6-10 minute load times are no fun.

    That's quite funny. The last project I delivered was about 2,000 VIs (excluding LV-shipped ones). It only took about 1 minute to load and run in the dev environment (including the splash screen tongue.gif). And that could run a whole machine.

    By "class editor" are you referring to the window that looks like a project window that pops up when you 'right click --> Open' a class from your project? There isn't much difference between them because it *is* a project window. Labview opens a new project context and loads the class into it for you. The reason the "class editor" loads everything is because loading a class (or any class members) automatically loads everything in the class. It's a feature of the class, not the "class editor."

    Well, that (I would say) is a feature of LabVIEW. If the project also did it, then I'd be a lot happier.

    [Edit - I was wrong. It doesn't open the class in a new project context. Given that it doesn't, I confess I don't see an advantage of opening the class in a new window?]

    Okay, so since there's no practical way to ensure ALL dependent vis are in memory, the only way to guarantee consistent editing behavior is to not propagate typedef changes out to the vis at all. I'm sure the Labview community would support that idea. :lol:

    Sure there is a practical way; load everything in the project.

    They did fix it. They gave us classes. Any solution to this problem ultimately requires putting more protection around bundling/unbundling data types. That's exactly what classes do. Using classes strictly as a typed data container instead of typedeffed clusters is *not* a completely different paradigm. (There are other things classes can do, but you certainly don't have to use them.) You don't have to embrace OOP to use classes as your data types.

    Requiring a programmer to write extra code to mitigate a behaviour is not fixing anything. Suggesting that classes (OOP?) is a valid method to do so is like me saying that I've fixed it by using C++ instead.

    You get load errors, which is vastly more desirable than behind the scenes code changes to the cluster's bundle/unbundle nodes. Besides, the risk of renaming a vi is *far* better understood by LV users than the risk of editing a typedeffed cluster.

    I was specifically thinking about the fact that it deletes the mutation history, so being reliant on it is not fool-proof.

    Disagree. (Surprised? :lol: )

    Never biggrin.gif But it's a bit cheeky re-writing my comment wink.gif. I was not referring to typedefs at all; I was referring to LVOOP in its entirety. From the other posters' comments it just seems that the main use it's being put to is functional encapsulation. Of course it's not a "significant sample". Just surprising.

    Or maybe it's time for some "old timers" to discard their prejudices and see how new technologies can help them. ;):lol:

    I'm not prejudiced. I hate everybody biggrin.gif

    I have seen how it can help me. Like I said before: lists and collections. wink.gif I've tried hard to see other benefits, but outside encapsulation I haven't found many that I can't realise much more quickly and easily in Delphi or C++ wink.gif

    You're not far off. Most of our projects are fairly small, maybe 1 to 2 months for a single dev. We do usually have 1 or 2 large projects (6-24 months) in progress at any one time. Since I build tools for a product development group, we don't have the advantage of well-defined requirements. They change constantly as the target product evolves. At the same time, we can't reset a tool project schedule just because the requirements changed. Product development schedules are based around Christmas releases. Needless to say, missing that date has severe consequences.

    During our rush (usually Sep-Mar) we have to build lots of tools very quickly that are functionally correct yet are flexible enough to easily incorporate future change requests. (The changes can be anything from updating an existing tool to support a new product, to using different hardware in the test system, to creating a new test using features from several existing tests, to creating an entirely new test system.) Reusable component libraries give me the functional pieces to assemble the app. Classes give me the flexibility to adapt to changing needs.

    If it works for you, that's fine. It sounds like a variation on a theme (additions to existing... modifications etc.). That fits with what I was saying before about only really getting re-use within, or on variants of, a project.

    It could if you built in a hardware abstraction layer. :P

    No it couldn't. One machine might have cameras, one might have a paint head, another might have Marposs probes whilst another has Renishaw (you could argue that those can be abstracted, but you still have to write them in the first place). The only real common denominator is NI stuff, and in terms of hardware we've moved away from them. That's not to say there is no abstraction (check out the "Transport" library in the CR). It's just that we generally abstract further up (remember the diamonds?).

  7. OK - so I lied - I'm back for more :)

    And very welcome you are too thumbup1.gif

    I think this confuses highly coupled with statically loaded. I don't write code I consider highly coupled but I seldom if ever run into this kind of issue because I don't use much code deployed as dynamic libraries. I do have a bunch of classes and OO frameworks that I use and re-use but I use them by creating a unique project file for each deployed app and then adding those components that I need. So, I have a class library that is immutable (within the context of that project) that I drag into the project explorer - this is not a copy of the code, just a "link" to where the class is defined. Now, if I use any of that class in any capacity in that project, the class gets loaded into memory (and if I'm not using it, it shouldn't be there). But, the only "coupling" between the classes I use is that they are all called at some point by something in my application-specific project. My classes often include public typedefs for creating blocks of data that benefit from logical organization. But these typedefs get updated across all callers because of the specific project (not a VI tree, in this case). I realize the project doesn't force a load into memory, but once again, using the class does and that's the only reason they're in the project.

    I'm still forced to deal with other users of the classes that might not be loaded, but that's what an interface spec is for - any changes to a public API shouldn't be taken lightly. The big difference is that all my code is typically statically linked so everything the project needs is there at compile and build time. But this does NOT mean it's highly coupled as each class has a clear interface, accessors, protected and private methods, and so on.

    Just to help derail this thread, I'll state that I'm not a big fan of using the plug-in library architecture just because you can. Sometimes it's really helpful, but if an application can be delivered as a single executable (and that includes with 8.6 style support folders) then I find it much easier to maintain since I don't get LabVIEW's version of DLL hell. I don't care if my installer is 160 Mb or the installed app is 60 Mb. The performance under LabVIEW's memory manager is more than adequate.

    Mark

    I think the main difference between myself and Daklu is that I write entire systems whereas Daklu is focused on toolchains. As such our goals are considerably different. Re-use, for example, isn't the be-all and end-all and is only a small consideration for my projects in the scheme of things. However, in Daklu's case it saves him an enormous amount of time and effort. A good example of this is that I spend very little time automating programming processes, because I'm building bespoke systems so each is a one-off and takes (typically) 9 months to design and build. Contrast that with Daklu's 2-week window and it becomes obvious where priorities lie. That's not to say that re-use is never a consideration, it's just that the focus is different. My re-use tends to be at a higher level (around control, data logging and comms). You might consider that I would be a "customer" of Daklu. (Hope I've got Daklu's job spec right laugh.gif )

    As such, I'm in a similar position to you in that the output tends to be monolithic. It cannot run on other machines with different hardware, so I don't need "plug-and-pray" features.

  8. Technically you may be correct, though it still wouldn't give you the user experience it appears you think it would. If you can tolerate my lengthy explanation I think you'll see why.

    First, the reason classes behave correctly in the example I posted is because all the vis that bundle/unbundle the data are loaded into memory during the edits.

    Indeed. they "behave" correctly. As indeed my procedure yielded, for the reasons I argued previously about containers. But they aren't immune as I think you are suggesting here (remember class hell?).

    NI has taken a lot of flak from users (me included) for the load-part, load-all functionality built into classes, but it was the correct decision. So the first question is, do you want Labview to automatically load all the vis that bundle/unbundle the typedeffed cluster when you open the ctl?

    I'll go out on a limb and guess your answer is "no." (Besides, implementing that would be a problem. There really isn't a good way for the typedef to know which vis bundle/unbundle the data.)

    Guess again wink.gif

    Yes. yes.gif That is effectively what you are doing when you use a Tree.vi. In fact, I would prefer that all VIs (and dependents) included in a project were loaded when the project is loaded (I don't really see the difference between the "class" editor and the "project" editor, and the class editor loads everything, I think... maybe wrong). Of course this would be a lot less painful for many if you could "nest" projects.

    So for this behavior to exist, the ctl needs to maintain a version number and mutation history of all the edits that have been made to it. That (theoretically) would allow the vi that bundles/unbundles the cluster, the next time it loads, to compare its cluster version against the current cluster version and step through the updates one at a time until all the changes have been applied.

    As a matter of fact, NI has already implemented this exact scheme in classes. Not to make sure the bundle/unbundle nodes are updated correctly (that's already taken care of by the auto-loading behavior,) but for saving and loading objects to disk. Consider the following scenario:

    1. Your application creates an object containing some data, flattens it, and saves it to disk.

    2. Somebody edits the class cluster, perhaps renaming or reordering a few elements.

    3. Your updated application attempts to load the object from disk and finds the object's data cluster on disk no longer matches the class' cluster definition.

    This is where the class' mutation history kicks in. The class version number is stored on disk with the object data, so when the data is reloaded LV can step through and apply the updates one at a time, until the loaded object version matches the current class version. Sounds perfect, yes?

    As it turns out, automatic class mutation is very error prone and subject to fairly esoteric rules. It is risky enough that most developers write their own serialization methods to manually flatten objects for saving to disk rather than letting LV automatically flatten them. This is not a failure on NI's part. It is simply because there is no way for LV to definitively discern the programmer's intent based on a series of edits.

    Suppose I do the following edits to the class cluster:

    - Remove string control "String"

    - Add string control "String"

    - Rename "String" to "MyString"

    Was my intent for the data that used to be stored in "String" to now be stored in "MyString?" Possibly. Or was my intent to discard the saved data that used to be stored in "String" and create an entirely new field named "MyString?" That's possible too. Both scenarios are plausible. There's simply no way LV can automatically figure out what you want to happen, so it makes reasonable educated guesses. Unfortunately, it guesses wrong sometimes, and when that happens functionality breaks.

    Intent is irrelevant if the behaviour is consistent (as I was saying before about containers). Although I hadn't spotted the particular scenario in the example, treating a typedef'd cluster as just a container will yield the correct behaviour (note I'm saying behaviour here since both classes and typedef'd clusters can yield incorrect diagrams) as long as either

    1. ALL vis are in memory.

    OR

    2. ALL vis are not in memory.

    It's only that in your procedure some are and some aren't that you get a mismatch.
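    Incidentally, for anyone who hasn't met it, the mutation history described above boils down to something like the following rough C++ sketch (entirely invented names; LabVIEW's real mechanism is internal): the flattened data carries the version it was written with, and each edit becomes a migration step replayed on load. The "remove, add, rename" sequence has two equally plausible replays, and nothing in the edit history says which one was intended.

        #include <iostream>
        #include <map>
        #include <string>

        // Flattened record as it might sit on disk: version number plus named fields.
        struct Flattened {
            int version = 1;
            std::map<std::string, std::string> fields;
        };

        // Two equally plausible replays of "remove String / add String / rename String -> MyString":
        Flattened MigrateKeepData(Flattened r) {        // interpretation A: it was really a rename
            auto node = r.fields.extract("String");
            if (!node.empty()) { node.key() = "MyString"; r.fields.insert(std::move(node)); }
            r.version = 2;
            return r;
        }

        Flattened MigrateDiscardData(Flattened r) {     // interpretation B: a new, unrelated field
            r.fields.erase("String");
            r.fields["MyString"] = "";                  // default value, old data is lost
            r.version = 2;
            return r;
        }

        int main() {
            Flattened onDisk{1, {{"String", "hello"}}};
            std::cout << "keep:    " << MigrateKeepData(onDisk).fields["MyString"] << '\n';    // "hello"
            std::cout << "discard: " << MigrateDiscardData(onDisk).fields["MyString"] << '\n'; // ""
        }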

    There really isn't a good way for the typedef to know which vis bundle/unbundle the data.)

    Giving clusters a mutation history isn't a real solution to this problem. It will just open another can of worms that has even more people screaming at NI to fix the "bug." The solution is for us, as developers, to recognize when our programming techniques, technologies, and patterns have reached their limitations. If we have requirements that push those things beyond their limits, the onus is on us to apply techniques, technologies, and patterns that are better suited to achieving the requirements.

    [/soapbox]

    Well, there is already a suggestion on the NI Black Hole site. To drop the simplicity of typedefs for a different paradigm is, I think, a bit severe, and on these sorts of issues I like to take the stance of my customers (it's an issue... fix it biggrin.gif ). But even that suggestion isn't bullet-proof. What happens if you rename a class's VI? wink.gif

    I'm not sure why you think that. Most of the code I write is statically linked, yet I consider it reasonably well decoupled.

    I think it is probably due to statements where you appear to assume that classic LabVIEW is highly coupled just because it's not OOP (I too was going to make a comment about this, but got bogged down in the typedef details tongue.gif ).

    In general, nobody should use a particular technique "just because they can." If you're going to spend time implementing it you ought to have a reason for doing so. But I'm not advocating a plug in architecture so I'm not sure who you're protesting against...?

    I don't think he's against anyone. Just picking up on the classic labview = highly coupled comments.

    One thing I've noticed with comments from other people (I'm impressed at their stamina biggrin.gif ) is that most aren't writing OOP applications. I've already commented on encapsulation several times, and this seems to be its main use. If that is all it's used for, then it's a bit of a waste (they could have upgraded the event structure instead biggrin.gif ). I wonder if we could do a poll?

    I agree that it may not technically be a bug if there's some spec somewhere at NI that says this behavior is expected, but it's still a horrible design defect in the LabVIEW development environment. There's no reason Bundle by Name and Unbundle by Name couldn't keep track of the names and break the VI if there were any change in the names, sort of like relinking subvis. The amount of work to fix hundreds of broken subvis pales in comparison to the havoc wreaked by having your tested production code silently change its functionality.

    We've learned to be more vigilant about cluster name and order changes, and have added more unit testing, but there's really no excuse for NI permitting this. Re the current discussion, I don't believe the existence of this horrible design bug ought to inform the choice of whether to use OOP or not, though given that NI seems reluctant to fix it, maybe it does factor in.

    Jason

    I'm right behind you on this one. One thing about software is that pretty much anything is possible given enough time and resource. But to give NI their due, perhaps the "old timers" (like me cool.gif ) just haven't been as vocal as the OOP community. Couple that with (I believe) some NI internal heavyweights bludgeoning OOP forward, and I think a few people are feeling a little bit "left out". Maybe it's time to allocate a bit more resource back into core LabVIEW features that everyone uses.

  9. There are a number of ways you can go about it. It depends on how you want to organise the data and what you want to do with it on screen.

    If you are only going to show the last five minutes, then you can use a history chart. A 1 kHz sample rate means about 300,000 samples (plot points) per channel over five minutes, which is a lot, so you will probably have to decimate (plot every n points). However, it's worth bearing in mind that you probably won't have 300,000 pixels in your graph anyway, so plotting them all is only really useful if you are going to allow the user to zoom in. There are other ways (JGCode's suggestion is one, queues and a database are another), but that's the easiest and most hassle-free way with minimum coding.
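    Decimation itself is trivial - something like this (a C++ sketch rather than G, names invented): keep every n-th sample so the plotted array is roughly the width of the graph in pixels.

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // Keep roughly `maxPoints` points out of `samples` by taking every n-th one.
        std::vector<double> Decimate(const std::vector<double>& samples, std::size_t maxPoints) {
            if (maxPoints == 0 || samples.size() <= maxPoints) return samples;
            std::size_t stride = (samples.size() + maxPoints - 1) / maxPoints;   // ceiling division
            std::vector<double> out;
            out.reserve(maxPoints);
            for (std::size_t i = 0; i < samples.size(); i += stride) out.push_back(samples[i]);
            return out;
        }

        int main() {
            std::vector<double> fiveMinutes(300000, 0.0);              // 5 minutes at 1 kHz
            std::vector<double> plot = Decimate(fiveMinutes, 1000);    // ~one point per horizontal pixel
            std::printf("%zu points to plot\n", plot.size());
        }

    (A min/max per bucket preserves spikes better than a straight every-n-th pick, but the idea is the same.)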

    Ideally you want to stream the data into a nice text file - just as you would see it in an array (I use comma or tab delimited when I can). Then you can load it up in a text editor or spreadsheet and it will make sense, and you won't need to write code to interpret it just to read it. You can always add that later if it's taking too long to load in the editor. If the messages are coming in 1,2,3,4, 1,2,3,4, etc. then that's not a problem. However, it becomes a little more difficult if they are coming in ad hoc, and you will need to find a way of re-organising the data before saving so your text file table lines up. Hope you have a big hard disk biggrin.gif
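    The "make the table line up" part might look something like this (a C++ sketch with a hypothetical layout: one row per timestamp, one tab-delimited column per CAN ID). Buffer the latest value per ID and only write a row once every column has something in it.

        #include <fstream>
        #include <map>
        #include <vector>

        // Buffer the latest sample per channel ID; flush one tab-delimited row when all IDs have arrived.
        class RowLogger {
        public:
            RowLogger(const char* path, std::vector<int> ids) : out_(path), ids_(std::move(ids)) {
                out_ << "time";
                for (int id : ids_) out_ << '\t' << "id" << id;     // header row
                out_ << '\n';
            }
            void Sample(double time, int id, double value) {
                pending_[id] = value;
                if (pending_.size() == ids_.size()) {                // row is complete
                    out_ << time;
                    for (int ch : ids_) out_ << '\t' << pending_[ch];
                    out_ << '\n';
                    pending_.clear();
                }
            }
        private:
            std::ofstream out_;
            std::vector<int> ids_;
            std::map<int, double> pending_;
        };

        int main() {
            RowLogger log("suspension.txt", {1, 2, 3});
            log.Sample(0.001, 2, 1.23);     // messages can arrive in any order...
            log.Sample(0.001, 1, 0.98);
            log.Sample(0.001, 3, 1.05);     // ...the row is written once all three are present
        }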

    Oh, and one final thought. Kill the Windows Indexing service (if you are using Windows, that is). You don't want to get 4 hours in and suddenly get a "file in use" error wink.gif

  10. While the nodes I spoke about probably make calls to the Windows API functions under Windows, they are native nodes (light yellow) and supposedly call on other platforms the according platform API for dealing with Unicode (UTF8 I believe) to ANSI and v.v. The only platforms where I'm pretty sure they either won't even load into or if they do will likely be NOPs are some of the RT and embedded platforms

    Possible fun can arise out of the situation that the Unicode tables used on Windows are not exactly the same as on other platforms, since Windows has slightly diverged from the current Unicode tables. This is mostly apparent in collation which influences things like sort order of characters etc. but might be not a problem in the pure conversion. This however makes one more difficulty with full LabVIEW support visible. It's not just about displaying and storing Unicode strings, UTF8 or otherwise, but also about many internal functions such as sort, search etc. which will have to have proper Unicode support too, and because of the differences in Unicode tables would either end up to have slightly different behavior on different platforms or they would need to incorporate their own full blown Unicode support into LabVIEW such as the ICU library to make sure all LabVIEW versions behave the same, but that would make them behave differently on some systems to the native libraries.

    Indeed (to all of it). But it's rather a must now, as opposed to, say, 5 years ago. Most other high-level languages now have full support (even Delphi, finally... lol). I haven't been critical about this so far because NI came out with x64. Given a choice of x64 or Unicode, my preference was the former, and I appreciate the huge amount of effort that must have been. But I'd really like to at least see something on the roadmap.

    Are these the VIs you are talking about?

    These I've tried. They are good for getting things in and out of LabVIEW (e.g. files or the internet) but no good for display on the UI. For that, the ASCII needs to be converted to UCS-2 BE and the Unicode needs to remain as it is (UTF-8 doesn't cater for that). And that must only happen if the ini switch is on, otherwise it must be straight UTF-8.

    The beauty of UTF-8 is that it's transparent for ASCII, therefore built-in LV functions work fine. I use a key as a lookup for the display string, which is OK as long as it is an ASCII string. I can live with that biggrin.gif The real problem is that once the ini setting is set (or a control is set to Force Unicode after it is set) it cannot be switched back without exiting LabVIEW or recreating the control. So on-the-fly switching is only viable if, when it is set, ASCII can be converted. Unless you can think of a better way?
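    The ASCII-to-UCS-2 BE step itself is mechanically trivial, for what it's worth - for 7-bit characters it boils down to prefixing each byte with a zero byte. A sketch (C++, assuming pure ASCII input; anything outside ASCII would need a proper UTF-8 decode first):

        #include <cstdio>
        #include <string>

        // Convert a 7-bit ASCII string to UCS-2 big-endian: 0x00 high byte, then the ASCII byte.
        std::string AsciiToUcs2BE(const std::string& ascii) {
            std::string out;
            out.reserve(ascii.size() * 2);
            for (unsigned char c : ascii) {
                out.push_back('\0');                    // high byte (always zero for ASCII)
                out.push_back(static_cast<char>(c));    // low byte
            }
            return out;
        }

        int main() {
            std::string ucs2 = AsciiToUcs2BE("Hello");
            std::printf("%zu bytes\n", ucs2.size());    // 10 bytes for a 5-character string
        }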

  11. <snip>laugh.gif

    Ahhhhh. I see what you are getting at now.lightbulb.gif The light has flickered on (I have a neon one biggrin.gif)

    I must admit, I did the usual: identify the problem and find a quicker way to replicate it (I thought you were banging on about an old "feature" and I knew how to replicate it oops.gif). That's why I didn't follow your procedure exactly for the vid (I did the first few times to see what the effect was and thought "Ahhh, that old chestnut"). But having done so, I would actually say the class was easier since I didn't even have to have any VIs open laugh.gif So it really is a corner of a corner case. Have you raised the CAR yet? biggrin.gif

    But it does demonstrate (as you rightly say) a little-understood effect. I've been skirting around it for so long that I'd forgotten it. I didn't understand why it did it (never bothered, like with so many nuances in LV), I only knew it could happen and modified my workflow so it didn't happen to me :)

    But in terms of effort defending against it: well, how often does it happen? I said before, I've never seen it (untrue of course, given what I've said above), so is it something to get our knickers in a twist about? A purist would say yes. An accountant would say "how much does it cost?" biggrin.gif Is awareness enough (in the same way I'm fidgety about Windows indexing and always turn it off)? What is the trade-off between detecting that bug and writing lots of defensive code or test cases that may be prone to bugs themselves? I think it's the developer's call. If it affects both LVOOP and traditional LabVIEW then it should be considered a bug, or at the very least have a big red banner in the help yes.gif

    Still going to use typedefs though biggrin.gif

  12. Thanks jgcode & Matt,

    I'm going to monitor the suspension of a car with 3 linear potentiometers. The most important thing is that I have a sample rate of 1 kHz. Higher is even better.

    So the array will be quite huge. The LabVIEW program has to run the whole day on my laptop.

    I'm getting my info through UDP. There is a CAN network, and with an Analog2CAN converter I'm reading my potentiometers.

    I get about 200 IDs in my program and I'm putting them in several arrays, but is this the best, least failure-prone, fastest way?

    Is waveform then still better?

    Looking forward to your answers,

    ____________

    Michael ten Den

    You are probably better off logging to a file since you will have a huge dataset.

  13. Hi,

    I'm a theoretical OOPer. My problem is that I'm stuck in 7.1. and really like uml... :D

    So here are my theoretical 'insights'.

    There is a line of development in other languages as well, from the cluster (record, struct) to the class. In this development of programming languages, different features were added, merged, or dismissed as evolution played out. Using a type def, you introduce the class(=*.ctl)/object(=wire) abstraction already. With the Action Engine, we got encapsulation (all data is private) and methods (including accessors). With LVOOP we have inheritance. Still, LVOOP doesn't support things that other languages have had for ages (interfaces, abstract classes and methods). But on the other hand it allows for by-val implementation (objects that don't have an identity) as well as by-ref.

    I seriously consider LVOOP unfinished, because it doesn't allow you to draw code the same way as you do non-LVOOP code with wires and nodes. It's mainly some trees and config windows.

    But I also don't think the evolution of other OOP languages is finished yet. See uml, where you only partially describe the system graphically, which means you can never create compilable code (partially undefined behaviour). Also uml still has a lot of text statements (operations and properties are pure BNF text statements).

    So the merging towards a graphical OOP code is still work in progress.

    Let's get practical. On my private project I have to deal with OOP designs (uml/xmi and LV-GObjects). One issue that isn't possible to handle with type def/AE is to represent the inheritance. Let's say I want to deal with control (parent), numeric control (child) and string control (child) and have some methods to serialize them to disk.

    For a generic approach I started using variants. The classID (string) is changed to a variant. All properties I read from the Property nodes are placed as variant attributes. This can even be nested, e.g. for dealing with labels (they get serialized as decoration.text, as an object of their own, and then set as an attribute). Wow, I have compositions! :yes:

    Well, I lose all compile time safety. But I wonder what I'd get when combining it with AEs and some 'plugin' way to get dynamic dispatch.

    Ahh, wasn't C++ written with C?

    Felix

    I'm not sure I would agree that OOP is still evolving (some say it's a mature methodology). But I would agree LVOOP is probably unfinished. The question is, as we are already 10 years behind the others, will it be finished before the next fad? wink.gif Since I think that we are due for another radical change in program design (akin to text vs graphical, or structured vs OOP), it seems unlikely.

    As for a plug-in way of invoking AEs: just dynamically load them. If you make the call something like "Move.Drive Controller" or "Drive Controller.Move" (depending on how you like it), strip the "Move", use it for the action, and load your "Drive Controller.vi". But for me, compile-time safety is a huge plus for using LabVIEW.
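    In text-language terms the "Drive Controller.Move" idea is just splitting the string and looking the target up at run time. A rough C++ sketch (the map stands in for dynamically loading the named VI by path, and all the names are invented):

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>

        // Registry mapping a module name to a handler; the handler receives the action string.
        // In LabVIEW this lookup would be the dynamic load of "<module>.vi" by path.
        using Engine = std::function<void(const std::string& action)>;

        int main() {
            std::map<std::string, Engine> engines = {
                {"Drive Controller", [](const std::string& a) { std::cout << "Drive Controller does " << a << '\n'; }},
                {"Camera",           [](const std::string& a) { std::cout << "Camera does " << a << '\n'; }},
            };

            std::string call = "Drive Controller.Move";           // "<module>.<action>"
            auto dot = call.rfind('.');
            std::string module = call.substr(0, dot);
            std::string action = call.substr(dot + 1);

            engines.at(module)(action);                           // dispatch by name at run time
        }

    The obvious cost is exactly the one mentioned: the wiring error that the compiler would have caught now only shows up at run time.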

  14. Uhh... disconnecting the constants from the typedef doesn't fix the problem. The only output that changed is code path 2, which now outputs the correct value instead of an incorrect value at the cost of code clarity. I can easily imagine future programmers thinking, "why is this unbundling 's1' instead of using the typedef and unbundling 'FirstName?'" And doesn't disconnecting the constant from the typedef defeat the purpose of typedeffing clusters in the first place? You're going to go manually update each disconnected constant when you change the typedef? What happened to single point maintenance?

    No it doesn't defeat the object of typedefs.

    Typedef'd clusters (since you are so hung up on just clusters rolleyes.gif) are typically used to bundle and unbundle controls/indicators into compound/complex controls so we can have nice neat wires to and from VIs biggrin.gif. Additionally they can be used to add clarity and an easy method to select individual components of the compound control. wink.gif The benefit as opposed to normal clusters is that a change propagates through the entire application, so there is no need to go to every VI and modify a control/indicator just because you change the cluster. I (personally) have never used typedef'd constants (or ever seen them used the way you are trying to use them) except as a datatype for bundle by name. As I said previously, it is a TypeDef not a DataDef.

    Regardless, the constants was something of a sideshow anyway... like I said, I just discovered it today. The main point is what happens to the bundle/unbundle nodes wired to and from the conpane controls. (Paths 1, 3, 5, and 6.) Your fix didn't change those at all.

    Results from Typedef Heaven:

    <snip>

    Well. I'm not sure what you are seeing. Here is a vid of what happens when I do the same.

    http://www.screencast.com/users/Imp0st3r/folders/Jing/media/6d552790-5293-4b47-85bc-2fcb1402b085

    All the names are John (which I think was the point). Sure, the bundles change, so now the 0th container is labelled "LastName". But it's just a label for the container (it could have been z5ww2qa). But because you are imposing ordered meaning on the data you are supplying, I think you are expecting it to read your intentions and choose an appropriate label to match your artificially imposed meaningful data. You will have noticed that when you change the cluster order (again, something I don't think most people do - but valid), the order within the cluster changed too (LastName is now at the top). So what you have done is change into which container the values are stored. They are both still stored. They will all be taken out of the container that you stored them in. Only you are now storing the first name (data definition) in the last name (container).

    If you are thinking this will not happen with your class... then how about this?

    http://www.screencast.com/users/Imp0st3r/folders/Jing/media/672c5406-a56d-4c7a-a177-ab31a3c0cd15

    I see your point with respect to the cluster constants, though as I mentioned above I'm not convinced disconnecting the constant from the typedef is a good general solution to that problem.

    What problem? biggrin.gif I think you are seeing a typedef as more than it really is, and you have probably found an edge case which seems to be an issue for your usage/expectation. It is just a control. It even has a control extension. It's no more an equivalent to a class than it is to a VI. The fact that you are using a bundle/unbundle is because you are using a compound control (cluster) and has little to do with typedefs. Making such a control into a typedef just means we don't have to go to every VI front panel and modify it manually when we change the cluster.
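    The container/label point maps straight onto structs in a text language, for what it's worth. A C++ sketch (invented fields): positional data keeps going into slot 0 whatever the label on slot 0 says, so renaming a field is harmless but reordering the fields changes which container the value lands in.

        #include <iostream>
        #include <string>

        // Original layout: slot 0 = FirstName, slot 1 = LastName.
        struct PersonV1 { std::string FirstName; std::string LastName; };

        // "Edited typedef": the fields are reordered. Slot 0 is now labelled LastName.
        struct PersonV2 { std::string LastName; std::string FirstName; };

        int main() {
            PersonV1 a{"John", "Smith"};          // positional: "John" goes into slot 0
            PersonV2 b{"John", "Smith"};          // same positional data, but slot 0 is now LastName

            std::cout << a.FirstName << '\n';     // John
            std::cout << b.LastName  << '\n';     // John - the value followed the container, not the label
        }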

    Specification? You get a software spec? And here I thought a "spec document" was some queer form of modern mythology. (I'm only half joking. We've tried using spec documents. They're outdated before the printer is done warming up.)

    Yup. And if one doesn't exist, I write one (or at least title a document "Design Specification" biggrin.gif) by interrogating the customer. But mainly our projects are entire systems, and you need one to prove that the customer's requirements have been met by the design. Seat-of-yer-pants programming only works with a lot of experience and a small amount of code.

    It's past the stupid hour in my timezone... I don't understand what you're asking.

    My concern with typedeffed enums is the same concern I have with typedeffed clusters. What happens to a preset enum constant or control on an unloaded block diagram when I make different kinds of changes to the typedef itself? (More precisely, what happens to the enum when I reload the vi after making the edits?)

    It's nothing to do with in memory or not (I don't think). What you are seeing is the result of changing the order of the components within the cluster. An enum isn't a compound component so there is no order associated.

    Using a class as a protected cluster is neither complex nor disposes of data flow. There are OO design patterns that are fairly complex, but it is not an inherent requirement of OOP.

    So your modules either do not expose typedefs as part of their public interface or you reuse them in other projects via copy and paste (and end up with many copies of nearly identical source code,) right?

    Nope. The source is in SVN. OK, you have to have a copy of the VIs on the machine you are working on, in the same way that you have to have the class VIs present to be able to use them. So I'm not really sure what you are getting at here.

    A module that might expose a typedef would be an action engine. I have a rather old drive controller, for example, that has an enumerated typedef with Move In, Move Out, Stop, Pause, Home. If I were to revisit it then I would probably go for a polymorphic VI instead, purely because it would only expose the controls for that particular function (you don't need a distance param for Home or Stop, for example) rather than just ignoring certain inputs. But it's been fine for 3 years and "if it ain't broke, don't fix it" tongue.gif
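    The trade-off being described (one action-engine entry point with an enum and ignored inputs, versus one exposed entry per action) looks roughly like this in text form (a C++ sketch, invented names):

        #include <iostream>

        // Action-engine style: one entry point, some inputs ignored depending on the action.
        enum class DriveAction { MoveIn, MoveOut, Stop, Pause, Home };

        void Drive(DriveAction action, double distance = 0.0) {
            switch (action) {
                case DriveAction::MoveIn:  std::cout << "move in "  << distance << '\n'; break;
                case DriveAction::MoveOut: std::cout << "move out " << distance << '\n'; break;
                default:                   std::cout << "no distance needed\n";          break; // distance ignored
            }
        }

        // Polymorphic-VI style: each action only exposes the inputs it actually uses.
        void MoveIn(double distance) { std::cout << "move in " << distance << '\n'; }
        void Home()                  { std::cout << "home\n"; }    // no distance input to misuse

        int main() {
            Drive(DriveAction::Home, 42.0);   // legal, but the 42.0 silently means nothing
            Home();                           // the tighter interface makes that mistake impossible
        }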

    My fault for not being clear. I meant multiple instances of a typedeffed cluster. I was freely (and confusingly) using the terms interchangeably. Dropping two instances of the same class cube on the block diagram is essentially equivalent to dropping two instances of a typedeffed cluster on the block diagram. Each of the four instances on the block diagram has its own data space that can be changed independently of the other three.

    I suppose. But it's not used like that, and I cannot think of a situation where you would want to (what would be the benefit?). It's used either as a control, or as a "Type Definition" for a bundle-by-name. It's a bit like laying down a queue reference constant. Sure you can. But why would you? Unless of course you want to impose "Type" or cast it.

    No. Based on this and a couple other comments you've made, it appears you have a fundamental misunderstanding of LVOOP. Labview classes are not inherently by-ref. You can create by-ref or singleton classes using LVOOP, but data does not automatically become by-ref just because you've put it in a class. Most of the classes I create are, in fact, by-val and follow all the typical rules of traditional sequential dataflow. By-ref and singleton functionality are added bonuses available for when they are needed to meet the project's requirements.

    Maybe I don't. worshippy.gif But I do know "by-val" doesn't mean it's "data-flow" any more than using a "class" means "object oriented". Like you said, it's up to the programmer. It's just that the defaults are different. In classic LabVIEW, the default is implicit state with single instances. In LVOOP it's multiple instances with managed state. Either can be made to do the other; it's just the amount of work to turn one into the other. Well, that's how it seems to a heathen like me wink.gif
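    For anyone following along, the by-val/by-ref distinction maps cleanly onto value versus reference semantics in a text language. A C++ sketch (invented class): copying the object forks the data, which is the normal dataflow case, while sharing it through a reference-counted pointer gives the single managed instance.

        #include <iostream>
        #include <memory>

        class Counter {
        public:
            void Increment() { ++count_; }
            int Value() const { return count_; }
        private:
            int count_ = 0;
        };

        int main() {
            // By-value: the "wire" is the data. Branching it copies the object.
            Counter a;
            Counter b = a;          // independent copy, like branching a by-val LVOOP wire
            b.Increment();
            std::cout << a.Value() << " " << b.Value() << '\n';    // 0 1

            // By-reference: both names point at one shared instance (the DVR/singleton style).
            auto shared = std::make_shared<Counter>();
            auto alias = shared;
            alias->Increment();
            std::cout << shared->Value() << '\n';                  // 1
        }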

  15. There used to be a library somewhere on the dark side that contained them. It was very much like my unicode.llb that I posted years ago, which called the Windows WideCharToMultiByte and friends APIs to do the conversion, but it also had extra VIs that were using those nodes. And for some reason there was no password, even though they usually protect such undocumented functions strictly.

    I'll try to see if I can find something either on the fora or somewhere on my HD.

    Otherwise, using Scripting possibly together with one of the secret INI keys allows one to create LabVIEW nodes too, and in the list of nodes these two show up too.

    I already have my own VIs that convert using the Windows API calls. I was kind-a hoping they were more than that sad.gif. I originally looked at it all when I wrote PassaMak, but decided to release it without Unicode support (using the API calls) to maintain cross-platform compatibility. Additionally I was put off by the hassles with special ini settings, the pain of handling standard ASCII, and a rather woolly dependency on code pages - it seemed a one-OR-the-other choice and not guaranteed to work in all cases.

    As with most of my stuff, I get to re-visit things periodically, and recently I started to look again with a view to using UTF-8, which has the capability of identifying ASCII and Unicode chars (regardless of code pages). That should make it fairly bulletproof and boil down to basically inserting bytes (for the ASCII chars) if the ini is set, and not if it isn't. Well, that's the theory at least, and so far, so good. Although I'm not sure what LV will do with 3- and 4-byte chars and therefore what to do about it. That's the next step when I get time.
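    The "transparent for ASCII" property falls straight out of the encoding rule. A sketch of the rule itself (C++, encoding a single code point; the 3- and 4-byte cases are the ones mentioned above): anything below 0x80 stays a single, unchanged byte, which is why plain ASCII passes through untouched.

        #include <cstdio>
        #include <string>

        // Encode one Unicode code point as UTF-8.
        std::string EncodeUtf8(char32_t cp) {
            std::string out;
            if (cp < 0x80) {                     // ASCII: one byte, identical to the ASCII value
                out += static_cast<char>(cp);
            } else if (cp < 0x800) {             // two bytes
                out += static_cast<char>(0xC0 | (cp >> 6));
                out += static_cast<char>(0x80 | (cp & 0x3F));
            } else if (cp < 0x10000) {           // three bytes
                out += static_cast<char>(0xE0 | (cp >> 12));
                out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                out += static_cast<char>(0x80 | (cp & 0x3F));
            } else {                             // four bytes (outside the BMP)
                out += static_cast<char>(0xF0 | (cp >> 18));
                out += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
                out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                out += static_cast<char>(0x80 | (cp & 0x3F));
            }
            return out;
        }

        int main() {
            std::printf("'A'     -> %zu byte(s)\n", EncodeUtf8(U'A').size());       // 1
            std::printf("U+00E9  -> %zu byte(s)\n", EncodeUtf8(0x00E9).size());     // 2
            std::printf("U+4E2D  -> %zu byte(s)\n", EncodeUtf8(0x4E2D).size());     // 3
            std::printf("U+1F600 -> %zu byte(s)\n", EncodeUtf8(0x1F600).size());    // 4
        }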
