Everything posted by ShaunR
-
The difference is whether the DLL is "Thread Safe" or not. Orange means it runs in a single thread (the UI thread, and there is only one). Nicotine coloured means it can be run in any thread that LabVIEW decides it wants to. It's best to use the UI thread if you are not sure. Side effects can be anything from crashes to strange behaviour or erroneous calculations under certain conditions.
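To illustrate the kind of problem (this is not from the original post, and the function names are hypothetical): a C sketch of a DLL call that is not thread safe because it keeps its result in a static buffer. Called reentrantly ("any thread"), two simultaneous calls can trample each other's data; serialised in the UI thread, the race never shows up.

#include <stdio.h>

static char result[64];   /* shared state: the source of the race */

/* NOT thread safe: every caller shares `result`, so concurrent calls
   can overwrite each other's output before it is read back. */
const char *format_reading(double value)
{
    snprintf(result, sizeof(result), "Reading: %0.3f", value);
    return result;
}

/* Thread-safe variant: the caller supplies the buffer, so there is
   no shared state and the "any thread" setting is fine. */
void format_reading_safe(double value, char *buf, int len)
{
    snprintf(buf, (size_t)len, "Reading: %0.3f", value);
}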
-
Get File path when opening saved data
ShaunR replied to Michael ten Den's topic in Database and File IO
If you mean you want it to remember a fixed location, then you will have to either save the path somewhere (and load it when the app starts), hard-wire a path into the file open, or save a control value as its default (right-click on the control and select "Data Operations>>Make Current Value Default"). The reason it is asking you is that you have not specified a path into the file open (bear in mind that controls are reset to their defaults when you open a VI or exe for the first time).
-
Self-indexing for loops. Beats ANY other language hands down, as they don't have an equivalent and have to put length checks all over the place, and usually get them wrong.
-
Tough call. Are they all programmers? What languages?
-
Do you reckon this is going to be longer than the "Like" thread? Everyone loves a good moan. +1 for any user interface stuff.
-
Great stuff. Let's see if Daklu is prepared to expand a little on it now you've shown the way. I know. You never know. Maybe I'll get my arse kicked and version 2 will be a class with you named as a major contributor.

The discussion was originally going to be much broader, but we got bogged down on a specific facet. The mode is relevant since the CRLF and buffered modes are very useful and vastly affect the behaviour, but for the purpose of this discussion it's irrelevant.

OK. They are two of the most popular. Yup. They work fine. Yup. Naturally. And some are just wrappers around my VIs. I, of course, have to make a decision: do I put certain "messy" ones in a subVI just to make the diagram cleaner or not (maybe I should have)? You don't have that decision, since you have to create a new VI anyway. Good point. What would you have done instead?

Of course. I'm not expecting a complete refactor. In fact, it's probably to our advantage that there is only a partial refactor, since it mimics a seat-of-yer-pants project. That way, as the new design evolves, we will be able to see what issues come up, what decisions we make to overcome them and, indeed, what sacrifices we make (you have already made one).

Serial was mentioned for a very good reason. Can you think of why it isn't included? After all, it covers most other bases.

Yup. I was a bit lazy on that. I could have checked a bit harder. We'll call that a bug. OK. I won't comment on your examples just now; let's wait and see if Daklu is prepared to put some effort in.

Indeed. I probably do know a little bit more, since I've been through the design from start to finish. However, it was only two days from cigarette packet to release candidate, so probably not much more. The only difficulty was UDP; everything else is pretty much a wrapper around built-in LV functions.
-
Granted, it is a difficult time of year. The new year would be fine, when things are less hectic. The goal? To highlight the advantages and disadvantages of one paradigm over the other with a real-world, practical example instead of esoteric rhetoric. Your Father may be bigger than my Father; let's get the tape measure out.

This I don't understand. You should always have a spec (otherwise how do you know what to create?). It's not fixed (what about adding serial?); only the interface is fixed (which you should understand, since you are creating re-use libraries). In fact, I chose it because it is particularly suitable for classes and IS a re-use component. It is very simple, well defined, obviously possible (since it already exists), and if it takes you more than a day I'd be very surprised. You talked previously about HW abstraction; well, here it is. You talk about re-use; it's used in Dispatcher and OPP Push File. It ticks all the boxes that you say OOP is good at, so I think it would be a good candidate for comparison. At the end, your lapdog thingy should work over TCP/IP, UDP, IR and Bluetooth as well. Wouldn't that be nice? If you think OOP is just for changing requirements, then you have clearly not understood it.
-
Ooooh. Almost forgot. Re-entrant VIs and the cool improvements to them over time (cloning, place inside self, etc.).
-
Especially as you used to be able to change probe properties and resize the controls (like being able to view strings as hex). Or maybe I dreamt that
-
Well, I think from that little lot it's pretty obvious that we've reached an impasse on that topic, and perhaps it's time to expand the scope of the thread so at least it has some technical content again. But I will finish off by drawing your attention to the LV help, because it was obviously something you were not aware of.

So. OOP is great. It fixes all the ills in the world. It increases re-use so you only need one program to do anything anyone will ever want. So might I make a practical suggestion? There is a very simple library in the Code Repository that would be very easy to convert to OOP and actually (I think) is ideally suited to it (we don't want anything too hard, eh?). Why not re-write it OOP stylee and we can use it as a basis for comparison between OOP and "traditional" LabVIEW? Then we can plug it in to the other things in the CR that also use it and see what (if any) issues we come across in integration. Does that sound like something you can commit to?
-
Indeed. Sorry. Couldn't resist. I could never figure out why people went for these crazy colour schemes. Then I worked for a defence contractor where there was a specification for software user interface colours. When I pointed out that it was a colour defined for "cockpit" software because of the way colours were perceived through tinted visors, and that VDU operators were unlikely to be using visors, they said "Oh yeah" and carried on regardless.
-
I'm not a fan of the Image type for the vision stuff either. I keep getting caught out even though I know how it works. A lot of vision stuff I do requires acquiring an image, then creating various masks and applying them (often one after the other or in various combinations). The UI, though, normally requires showing the original images and the results of the various stages of mask application, so you end up copying everywhere so as not to overwrite the originals or the intermediate results of a mask. It gets very messy. But my pet hate is that you cannot wire a VISA refnum to an event case like you can with DAQmx. And more generally, the "probe window" introduced with LV2009.
-
I'm... infamous
-
Polymorphic VIs.
-
Version 3.0
-
Marginally. And on a very particular edge-case issue that no one else seems particularly bothered by. Indeed. And I could probably level the same argument at you, since I do not consider my workflow atypical. Lots of what-ifs in there. Projects haven't always existed and (quite often) I do a lot of editing without loading one. But that's just an old habit, because projects haven't always been around and I'm just as comfortable with or without. Perhaps that's the reason I don't see many of the issues that others see, since I'm less reliant on config dialogues, wizards and all the bells and whistles (sure, I use them, but they're not necessary). User lib? Don't use it; I'm not a tool-writer. I don't have any problems re-using my re-usable stuff, never have. To me it's a bit of a storm in a teacup.

That's quite funny. The last project I delivered was about 2000 VIs (excluding LV shipped). It only took about a minute to load and run in the dev environment (including the splash screen), and that could run a whole machine.

Well, that (I would say) is a feature of LabVIEW. If the project also did it, then I'd be a lot happier. Sure there is a practical way: load everything in the project. Requiring a programmer to write extra code to mitigate a behaviour is not fixing anything. Suggesting that classes (OOP?) are a valid method to do so is like me saying that I've fixed it by using C++ instead. I was specifically thinking about the fact that it deletes the mutation history, so being reliant on it is not fool-proof.

Never. But it's a bit cheeky re-writing my comment. I was not referring to typedefs at all; I was referring to LVOOP in its entirety. From the other posters' comments it just seems that the main usage it's being put to is functional encapsulation. Of course it's not a "significant sample". Just surprising. I'm not prejudiced. I hate everybody. I have seen how it can help me. Like I said before: lists and collections. I've tried hard to see other benefits, but outside encapsulation I haven't found many that I can't realise much more quickly and easily in Delphi or C++.

If it works for you, that's fine. It sounds like a variation on a theme (additions to existing, modification, etc.). That fits with what I was saying before about only really getting re-use within, or on variants of, a project.

No it couldn't. One machine might have cameras, one might have a paint head, another might have Marposs probes whilst another has Renishaw (you could argue that those can be abstracted, but you still have to write them in the first place). The only real common denominator is NI stuff, and in terms of hardware we've moved away from them. That's not to say there is no abstraction (check out the "Transport" library in the CR); it's just that we generally abstract further up (remember the diamonds?).
-
And very welcome you are too. I think the main difference between myself and Daklu is that I write entire systems whereas Daklu is focused on toolchains. As such, our goals are considerably different. Re-use, for example, isn't the be-all and end-all and is only a small consideration for my projects in the scheme of things. In Daklu's case, however, it saves him an enormous amount of time and effort. A good example of this is that I spend very little time automating programming processes, because I'm building bespoke systems, so each is a one-off and takes (typically) 9 months to design and build. Contrast that with Daklu's 2-week window and it becomes obvious where the priorities lie. That's not to say that re-use is never a consideration; it's just that the focus is different. My re-use tends to be at a higher level (around control, data logging and comms). You might consider that I would be a "customer" of Daklu. (Hope I've got Daklu's job spec right.) As such, I'm in a similar position to you in that the output tends to be monolithic. It cannot run on other machines with different hardware, so I don't need "plug-and-pray" features.
-
Indeed, they "behave" correctly, as indeed my procedure yielded, for the reasons I argued previously about containers. But they aren't immune, as I think you are suggesting here (remember class hell?). Guess again. Yes, that is effectively what you are doing when you use a Tree.vi. In fact, I would prefer that all VIs (and dependents) included in a project are loaded when the project is loaded (I don't really see the difference between the "class" editor and the "project" editor, and the class editor loads everything, I think... maybe I'm wrong). Of course this would be a lot less painful for many if you could "nest" projects.

Intent is irrelevant if the behaviour is consistent (as I was saying before about containers). Although I hadn't spotted the particular scenario in the example, treating a typedef'd cluster as just a container will yield the correct behaviour (note I'm saying behaviour here, since both classes and typedef'd clusters can yield incorrect diagrams) as long as either 1. ALL VIs are in memory, OR 2. ALL VIs are not in memory. It's only because in your procedure some are and some aren't that you get a mismatch.

Well, there is already a suggestion on the NI Black Hole site. To drop the simplicity of typedefs for a different paradigm I think is a bit severe, and on these sorts of issues I like to take the stance of my customers (it's an issue... fix it). But even that suggestion isn't bullet-proof. What happens if you rename a class's VI?

I think it is probably due to statements where you appear to assume that classic LabVIEW is highly coupled just because it's not OOP (I too was going to make a comment about this, but got bogged down in the typedef details).

I don't think he's against anyone; he's just picking up on the classic-LabVIEW-equals-highly-coupled comments. One thing I've noticed with comments from other people (I'm impressed at their stamina) is that most aren't writing OOP applications. I've already commented on encapsulation several times, and this seems to be its main use. If that is all it's used for, then it's a bit of a waste (they could have upgraded the event structure instead). I wonder if we could do a poll?

I'm right behind you on this one. One thing about software is that pretty much anything is possible given enough time and resource. But to give NI their due, perhaps the "old timers" (like me) just haven't been as vocal as the OOP community. Couple that with (I believe) some NI internal heavyweights bludgeoning OOP forward, and I think a few people are feeling a little bit "left out". Maybe it's time to allocate a bit more resource back into core LabVIEW features that everyone uses.
-
There are a number of ways you can go about it. It depends on how you want to organise the data and what you want to do with it on screen. If you are only going to show the last five minutes, then you can use a history chart. A 1 kHz sample rate means about 300,000 samples (plot points) per channel over that window, which is a lot, so you will probably have to decimate (plot every n points). However, it's worth bearing in mind that you probably won't have 300,000 pixels in your graph anyway, so plotting them all is only really useful if you are going to allow zooming in. There are other ways (JGCode's suggestion is one; queues and a database are another), but that's the easiest and most hassle-free way with minimum coding.

Ideally you want to stream the data into a nice text file, just as you would see it in an array (I use comma or tab delimited when I can). Then you can load it up in a text editor or spreadsheet and it will make sense, and you won't need to write code to interpret it just to read it. You can always add that later if it's taking too long to load in the editor. If the messages are coming in 1, 2, 3, 4, 1, 2, 3, 4 etc. then that's not a problem. However, it becomes a little more difficult if they are coming in ad hoc, and you will need to find a way of re-organising them before saving so your text file table lines up. Hope you have a big hard disk.

Oh, and one final thought: kill the Windows Indexing service (if you are using Windows, that is). You don't want to get 4 hours in and suddenly get a "file in use" error.
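As a rough sketch of those two ideas (streaming tab-delimited rows and decimating for display), here is some illustrative C; the function names and formats are mine, not from the post.

#include <stdio.h>

/* Append one row: timestamp plus one tab-separated column per channel,
   so the file opens cleanly in a text editor or spreadsheet. */
int log_row(FILE *fp, double t, const double *ch, int n_channels)
{
    fprintf(fp, "%0.6f", t);
    for (int i = 0; i < n_channels; i++)
        fprintf(fp, "\t%0.6f", ch[i]);
    fprintf(fp, "\n");
    return fflush(fp);              /* flush so a crash loses little data */
}

/* Keep every n-th sample for the on-screen plot; the full data set stays
   in the file. Caller must size `out` for at least (len + n - 1) / n points. */
int decimate(const double *in, int len, int n, double *out)
{
    int count = 0;
    for (int i = 0; i < len; i += n)
        out[count++] = in[i];
    return count;                   /* number of points actually plotted */
}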
-
Indeed (to all of it). But it's rather a must now, as opposed to, say, 5 years ago. Most other high-level languages now have full support (even Delphi, finally... lol). I haven't been critical about this so far, because NI came out with x64. Given a choice of x64 or Unicode, my preference was the former, and I appreciate the huge amount of effort that must have been. But I'd really like to at least see something on the roadmap.

Are these the VIs you are talking about? These I've tried. They are good for getting things in and out of LabVIEW (e.g. files or the internet) but no good for display on the UI. For that, the ASCII needs to be converted to UCS-2 BE and the Unicode needs to remain as it is (UTF-8 doesn't cater for that). And that must only happen if the ini switch is set; otherwise it must be straight UTF-8. The beauty of UTF-8 is that it's transparent for ASCII, therefore in-built LV functions work fine. I use a key as a lookup for the display string, which is OK as long as it is an ASCII string. I can live with that. The real problem is that once the ini setting is set (or a control is set to Force Unicode after it is set), it cannot be switched back without exiting LabVIEW or recreating the control. So on-the-fly switching is only viable if, when it is set, ASCII can be converted. Unless you can think of a better way?
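For what it's worth, a minimal sketch of the conversion step being discussed, assuming a Windows environment: MultiByteToWideChar turns a UTF-8/ASCII string into UTF-16 wide characters (any byte-order swap needed for the display format mentioned above would be a separate step). The helper name is mine.

#include <windows.h>
#include <wchar.h>
#include <stdlib.h>

/* Convert a NUL-terminated UTF-8 (or plain ASCII) string to a wide string.
   Returns a malloc'd buffer the caller must free(), or NULL on failure. */
wchar_t *utf8_to_wide(const char *utf8)
{
    int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);  /* size incl. NUL */
    if (n <= 0)
        return NULL;
    wchar_t *wide = (wchar_t *)malloc(n * sizeof(wchar_t));
    if (wide)
        MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, n);
    return wide;
}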
-
Ahhhhh. I see what you are getting at now. The light has flickered on (I have a neon one). I must admit, I did the usual: identify the problem and find a quicker way to replicate it (I thought you were banging on about an old "feature" and I knew how to replicate it). That's why I didn't follow your procedure exactly for the vid (I did the first few times to see what the effect was and thought "Ahhh, that old chestnut"). But having done so, I would actually say the class was easier, since I didn't even have to have any VIs open. So it really is a corner of a corner case. Have you raised the CAR yet? But it does demonstrate (as you rightly say) a little-understood effect. I've been skirting around it for so long I'd forgotten it. I didn't understand why it did it (never bothered, like with so many nuances in LV); I only knew it could happen and modified my workflow so it didn't happen to me.

But in terms of effort defending against it: well, how often does it happen? I said before I've never seen it (untrue, of course, given what I've said above), so is it something to get our knickers in a twist about? A purist would say yes. An accountant would say "how much does it cost?" Is awareness enough (in the same way I'm fidgety about Windows indexing and always turn it off)? What is the trade-off between detecting that bug and writing lots of defensive code or test cases that may be prone to bugs themselves? I think it's the developer's call. If it affects both LVOOP and traditional LabVIEW then it should be considered a bug, or at the very least have a big red banner in the help. Still going to use typedefs though.
-
You are probably better off logging to a file since you will have a huge dataset.
-
I'm not sure I would agree that OOP is still evolving (some say it's a mature methodology), but I would agree LVOOP is probably unfinished. The question is, as we are already 10 years behind the others, will it be finished before the next fad? Since I think we are due for another radical change in program design (akin to text vs graphical, or structured vs OOP), it seems unlikely. As for a plug-in way of invoking AEs: just dynamically load them. If you make the call something like "Move.Drive Controller" or "Drive Controller.Move" (depending on how you like it), strip the "Move", use it for the action and load your "Drive Controller.vi". But for me, compile-time safety is a huge plus for using LabVIEW.
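A trivial sketch of the name-splitting part of that plug-in idea (function and buffer names are mine; the dynamic VI load itself is LabVIEW territory and is only indicated by a comment):

#include <stdio.h>
#include <string.h>

/* Split a "Module.Action" style message into the VI to load and the
   action to send it, e.g. "Drive Controller.Move" -> "Drive Controller.vi" + "Move". */
void dispatch(const char *message)
{
    char module[128], action[128];
    const char *dot = strchr(message, '.');
    if (dot == NULL)
        return;                               /* no "Module.Action" separator */

    snprintf(module, sizeof(module), "%.*s.vi", (int)(dot - message), message);
    snprintf(action, sizeof(action), "%s", dot + 1);

    /* In LabVIEW this is where you would open a VI reference to `module`
       dynamically and pass `action` to the action engine. */
    printf("load \"%s\", action \"%s\"\n", module, action);
}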