Everything posted by ShaunR
-
Application licensing with dongles
ShaunR replied to gyc's topic in Application Builder, Installers and code distribution
Wow. For once LabVIEW didn't crash when saving to a previous version. I flirted with it a while back (it's on here somewhere). The issue is Admin rights. I never got around to revisiting it. -
Perhaps we should design a BS Bingo game for LabVIEW
-
I don't know if it is there or not. I don't visit the NI sites much any more since they kept asking me to verify my details, so I don't bother now. But we have over a year to argue about it before it is likely to be considered. Well, we are about to argue about semantics. I didn't say what it should do, just that I find it irritating that it doesn't, and I was pointing out that if the OP uses it, it probably won't do what they (and I) expect for those controls.
-
Note that this invoke node will not clear tables or combo-boxes, which I've always found infuriating.
-
Well said. I personally think people get hung up on the terminology. I view the NI certifications as expanding levels of remit that closely follow a typical career path as you take greater responsibility over more of the project life-cycle. You start off with a general knowledge of LV (CLAD: demonstrate you know what it is), then progress to detailed implementation (CLD: demonstrate you can write software), and for the CLA it expands into requirements decomposition (that's where I see the grey area between technical and project management begin). If you demonstrate competency in all these areas then you could also call yourself a software systems engineer, senior software engineer, chief developer, software technical authority et al. "Architect" has always seemed a bit pretentious to me. At this level, titles don't really mean much (I notice you don't call yourself an architect). I'm pretty sure that next time round, Daklu will probably get one of the highest scores ever in the tests now that it is clear what is expected. For most of us, examinations are rare occurrences and we may perform better under pressure in a management meeting (which we deal with often) rather than in a timed test.
-
Well. I'm coming in a bit late, but I'm a bit surprised that there's no discussion of tagged-token/static dataflow or even actors and graphs. Yet this whole thread is about dataflow? I don't know specifically, because I have never given it much thought and only have a cursory understanding since it's not my remit. But if I were to categorise LabVIEW, I would say it is either a tagged-token dataflow model or a variant thereof (hybrid?). Asking whether it is "pure" or not is a bit like asking if the observer design pattern is a "pure" design pattern.
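(For anyone who hasn't met the term, here is a minimal sketch of the tagged-token firing rule. It's a toy model in TypeScript, not how any real dataflow engine is implemented, and all names are hypothetical: a node fires only when tokens carrying the same tag have arrived on all of its inputs.)

```typescript
// Toy model of the tagged-token dataflow firing rule: a node fires
// only when tokens carrying the same tag are present on every input.
type Token = { tag: number; value: number };

class Node {
  // Per-input queues of waiting tokens.
  private inputs: Token[][];

  constructor(private arity: number, private op: (args: number[]) => number) {
    this.inputs = Array.from({ length: arity }, () => []);
  }

  // Deliver a token to one input; fire if every input now holds a
  // token with this tag. Tokens from different iterations may overtake
  // each other, which is exactly what the tags disambiguate.
  receive(input: number, token: Token): Token | undefined {
    this.inputs[input].push(token);
    const matched = this.inputs.map((q) =>
      q.findIndex((t) => t.tag === token.tag),
    );
    if (matched.some((pos) => pos < 0)) return undefined; // not ready yet
    const args = matched.map(
      (pos, inp) => this.inputs[inp].splice(pos, 1)[0].value,
    );
    return { tag: token.tag, value: this.op(args) };
  }
}

// Usage: an "add" node with two inputs; the tokens tagged 1 match and fire.
const add = new Node(2, ([a, b]) => a + b);
add.receive(0, { tag: 1, value: 2 });              // waits
console.log(add.receive(1, { tag: 1, value: 3 })); // { tag: 1, value: 5 }
```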
-
Not a consolation I know, but you're not the first, and you definitely won't be the last. Of all the NI exams, the CLA is the one where exam technique weighs heaviest in the results. You can nail the requirements and documentation by spending half an hour writing a well-rehearsed scripting VI and cutting and pasting the requirements into a text file. As Crelf said: "Give the examiner what they want" (or in this case, the Requirements Gateway).
-
Hmmm. Not sure I necessarily agree with this. M$ has been touting CE as "hard real-time performance" since version 3.0, and the term "real-time" on its own tends to mean many different things to different people. But here's a report into its real-time capabilities and a rather excellent web-cast on its real-time features so you can decide by your criteria.

When it boils down to it, there really aren't that many "real-time" applications that are required of an OS in LabVIEW development. The trend is to offload real-time onto dedicated hardware, and NI have many fantastic PCI or PCIe solutions for this. That's why we have seen the proliferation of FPGA. This means that Embedded Windows is an excellent platform for real-time (or near real-time) applications.

I've actually been using Windows XP Embedded Standard for a couple of years now (not exactly the same as CE). Far cheaper than NI platform products, and you can use NI's cards as well as other manufacturers', so it is far more versatile. Once you get the right initial "image", LabVIEW runs on it too. We have gone down the route of having small embedded machines for specific tasks (e.g. one for image acquisition, one for motion control, one for acquisition, etc.) and as replacements (or alternatives) for cRIO. So far, we have not found one instance where it hasn't been as capable, but many instances where it is far more versatile and at a fraction of the cost.

Looking forward to evaluating Embedded 7. But XP has worked so well (with LabVIEW) that it'll probably only happen when we really have to.
-
http://youtu.be/Vh78T--ZUxY
-
Yup. Well worth a ROFL point. I have seen quite a few of those VIs.
-
Sweet. I'm pleased it is worthy of consideration.
-
You made that up. What you meant was LTWAC (Lazy To Write A Class).
-
Extremely Long Load Time with LVOOP, SubPanels, and VI Templates
ShaunR replied to lvb's topic in Object-Oriented Programming
Red: I think this is where we are at odds. I think it matters very, very much. Loading the VIs in a class is orders of magnitude slower than instantiating an object; one is a disk operation and the other is just allocating memory. The big caveat with classes is that a class doesn't just load what it needs (as it does in classic LabVIEW). It has to load everything in the class, needed or not. Pre-loading these method VIs is the only practical solution: trade the 1 or 2 seconds (say) of disk activity for a few microseconds/milliseconds of execution and repeatability/consistency of calling those methods. If, however, it is loaded just before it is required, it may be that the 1 or 2 seconds for LabVIEW to load the class from disk falls in the middle of a 100 ms timeout or a timed measurement. Then it is a real problem.

Blue: Exactly. But the way the DD table works is that it is a list of VI references in the order of the hierarchy (parent at the head, child appended to the end). If we want to prevent the second scenario described above from happening, then it will have to load the parent AND the children from disk before the call. If it is an overridden parent, then it will have to load the parent, child and all the siblings.

Yes. Absolutely. It returns with a reference. To have that reference it must have loaded that VI. As it cannot load a single VI from a class, it must have loaded the entire class. If it didn't know before-hand what class it would need, it must have loaded all of them - unless it lazy loaded the class when the call was made.

This is one way that I think about what the "mechanics" might be (in the design environment in this case). There are a few ways that the blue box area could operate (they could be sequential, depth first et al.), but it's basically how I see the partitioning between loading and calling in reference to the document. From this I think you can see that I believe (I'm only guessing after all) that all classes are loaded before any calls - loaded, not instantiated. I think instantiation is just semantics to keep POOPers happy, since I could say that by placing a common or garden VI on a diagram I am loading and at the same time instantiating that VI. In this case it's just a parent and child. I think that if you override, then siblings would also be loaded. In the image, your "dynamic dispatcher" is the Index Array primitive.
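To make the speculation concrete, here's a minimal sketch (TypeScript, since we can't paste G here) of a dispatch table modelled as an ordered list of method references: parent entries at the head, and an override replacing the parent's slot. Everything below is hypothetical - a model of my guess, not of what LabVIEW actually does:

```typescript
// Hypothetical model of a dynamic-dispatch table: an ordered map of
// method references, parent entries first, overrides replacing slots.
type MethodRef = () => void;

class DispatchTable {
  // Slot name -> method reference, kept in hierarchy order.
  private slots = new Map<string, MethodRef>();

  // A child starts with an exact copy of its parent's table...
  static copyFrom(parent: DispatchTable): DispatchTable {
    const t = new DispatchTable();
    t.slots = new Map(parent.slots);
    return t;
  }

  // ...then replaces any slot it overrides (or appends a new one).
  register(name: string, ref: MethodRef): void {
    this.slots.set(name, ref);
  }

  // Dispatch: the table carried by the object on the wire decides
  // which "VI reference" actually runs.
  call(name: string): void {
    const ref = this.slots.get(name);
    if (!ref) throw new Error(`no method '${name}' in table`);
    ref();
  }
}

// Usage: parent defines "read"; child copies the table and overrides it.
const parentTable = new DispatchTable();
parentTable.register("read", () => console.log("Parent.read"));

const childTable = DispatchTable.copyFrom(parentTable);
childTable.register("read", () => console.log("Child.read"));

childTable.call("read"); // -> "Child.read"
```
-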
I've seen it a few times too. Fairly random "feature". Just need to up the PHP memory limit a bit (I bet it's set to 32MB).
-
Did they give you a CAR number for it, or was there already one registered?
-
I do sometimes use them to post to LAVA because creating a snippet is less hassle than uploading a VI. But as I use Google Chrome, they are just not worth the bother to use generally (open image in a new window, resize the browser, drag to desktop, find a diagram, drag into a VI from the desktop, restore browser). If a snippet is posted on LAVA it has to be something pretty special for me to try and get it into a VI. I normally just pass them by.
-
Extremely Long Load Time with LVOOP, SubPanels, and VI Templates
ShaunR replied to lvb's topic in Object-Oriented Programming
Indeed. But to call a child it must be loaded at some point, either prior to the actual call (pre-loaded) or when the call is executed (lazy loaded). Like you said, we don't have visibility of the mechanics. But from the quoted piece of text, it seems that the purpose of the dynamic dispatch table is to maintain a VI reference list, and the object passed determines from which VI reference the method is executed. I am speculating (in response to the OP's observations) that to get those references it must load the VIs. From that, I am suggesting that, as it doesn't know which classes "will" be called until it actually happens, the algorithm pre-loads all classes that "can" be called to populate the table. The alternative (I cannot see any others, but there may be one) is that you place an object on the wire that indexes past the end of the list, triggering some loading voodoo when the call takes place. As you say, you cannot load a part of a class, so the latter could be a long wait as it loads every method for the new class (and dependencies) - right in the middle of your code executing. The former would explain the OP's observation. The latter would be a very good reason not to use DD, especially on real-time systems.
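As a sketch of the two alternatives (hypothetical names, modelling the speculation above rather than LabVIEW's actual mechanics):

```typescript
// loadClassFromDisk stands in for reading every method VI of a class
// (plus dependencies) from disk - the slow, disk-bound step.
type LoadedClass = { name: string; methods: string[] };

async function loadClassFromDisk(name: string): Promise<LoadedClass> {
  // Simulate 1-2 s of disk activity for a whole class.
  await new Promise((r) => setTimeout(r, 1500));
  return { name, methods: ["read", "write"] };
}

// Strategy 1: pre-load every class that *can* be dispatched to,
// paying the whole disk cost once, up-front.
async function preloadAll(names: string[]): Promise<Map<string, LoadedClass>> {
  const table = new Map<string, LoadedClass>();
  for (const n of names) table.set(n, await loadClassFromDisk(n));
  return table;
}

// Strategy 2: lazy load on first call. Cheap at start-up, but the
// first dispatch to a new class stalls mid-execution - fatal if it
// lands inside a 100 ms timeout or a timed measurement.
async function dispatchLazy(
  cache: Map<string, LoadedClass>,
  cls: string,
  method: string,
): Promise<void> {
  if (!cache.has(cls)) cache.set(cls, await loadClassFromDisk(cls)); // stall here
  console.log(`${cls}.${method} called`);
}
```
-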
Web UI Builder - feedback wanted
ShaunR replied to Mr Mike's topic in Remote Control, Monitoring and the Internet
Yup. I think also that the explosion of Ajax, enabling "desktop style" dynamic web pages that run in the browser's native engine, has put the nail in the coffin of external run-times like Silverlight. Things like Flash, Java, etc. are really now only used as presentation tools rather than for dynamic page content. Try and find a site now that doesn't have jQuery, for example. Sure, it's not graphical. But JavaScript developers are the new C developers; 10-a-penny and cheap. There are many data streaming sites now using JS push servers for things like real-time stock quotes. In terms of graphical presentation (like graphs etc.), I'm looking at Google Maps. Not so much using it as-is, but the technology: JavaScript-based with bi-directional feedback allowing zooming, panning, and overlays. All that a web page requires is the DIV tag and a link to the JavaScript, and the browser does the rest. Now, if there were a tool (like the web-builder) that created JavaScript (like the embedded toolkits create C), I would be getting very excited. I could then use it to create front ends for my LabVIEW websockets. Would have to be free, of course.
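For example, a page-side sketch of the "DIV tag plus a link to the javascript" idea, using the Google Maps JavaScript API (this assumes the Maps script tag is already loaded on the page and a div with id "map" exists):

```typescript
// Minimal sketch: the page contributes nothing but a DIV; the library
// supplies the zooming, panning and overlays.
declare const google: any; // provided at runtime by the Maps <script> tag

const mapDiv = document.getElementById("map") as HTMLElement;

const map = new google.maps.Map(mapDiv, {
  center: { lat: 51.5074, lng: -0.1278 }, // London
  zoom: 8,
});
```
-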
I'm pretty much in line with Cat here. I think "new" business is driven by latest features, gadgets and spin, but that's due to competition from other products. I would tentatively wager that the vast majority of LabVIEW income is from SSP which, as its name suggests, is for value-added Service and Support - a recurring income stream with almost no additional overheads or cost. I think most "real-world" LabVIEW programmers (if they have any sense) only upgrade to a later version if there is a new feature that achieves common requirements that were impossible, or at least very difficult, in the older versions (very rare, but I would put the event structure in this category), or a customer requires it. Version changes are a very high-risk proposition and I think most opt for "better the devil you know". But that doesn't stop us paying our SSP for the exemplary service NI provides and to be able to pick up the phone and talk to real engineers who have real product knowledge. I, for one, pay my SSP for this, not to get the new versions.

If I were to rate the importance (to me) of new features and reliability on a scale of 1-10, new features would be 2-3 and reliability would be 8-10. New features don't cost me time, money or reputation; unreliable software does. Nor do I see them as mutually exclusive. Get the software reliable, then add features. Adding new features to unreliable software only makes unreliable new features.

In times gone by, I used to be more cavalier about upgrades because a new version would come out with very few known issues of any consequence, and if something was found, a patch or new version would be very quick to follow - certainly within a project time-frame. Now, however, the cycle is very slow and I've yet to see an SP2 fixing my 2009 version before 2010 was released (which probably hasn't fixed the bugs I'm concerned with but, as sure as eggs are eggs, will introduce new ones).
-
Extremely Long Load Time with LVOOP, SubPanels, and VI Templates
ShaunR replied to lvb's topic in Object-Oriented Programming
"Each object traveling on the wire carries a pointer to its class information (refer to the "What is the memory layout of a class?" section earlier in this document). That class information includes the "dynamic dispatch table," which is a table of VI references. Each class copies its parent’s table exactly. It then replaces the VI reference for any parent function with its own overriding VIs" I'm thinking in the specific case of overrides. From what little I understand of dynamic dispatch in general cases. It seems no different from a polymorphic VI i.e it chooses the implementation at edit time). But for overrides (which Is what I perceive to enable run-time polymorphism) The statement above seems to suggest that the class VIs are already in memory (For the table to have the VI reference it must, I'm assuming, have already loaded the VI's in the class with an equivalent of "VI Open reference" and resolved any dependencies). If it is, as I think you are suggesting, that it lazy loads from disk with a sort of "load on first call" then that would (whilst maybe convenient for plug-ins) have a dire performance impact in many cases. Maybe the "lazy-load" is the fall-back mechanism if other methods are unable to identify the call chain to populate the table (I think the project/class manager is doing a little bit more than we might think). I suspect (but don't categorically know) that all the class VIs are actually loaded in the case of overrides and, unlike in the general case where it is known before-hand what VI's will be used;;all VIs that override are loaded just in case it may be used at run-time. I suspect also that you can't just load a part of a class (i.e just the VI's used in overriding) so that the entire class must be loaded. This view would seem to fit with what the OP is seeing hence my question about the use of overrides. -
Extremely Long Load Time with LVOOP, SubPanels, and VI Templates
ShaunR replied to lvb's topic in Object-Oriented Programming
Having to package in a new format in a later version just to get normal performance is insane. Apart from that, 2010 is 3x slower at loading sub-panels than 2009. I would expect a whole application of thousands of VIs to take less than 45-60 seconds to load (unless you are loading across a network), and that's if they are not in memory to begin with.

Do your accessors override the base class by any chance? I believe dynamic dispatch (required for overrides) might mean that, depending on how you are calling the methods, all classes may have to be loaded even if they are not actually used, since it is not known at compile time which child will be used to override the base class (dynamic dispatch is just a VI selector after all, and I doubt it loads from disk when you first call a particular method). Perhaps AQ can shed more light on that, though. There is some interesting reading here under "The Design of Class Methods".

It could also be something to do with the fact you are using VITs and creating a new instance which needs to be cloned (can't think of a good reason why, though). I would try what Daklu was suggesting and convert the VITs to VIs (re-entrant or not), just as a test, to see if it improves. In theory, if the application has loaded the classes at start-up then they should already be in memory, so there should be no reason to re-load from disk and the only overhead should be the VIT itself (which should be quick). -
Sounds similar to an issue I had once. The provider changed the server I was hosted on and required me to update my NS records to point to the new server. I added all the new ones but missed deleting one of the old ones (although I could swear blind I got them all). The result was intermittent DNS lookups, because there was a 1 in 3 chance the lookup was done through the old one.
-
The background colour property node accepts RGB and alpha values between 0 and 1 (i.e. scaled so that one 8-bit step is 1/255).
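A trivial sketch of that scaling (hypothetical helper name):

```typescript
// Hypothetical helper: convert 8-bit RGBA components (0-255) into the
// normalised 0-1 range that the property expects.
function normalizeRGBA(r: number, g: number, b: number, a: number): number[] {
  return [r, g, b, a].map((c) => c / 255);
}

console.log(normalizeRGBA(255, 128, 0, 255)); // [1, 0.50196..., 0, 1]
```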