
Posts posted by Yair

  1. I don't disagree that there was some editing, but there is a lot you can do to match those pictures with just standard LabVIEW.


    Yes. Since it looks like this is directly copied from LV, it's no surprise that you can create something very similar in LV and it's quite likely that their image started as code in LV. My point was simply that it looks like they didn't use LV itself, but rather created their own visuals, where they did whatever they wanted to, because their only constraint is for the visual to look good on screen for a second. Do you really not see all the differences which make it clear this is not LV (things like case dropdowns on the left side, tunnels which are too big, diagonal wires and several others)?

  2. It looks like they took LabVIEW code and did a zoom desktop to make the code look bigger for the silver screen.


    No. Again, it looks like they took some LV code and then created a completely custom image which looks similar, but isn't actually LV. I agree that the signal split and merge do look exactly the same, but there are many things which are clearly not LV, the IDE not being the least. The "Z transform VI" actually appears to be their version of Build Waveform.

  3. Sounds interesting, but I don't think the functionality is clear enough (things like "Is this for live data? Historical data? What does the web app look like? What does the phone app look like?", etc.). I would suggest creating some videos and examples and maybe also adding a public test account which people can connect to (at least for reading, if you don't want someone flooding your server with useless data) to see how the system works.

  4. The OS there also seems fake (although I'm not familiar with all of the Linux desktops, so it could be one of those) and it's very blurry, but I think the title of the window is DataFlow 5.3.4 (the name is almost entirely from context and the digits could be pretty much anything).


    I wouldn't be surprised if you saw something that looked very similar to Merge Signals. As far as I can tell, that graphic is directly inspired by LV.

  5. The new code is also cleaner than the original one...  :cool:


    Yeah, that tends to happen.


    For next time, current OSs perform automatic backups of their own. On Windows, you can access previous versions of a file by right clicking it in Explorer. You can probably also use this to restore your LabVIEW.ini file.

  6. Yeah, sure, it is useful and fast code. But I think the problem is that this "easy to use" feature is not always compatible with complex systems.


    So you're basically saying "complexity is complex". LV does a fairly good job of allowing you to write parallel code which at least doesn't crash and has all the safety mutexes. It doesn't guarantee that every piece of code you write will function correctly. For that, you have to work within the rules of the system and sometimes those rules have odd corners. Sometimes this is by design ("that's the best we can do"), sometimes by accident ("oops, we didn't think of that" or "there are two separate rule systems and now that they interact they produce a weird result").



    I'm aware, but LabVIEW doesn't allow any other possibility if I use OOP & Dynamic Dispatch. So I consider both incompatible.


    I'm not sure it was strictly necessary to forbid preallocated ("preac") clones for DD. Maybe it would have been possible to say "each class will get its own instance of the clone at run time". Maybe NI didn't do it because they considered it too confusing, or because it would have been too much work, or because they didn't have time before release, or because they thought the need for it would be minimal, or because there are bugs in there which I didn't think of. The bottom line is that they made their choice, and the result is that you can't correctly use static preallocated VIs which hold state in DD VIs which don't run over a long time. In that sense, yes, they are incompatible.
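    The hypothetical "each class will get its own instance of the clone" rule can be sketched in a text language. This is purely illustrative (all names are invented, and LabVIEW's actual clone allocation is more involved): one shared stateful helper means subclasses clobber each other's state, while a per-class lookup keeps them separate.

```python
# Invented names; toy analogue of one shared clone vs. a per-class clone.
class Helper:
    def __init__(self):
        self.last_caller = None  # the "state" held by the clone

shared_clone = Helper()      # one clone for every class: the problem case

per_class_clones = {}        # the hypothetical "each class gets its own" rule
def clone_for(cls):
    if cls not in per_class_clones:
        per_class_clones[cls] = Helper()
    return per_class_clones[cls]

class Base: pass
class Child(Base): pass

clone_for(Base).last_caller = "Base"
clone_for(Child).last_caller = "Child"
print(clone_for(Base).last_caller)   # 'Base': per-class state survives

shared_clone.last_caller = "Base"
shared_clone.last_caller = "Child"
print(shared_clone.last_caller)      # 'Child': Base's state was clobbered
```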

  7. Citation given. Can we please move on now?


    Sure. All that tells me is that the definition of reentrant in LV is different from what's used in that context, so it's even less relevant than I thought. (And to be clear, I don't think it's particularly relevant, because what's important is functionality, not original intent or terminology. You're the one who insists that "reentrancy is for concurrency", a word which that article mentions only once, to demonstrate a race condition with global data, which doesn't apply to local state in any way.)

  8. reentrancy is made for parallel execution not storage.


    Again, citation needed. If you're going to claim a certain feature was designed for purpose X, show it. Like I said, even if you do show it, that doesn't negate the usefulness of storing state.


    There are many, many cases where state is useful in a reentrant VI. For example, a "value changed?" VI is useful: you feed it a value and if it changed since the last call, it outputs true. Any kind of actor requires state to be stored somewhere, and actors are often reentrant, because you can have multiple actors of the same type. For those applications you want state, which means preallocation. That's my primary use case for reentrancy. The VIs may execute in parallel (actors/daemons certainly tend to, since they're long running), but they also need state.


    If you do this, you do need to be aware of how preallocated clones are actually allocated, because otherwise, you can get shared instances where you expected to have separate ones, as I mentioned before.
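    The "value changed?" idea translates to text languages too. A rough Python analogue (the class is my own invention, standing in for a preallocated-clone VI): each instance keeps its own last value, so two "clones" track state independently.

```python
# Sketch: a stateful "value changed?" function, one instance per clone.
class ValueChanged:
    _UNSET = object()  # sentinel so the first call always reports a change

    def __init__(self):
        self._last = self._UNSET

    def __call__(self, value):
        changed = value != self._last
        self._last = value
        return changed

# Two independent "clones": updating one does not disturb the other.
a = ValueChanged()
b = ValueChanged()
print(a(1))  # True  (first call)
print(a(1))  # False (unchanged)
print(b(1))  # True  (b has its own state)
```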

  9. Sure. Obtain returns the actual queue reference rather than a copy.


    I don't care so much about the separate references (I assume that while you have no desire to leak four bytes, that's not the major concern), but I do want LV to nicely and automatically manage the memory. I expect the most likely action NI would take in this area is to relax the "owning hierarchy" rule and add internal refcounting, so that resources are released not when their creating hierarchy goes idle, but when the last hierarchy using them goes idle. I'm not sure if this will or will not help with your dissatisfaction here. It would mean you don't have to worry about it, but it probably means there are more chances for stuff to stay allocated, because LV will now require more things to happen before it destroys them automatically.
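    The "released when the last hierarchy goes idle" idea is ordinary refcounting. A minimal sketch (class and method names are invented for illustration):

```python
# Hand-rolled refcounting: the resource survives until the last user releases it.
class RefCountedQueue:
    def __init__(self):
        self._items = []
        self._refs = 0
        self.destroyed = False

    def obtain(self):
        self._refs += 1
        return self

    def release(self):
        self._refs -= 1
        if self._refs == 0:        # last "hierarchy" went idle
            self._items.clear()
            self.destroyed = True

q = RefCountedQueue()
h1 = q.obtain()    # hierarchy 1 obtains a reference
h2 = q.obtain()    # hierarchy 2 obtains a reference
h1.release()
print(q.destroyed)  # False: hierarchy 2 still holds a reference
h2.release()
print(q.destroyed)  # True: the last user released it
```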

  10. Sure. Obtain returns the actual queue reference rather than a copy.


    Like I said, I rarely use queues by name, but that will certainly break the existing behavior where LV can keep the queue alive in different hierarchies because it has separate references. Doesn't sound like progress to me. If all you care about is cleaning up, why not just use the force destroy input?

  11.  Re-entrancy is meant for non-blocking parallel execution.


    a) Citation needed.

    b) I don't care even if it is. In my code I relatively rarely need parallel execution and more commonly want copies which will maintain state (some dynamic and some static). Classic preallocate reentrancy does that (with some exceptions). Both use cases are valid.



    It is exactly the same as malloc for memory leaks. Obtaining a queue reference creates a copy of the ref so the programmer is forced to reference count and twin each obtain with a release.


    That's if you do obtain by name. I usually don't. Even so, the API does give you the option of force destroying the queue. In any case, you know that LV can't magically release the memory because it has no way of knowing when you're done with it unless you tell it. The exception is when the creating hierarchy goes idle, but people usually only notice that when it causes their code not to work because it did release something they think is still active.


    If you have a suggestion for how LV can otherwise manage this memory, I'd be interested in hearing it.

  12. No one is forcing you to use all these newfangled features like queues or VI refnums. You could stick to simple globals, or you could use LV 3, if you manage to load it :throwpc: .  I know I use shared reentrancy relatively rarely (mainly because if I use reentrancy, I usually want state).


    I'm not saying the issue I mentioned is the same. I meant that shared reentrancy is not unique or first in requiring you to understand certain details about the way LV code functions under specific conditions. I'm sure that like me, you learned some of these details by writing and running code which then failed because "oh, I didn't realize that X".


    Is it possible to leak memory? Sure, create a queue, push a bunch of data into it and then ignore it and keep the hierarchy running. I don't see any way having queues would not allow you to create memory leaks of this type. LV does have its rules for when it will release the memory, so I wouldn't consider it a real memory leak (it's not like calling malloc in C and then ignoring the pointer). Whether LV actually releases the memory or hangs onto it like someone with abandonment issues is another story.
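    The kind of "leak" described above looks the same in any language with queues. A trivial Python sketch: nothing is lost in the malloc sense (the reference is still live, so the runtime can't release it), but the queue grows without bound while the producer runs.

```python
import queue

# A producer with no matching consumer: every item stays resident.
q = queue.Queue()
for i in range(100_000):
    q.put(i)   # pushed and then ignored, just like the scenario above

print(q.qsize())   # 100000 items still held by the queue
```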

  13. There used to be a time when LabVIEW programmers didn't have to worry about memory leaks and thread safety; that was an "other languages" affliction. [queue wavy flashback sequence]


    I'm not sure what you think changed. With LV you still don't have to worry about memory leaks (unless there's a bug in LV or you allocate something and don't release it, but that's nothing new), nor do you have to worry about thread safety any more than you had to before. LV still guarantees safe reading and writing of data, but if you create race conditions, that's a bug in your code. Surely you're not suggesting that race conditions didn't exist before the addition of this reentrancy mode around 8.5...


    And yes, it wasn't that hard to get similar issues with reentrancy in the past (for instance, by calling a reentrant VI in a loop, resulting in all iterations calling the same instance, because preallocation goes by diagram location).
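    The "same instance for all iterations" pitfall has a simple text-language analogue (my own toy example, not LabVIEW semantics): one stateful object allocated outside the loop is shared by every iteration, whereas a fresh instance per call keeps them independent.

```python
# One counter object plays the role of the shared preallocated clone.
def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

shared = make_counter()            # one "clone" for the whole loop
results = [shared() for _ in range(3)]
print(results)                     # [1, 2, 3]: iterations share state

separate = [make_counter()() for _ in range(3)]
print(separate)                    # [1, 1, 1]: a fresh instance per call
```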

  14. For the second call of the VI to generate an event that essentially originates from actions taken on the VI from the first call (or perhaps worse, an event from the first call being handled in the second call) seems to violate some form of encapsulation, but I can't quite pin down the right terminology.


    I'm not sure whether I agree or not, but as PiDi pointed out, the behavior is at least consistent - the registration for a static event happens when the VI enters run mode (not when it's actually running). From that point on it will enqueue all events until it goes idle. This explains why it remembers the event from the last run - the VI is still in run mode. I personally also had this issue with certain users who would double click an OK button (users. :angry: Am I right?). They would sometimes be fast enough for LV to register this as two value change events and the next time around the dialog would be dismissed immediately.


    Anyway, another way to solve this is with dynamic registration - register when the VI starts and unregister when it ends. This is more of a PITA, but it does work.
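    The static-vs-dynamic registration difference can be sketched abstractly (the event-source class below is invented for illustration): a statically registered handler queues events for the whole "run mode" lifetime, while dynamic registration only listens between register and unregister, so stale events from a previous run can't leak in.

```python
# Toy event source: listeners are lists that fired events get appended to.
class EventSource:
    def __init__(self):
        self._listeners = []

    def register(self, q):
        self._listeners.append(q)

    def unregister(self, q):
        self._listeners.remove(q)

    def fire(self, event):
        for q in self._listeners:
            q.append(event)

src = EventSource()
pending = []
src.register(pending)      # "dynamic registration" at VI start
src.fire("click 1")
src.fire("click 2")        # the user double-clicks before we finish
src.unregister(pending)    # unregister at VI end
src.fire("click 3")        # fired while unregistered: never queued
print(pending)             # ['click 1', 'click 2']
```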

  15. I prefer the small ones I can look at for a few seconds and actually understand what's going on without having to read the paragraph of explanations. But I guess the world of programming languages has long ago exhausted the pool of simple and original WTFs.


    You could try these - http://forums.ni.com/t5/BreakPoint/Rube-Goldberg-Code/m-p/399999#U399999


    They're not intended to be funny, but to be fair, most of the TDWTF's snippets aren't funny either, and the articles often tend to have a lot of hot air blown into them to make a full article out of a situation which could be described in four sentences, so you're not any worse off.

  16. I'm still waiting for the first LabVIEW codesod


    You mean like this - http://thedailywtf.com/articles/Labview-Spaghetti ?


    You can read the comments there to see why I really appreciate the fact that the online LV communities are populated by adults. At some point in that comment thread I mentioned that I use LV because it's my preference and one of the normal people there appeared genuinely surprised that someone would actually say something like that instead of arguing like children.



    Here are some other mentions:



  17. ... because although I can imagine NI will have some products at some point that follow this model, I do not expect LabVIEW to go that way.


    NI already has this - the LV web UI builder is one example. FPGA compile servers are another (although that's essentially just processing). I think this model has its place, particularly in providing some muscle to web apps, which I assume is part of the intention of Azure.


    I certainly agree that I wouldn't want to see LV moving in that direction, and I don't think it's very likely either, but I would like it if LV did adopt one feature from this model: frequent updates. It would be nice if, instead of having to install incompatible LV versions once or twice a year to get features and bugfixes, I could just apply auto-updating patches released at much shorter intervals, and so would only need to actually install a new version every 3-5 years or so.


  18. Microsoft has launched a new website. At http://how-old.net/# , you can upload a photo and the site tells you how old the person in the image is. I've been playing with various photos. The thing is extremely accurate as far as I can tell. I'm impressed with how far facial recognition has come.


    They have a link there now describing the background, which may not have been there originally. Essentially, this was a demo meant to show the face-rec functions in Azure, and I'm guessing the age part was just a nice gimmick they added which uses machine learning, not something actually designed to be super accurate. The basic takeaway is that you can use the APIs yourself for doing similar things (although I never looked at the technical side of Azure, so I don't know what the requirements for it are and if it's callable from LV).

  19. I just realized that the panel's conversion method that I mentioned is actually more useful than I thought. You can use it once on first call to calc the window border (convert the top left point to the screen coords and subtract the window origin from it) and then cache that (LV has global data methods for caching) and use it for the BD window, because it should be the same numbers.

    That should be pure LV and should ideally work every time.

    P.S. You can probably do this calc on a temporary VI off screen or on any open VI, as long as it shows the window parts.
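    The border calculation itself is just a subtraction. A back-of-the-envelope sketch (all coordinate values are made up for illustration): convert the panel's top-left point to screen coordinates, subtract the window origin, and the difference is the border width and title-bar height.

```python
# panel_topleft_screen: panel origin after conversion to screen coordinates
# window_origin: the window's position on screen
def window_border(panel_topleft_screen, window_origin):
    bx = panel_topleft_screen[0] - window_origin[0]   # left border width
    by = panel_topleft_screen[1] - window_origin[1]   # title bar height
    return bx, by

# e.g. panel content starts at (108, 156) on screen, window sits at (100, 120)
print(window_border((108, 156), (100, 120)))   # (8, 36)
```

Cache that pair once, and it should apply to the BD window too, since the window decorations are the same.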

  20. I think the solution depends on exactly when you want to do this.


    The obvious answer is to get the BD window properties and work from there, but this has the equally obvious problem of figuring out the size of the top section of the window. This might be workable if you can add a step somewhere in the process where you do this once (and would need to redo it if something changed).


    A less obvious answer is that the panel has a method for this conversion. The BD doesn't have a parallel method, but since the top section appears to be the same size on both, it should be possible to place the FP window in the same position as the BD window, use that method, and then return the FP window to its original spot. Probably as ugly as it sounds, but I expect it should work. Again, it probably depends on when you want to do this. Maybe this won't be as ugly if you create a temporary VI to do it and move it to the bottom of the window stack.

