
Aristos Queue

About Aristos Queue

  • Rank
    LV R&D: I write text code so you don't have to.

Profile Information

  • Location
    Austin, TX

LabVIEW Information

  • Version
    LabVIEW 2018

  1. Does anyone have a picture showing how, given a VI, I can get ... ... the size of the monitor hosting that VI's panel... ... minus the taskbar size (if any) on that monitor... ... in a platform (OS) independent manner? I had a clean-but-annoying way to do this until I discovered that Windows 10 has a mode that lets the taskbar be replicated onto different monitors (and, worse, those taskbars aren't necessarily the same size). The only trick I can come up with is to maximize the panel momentarily and grab the panel size at that moment, but that creates flicker in the UI. Yes, I am aware of these two properties: they do not suffice. The first one gives all the monitor sizes without accounting for the taskbar. The second one only reports the taskbar on the primary monitor. If I could be sure the taskbar was only ever on the primary monitor, I could figure it out from this info, but discovering that taskbars on multiple monitors are an option throws a wrench in that plan.
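Not an answer to the platform-independence problem, but the geometry itself is simple once a per-monitor taskbar rectangle is known. A minimal Python sketch (illustration only; `Rect` and `work_area` are hypothetical helpers, not any LabVIEW or OS API) of trimming a docked taskbar from a monitor's bounds:

```python
# Illustration only: compute a monitor's usable work area given its full
# bounds and an optional taskbar rectangle docked to one of its edges.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    left: int
    top: int
    right: int
    bottom: int

def work_area(monitor: Rect, taskbar: Optional[Rect]) -> Rect:
    """Shrink the monitor rect by a taskbar docked to one of its edges."""
    if taskbar is None:
        return monitor
    r = Rect(monitor.left, monitor.top, monitor.right, monitor.bottom)
    if taskbar.top == monitor.top and taskbar.bottom == monitor.bottom:
        # Taskbar spans the full height: docked left or right.
        if taskbar.left == monitor.left:
            r.left = taskbar.right
        else:
            r.right = taskbar.left
    else:
        # Otherwise it is docked top or bottom.
        if taskbar.top == monitor.top:
            r.top = taskbar.bottom
        else:
            r.bottom = taskbar.top
    return r

# A 1920x1080 monitor with a 40 px taskbar docked at the bottom:
m = Rect(0, 0, 1920, 1080)
tb = Rect(0, 1040, 1920, 1080)
print(work_area(m, tb))   # Rect(left=0, top=0, right=1920, bottom=1040)
```

On Windows specifically, the Win32 call `GetMonitorInfo` reports a per-monitor `rcWork` that already excludes that monitor's taskbar, which would handle the replicated-taskbar case, but that of course isn't OS independent.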
  2. Nifty trick! I'm impressed with the ingenuity. But two minor caveats... Warning 1: This XControl allows local variables and Value property nodes. LabVIEW prevents the built-in latching Boolean controls from having either a local variable of the latched Boolean (compile-time error) or a Value property node (run-time error). Those things would mess with latching and will interfere with this code executing correctly. You should not use either of them with a latching XControl. Warning 2: This XControl uses an unpublished private scripting method that is known to have problems with thread synchronization. It was created to handle some very specific editor operations. Because of inplaceness optimizations that make multiple wires share the same memory, it is not a safe thing to use generally, which is why it has never been made public. I think the use in this case is safe because there isn't any optimization that I know of that will make LabVIEW avoid the data copy if a downstream node modifies the value. For example: the above image shows the Buffer Allocation dots, and we can see that the Not copies the Bool, even when debugging is turned off. I will ask my team if there are any scenarios where inplaceness will elide that dot, but I don't think there can be. So it seems to be OK here, but I would not advise freely using that method in code generally.
  3. > If a second loop is needed, it could probably be handled through composition instead of inheritance. Agree in general. Composition can get complicated if that second loop needs to be a Timed Loop -- if the timed loop's operation sometimes needs to be reconfigured while it is running, messages may be needed that extend Actor's API, and avoiding that is just more complicated than I think is worth doing. It may be a technical conflict of SRP, but practicality does temper the principles sometimes. There may be additional cases.
  4. Actors in Actor Framework inherit from each other, and if the parent defines a message to be handled, then the children can handle that message. Put another way -- I built an entire framework to handle that exact problem. It doesn't use the Event Structure. The only solution I came up with that I liked that used the Event Structure was an entirely new conception of inheritance, a new fundamental type of VI, and a new editor. It not only solves the problem of event inheritance but also front panel inheritance. I have it all mocked up in PowerPoint to build someday (when LV NXG is mature enough).
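The inheritance scheme described can be sketched in Python (hypothetical class names; a toy model, not the Actor Framework API): a message defined against the parent is handled by any child, and a child may extend the parent's handler.

```python
# Sketch: message handling inherited via subclassing, so a child actor
# handles any message its parent defines.
class Message:
    def do(self, actor):
        # Each concrete message knows which handler to invoke.
        raise NotImplementedError

class Stop(Message):
    def do(self, actor):
        return actor.handle_stop()

class ParentActor:
    def handle_stop(self):
        return "parent cleanup"

class ChildActor(ParentActor):
    # Inherits handle_stop automatically; here it also extends it.
    def handle_stop(self):
        return "child cleanup, then " + super().handle_stop()

msg = Stop()
print(msg.do(ChildActor()))   # child cleanup, then parent cleanup
```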
  5. This is not really true. I mean, it's kind of true, insofar as LV executes assembly-level instructions, not byte code. But it is also misleading. LabVIEW doesn't ever get to a deep call stack. Suppose you have one program where Alpha VI calls Beta VI calls Gamma VI calls Delta VI and a second program which is just Omega VI. Now you run both and record the deepest call stack that any thread other than the UI thread ever achieves. What you'll find is that both programs have the same maximum stack depth. That's because all VIs are compiled into separate "chunks" of code. When a VI starts running, the address of any chunk that doesn't need upstream inputs is put into the execution queue. Then the execution threads start dequeuing and running each chunk. When a thread finishes a chunk, part of that execution will decrement the "fire count" of downstream chunks. When one of those downstream chunks' fire count hits zero, it gets enqueued. The call stack is never deeper than is needed to do "dequeue, call the dequeued address"... about depth 10 (there are some start-up functions at the entry point of every exec thread).
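The scheme described above can be sketched in a few lines of Python (a toy model with hypothetical names, not LabVIEW's actual scheduler): each "chunk" carries a fire count, a worker loop dequeues ready chunks, and finishing a chunk decrements and possibly enqueues its downstream chunks, so the call stack never grows with the depth of the VI hierarchy.

```python
# Toy model of fire-count dataflow scheduling: the worker loop is always
# just "dequeue, call", no matter how deep the VI call hierarchy is.
from collections import deque

class Chunk:
    def __init__(self, name, work):
        self.name = name
        self.work = work
        self.downstream = []      # chunks that consume this chunk's output
        self.fire_count = 0       # inputs still outstanding

def wire(src, dst):
    src.downstream.append(dst)
    dst.fire_count += 1

def run(chunks):
    ready = deque(c for c in chunks if c.fire_count == 0)
    order = []
    while ready:                  # the "execution thread" loop
        c = ready.popleft()
        c.work()
        order.append(c.name)
        for d in c.downstream:    # finishing a chunk fires downstream
            d.fire_count -= 1
            if d.fire_count == 0:
                ready.append(d)
    return order

a, b, g, d = (Chunk(n, lambda: None) for n in "alpha beta gamma delta".split())
wire(a, b); wire(b, g); wire(g, d)
print(run([a, b, g, d]))   # ['alpha', 'beta', 'gamma', 'delta']
```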
  6. A programming language exists in any Turing complete environment. Magic: The Gathering has now published enough cards to become Turing complete. You can watch such a computer be executed by a well-formed program. People might not like programming in any given language. That's fine -- every language has its tradeoffs, and the ones we've chosen for G might not be a given person's cup of tea. But to claim G isn't a language is factually false. G has the facility to express all known models of computation. QED.
  7. This might vary by operating system, but I think you're correct. I have only once had reason to drill that deep into the draw manager layer of LabVIEW's C++ code. But the whole point of deferring updates is to avoid flicker, so it would make sense that LV would aggregate into a single rectangle and render that as a single block... if it tries to do all the small rectangles, that's probably (my educated guess) the same flicker that would've occurred if defer never happened.
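The aggregation being guessed at can be shown in a few lines of Python (illustration only; nothing here is LabVIEW's draw manager): many small dirty rectangles coalesce into one bounding rectangle that is repainted as a single block.

```python
# Sketch: coalescing many "dirty" rectangles into one bounding rect,
# the kind of aggregation a deferred-update pass might do so the screen
# is repainted in one blit instead of many flickering small ones.
def bounding_rect(rects):
    """rects: iterable of (left, top, right, bottom) tuples."""
    lefts, tops, rights, bottoms = zip(*rects)
    return (min(lefts), min(tops), max(rights), max(bottoms))

dirty = [(10, 10, 50, 40), (60, 20, 120, 90), (0, 70, 30, 100)]
print(bounding_rect(dirty))   # (0, 10, 120, 100)
```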
  8. A call to a static dispatch VI will always invoke that exact VI. A call to a dynamic dispatch VI may invoke that VI or any VI of the same name of a descendant class. Exactly which VI will be called is decided at call time based on the type of the object that is on the wire going to the dynamic dispatch input terminal. A dynamic dispatch VI is equivalent to a virtual function in C++, C#, or Java (and other text languages).
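The same dispatch rule in Python (hypothetical class names; any OO language with virtual methods behaves the same way): the method that runs is chosen at call time by the runtime type on the "wire", not by the declared type.

```python
# Sketch: dynamic dispatch picks the implementation from the object's
# actual class at call time, like a dynamic dispatch VI or a C++/C#/Java
# virtual function.
class Parent:
    def greet(self):              # "dynamic dispatch": children may override
        return "Parent"

class Child(Parent):
    def greet(self):
        return "Child"

def call_greet(obj: Parent) -> str:
    # Which greet runs depends on obj's runtime type, not the annotation.
    return obj.greet()

print(call_greet(Parent()), call_greet(Child()))   # Parent Child
```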
  9. For anyone else interested in how far this bug goes, I got this from one of my coworkers: From my research it affects all processors based on the Zen 2 architecture (AMD processors that started being released in July of last year). AMD claimed that they dropped support in Zen 1 for fma4 instruction set, and the illegal instruction causing the crash is part of that set. The first series Ryzen processors sound like they may still work, but I didn't have access to any to verify one way or the other.
  10. The MKL problem is in the wild? I thought that was something that was only affecting LV 2020 (now in beta) because we were updating to the latest MKL. The bug isn't on my team's plate... I just sit near the people who are handwringing a lot about it. If that's in the wild affecting already-shipping MKL versions, then, yeah, that could be it. I still don't know how the node is rolling back to its last known good resolve path... I can't find that path stored anywhere... but if we assume it is binary encoded *somewhere* in the node's saved attributes, then this makes plausible sense. The workaround is to (a) get a new CPU or (b) wait until LV issues a new version where we do whatever we are doing to avoid calling certain CPU instructions (I'm unclear what the planned solution looks like).
  11. Weirdly, yes, it does make sense. This is what would happen if the DLL was corrupt and couldn't load. The DLL is found on disk, but it could not load. I don't know exactly how the node is getting the build machine's path, but that would be the path it had the last time it resolved successfully, so I'm betting it is encoded in there somewhere and being used as a fallback location. Has someone been rewriting your DLL on disk? Can you try copying that DLL from a clean installation?
  12. Is this desktop LV or target LV? What is the actual path on disk to that Mean.vi?
  13. Is there anything you can think of that has been done special to this machine? Did you get someone from NI to send you a custom patch that has never been generally released? Like, I'm seriously grasping at straws here... we grep'd the binary of Mean.vi -- that path does not appear anywhere in the shipping copy. So somehow you have a copy of Mean.vi that is not the one that is installed by an installer. I would say "it's a fluke" except for that earlier post where someone else had this happen to their LV 2017! Now I have a real mystery.
  14. @NeilPate : please run the attached VI on your misbehaving Mean.vi and tell me what path it shows. Read path to analysis lib.vi
