Everything posted by X___

  1. I am not sure whether this has been discussed before, but I stumbled upon a simple (alternative) trick to debug parallelized loops that may be of interest to others. Of course there is always the option to edit the parallelism settings and check the "Allow Debugging" box. The "P" terminal is then slightly grayed out and parallelism is turned off until the user changes the settings back to not "Allow Debugging". I find it easy to forget, or even fail to notice, that a parallelized loop is in "Debug" mode, because the cue that parallelism is turned off is so subtle. Generally, it is the degraded performance of the code that triggers a question, and then one is off on a rabbit chase for the culprit... A potentially useful trick is to do what is illustrated below: the code inside the loop is wrapped in a disable structure (set to "Enabled") and the degree of parallelism is set to 1. Note that it is not necessary to edit the Loop Parallelism settings! I find the visual cues much easier to notice (and the code easy to revert to full parallelism). Note that it is possible to use a different P value, but in that case you will miss the majority of loop instances, and which one actually gets debugged appears to be somewhat random (I have some idea about which ones are picked, but this is pure guesswork). This might fit your needs. Once you are done debugging, it is pretty obvious that the parallel loop is not truly "parallel" anymore, and therefore it is hard to forget to revert to a fully parallelized state (a rough Python analogy of the same idea is sketched after this list). HTH.
  2. Another fun bug in this function is that it arbitrarily cuts off generation at 10,000: Why not? It always spices up debugging... (the revision history shows: rev. 2 Sun, Sep 25, 1994 12:46:16 PM greggf)
  3. A somewhat related bug in this function is that for certain parameters, it will fail to compute a random variable and, instead of returning an error, will return "NaN" for the values and an error code of 0. So not only do you have to use a custom error-handling snippet (there is no error cluster output, as those things date back to Colonel Kodosky times), but you also have to check for NaN outputs. In any case, a NaN output is clearly not a valid one, because it is due to a failure to find a solution to CDF(x) = cte in this subVI, where the unconnected error output of the subVI is clearly non-zero when no solution is found (no positive/negative bracket for the root); a minimal Python sketch of this failure mode appears after this list. Since I am not planning to go past LabVIEW 2021, I can fix that in my repo, but this is a remnant of olden times which will keep biting the likes of SpaceX, Blue Origin and others, who are planning to send humans to Mars (hopefully this nonsense will stop sooner rather than later).
  4. The Gamma Random generator has two parameter inputs in addition to the number of samples: b and c. According to the help, which I paste here for the record, b is the scale parameter and c the shape parameter of the Gamma distribution. However, if you use them that way, the generated RVs are bogus, because it turns out that b is the shape parameter and c is the scale parameter, as defined for instance on Wikipedia. This can be easily verified by generating a large number of RVs and comparing their normalized histogram with the functional form above (a quick Python cross-check is sketched after this list). Tested in LabVIEW 2021 SP1f3 on Windows 10 64-bit.
  5. Interesting suggestion. Any idea how the multicore option works (Get/Set Number of Threads)? When I test this in a standalone VI, I get a value of 8 for "Transform" (which is what I am interested in testing) or "Linear Algebra" (I am using an 8-core/16-thread laptop). When I include this as part of a larger project (launching a number of parallel tasks, some of which use parallel loops), I end up with a "1" (in both tasks). I am not sure I should necessarily expect multithreading to be particularly useful in my case (rather small array sizes), but there isn't any explanation that I know of as to how this is supposed to work or be understood...
  6. Actually that makes more sense: get actually debuggable Python code and wrap it in a Python Node call. Un-commented spaghetti code is pretty much useless... The flat (single-VI) LabVIEW-generated code demo doesn't say anything about the ability of this type of AI assistant to generate structured code. The first step is for NI to demonstrate that they can use AI to clean up (or expand/shrink) diagrams meaningfully. "I believe the way they work is they are predicting each character on the fly" I hope not... at least if they are using the thing right.
  7. It's heavily scripted, but at face value, that looks promising.
  8. What's wrong with the Picture control BTW? As long as you are not looking for super-fast updates (although even that is not precluded), it is a very capable object...
  9. Actually, here is the relevant info: "Which VIs are Installed with NI-IMAQ and Vision Acquisition Software?" and this: "Is a License Needed to Work With NI Vision Acquisition Software?" So it looks like nothing additional is needed for the computer on which the IMAQ hardware is installed (or from which it is run), but officially, any other computer (including a development computer) would need a license. If your client is not using any IMAQ hardware, they are not supposed to use code (development or runtime) that uses these VIs. But again, I may have this all wrong, although that is what I remember from the time we were using an IMAQ frame grabber.
  10. The latest NI License Manager release page: https://www.ni.com/en-us/support/documentation/release-notes/product.license-manager.html will clarify all that.
  11. It doesn't say much about what the expected position reshuffling is going to be, but based on past history, it is unlikely that management and strategy will be left unchanged. I don't anticipate a reboot of LabVIEW, or a doubling down on investment and renewed hopes for the tool, though.
  12. Here is an example that shows how that works: https://forums.ni.com/t5/LabVIEW/Can-I-read-16-bit-PNG-files-with-LabVIEW/m-p/4296823#M1255119
  13. You would have to add a dummy graph of which you would only show the legend, and fill it with half of your plot names (e.g. adding one empty plot for each pair). And of course hide the legend of your original graph. And respond to actions on the dummy graph to update your original graph.
  14. There is NO public bug tracker for LabVIEW... In fact, I am not even sure there is a beta program anymore...
  15. A bit like Egyptian hieroglyphs.
  16. In NI's defense, this is probably only visible on 1080p displays. It speaks volumes about the state of LabVIEW when this becomes a new feature worth discussing on LAVA...
  17. OK, my bad. I think I know what you are referring to: I am not sure what this means though, as I see no effect of the default font style on terminal, bundle, etc. sizes, only on the labels (in LV 2021 SP1).
  18. Care to expand? Don't see anything in the changelog (sorry, there is no changelog).
  19. https://seekingalpha.com/news/3949487-national-instruments-sale-process-likely-to-be-completed-in-early-april-report
  20. I am looking forward to reading "Undo doesn't work anymore" in 2024Q3 and "colors are all gone" in 2027Q5... 🍿🍹
  21. X___

    U32 or SGL?

    Almost got me. I was certain it was blue.
  22. X___

    U32 or SGL?

    For info, this showed up in a VIM, in which I was playing back and forth between the representation of the input data to check different cases. I guess it throws off the IDE context help... and maybe more.
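Sketch for item 1 above — a rough textual analogy, not LabVIEW code: the same idea (force a "parallel" loop down to a single worker in one conspicuous, easily reverted place, rather than burying the change in a configuration dialog) written in Python with concurrent.futures. The flag and function names are purely illustrative.

    # Analogy only: a single, highly visible flag forces the parallel loop
    # to run with one worker while debugging, and is trivial to revert.
    from concurrent.futures import ThreadPoolExecutor
    import os

    DEBUG_SERIAL = True  # flip back to False to restore full parallelism

    def process(item):
        # placeholder for the loop body being debugged
        return item * item

    def run(items):
        # the conspicuous part: parallelism is reduced in one obvious place
        workers = 1 if DEBUG_SERIAL else (os.cpu_count() or 1)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(process, items))

    if __name__ == "__main__":
        print(run(range(8)))

Like the disable-structure trick, the point is that the "serialized" state is impossible to miss when reading the code, so it is hard to ship it by accident.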
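Sketch for item 3 above — a minimal Python/SciPy illustration of the failure mode described there, not the NI VI itself: inverse-transform sampling solves CDF(x) = u by root finding, and if the search bracket does not straddle the root the solver fails; silently mapping that failure to NaN with an error code of 0 is what forces callers to test the output for NaN themselves. The bracket values and function names below are assumptions made for illustration.

    import math
    from scipy.optimize import brentq
    from scipy.stats import gamma

    def inverse_cdf_sample(u, shape, scale, bracket=(1e-9, 50.0)):
        """Solve gamma.cdf(x) = u inside `bracket`; return NaN on failure,
        mimicking the legacy behaviour instead of raising an error."""
        f = lambda x: gamma.cdf(x, a=shape, scale=scale) - u
        try:
            return brentq(f, *bracket)      # needs f(a) and f(b) of opposite sign
        except ValueError:                  # bracket does not contain the root
            return float("nan")

    if __name__ == "__main__":
        ok = inverse_cdf_sample(0.5, shape=2.0, scale=1.0)
        bad = inverse_cdf_sample(0.999999, shape=2.0, scale=1000.0)  # root far outside bracket
        print(ok, bad, math.isnan(bad))     # caller must check for NaN explicitly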
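Sketch for item 4 above — a quick cross-check in Python/NumPy/SciPy standing in for the LabVIEW test described in that post: generate many Gamma variates and compare their normalized histogram against the density f(x; k, θ) = x^(k-1) exp(-x/θ) / (Γ(k) θ^k), with shape k and scale θ (the Wikipedia parameterization). Swapping the two parameters makes the mismatch obvious. The parameter values are arbitrary examples.

    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(0)
    k, theta = 2.0, 3.0                    # intended shape and scale
    samples = rng.gamma(shape=k, scale=theta, size=200_000)

    # normalized histogram of the generated variates
    edges = np.linspace(0, samples.max(), 101)
    hist, _ = np.histogram(samples, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # compare against the density with correct vs. swapped parameters
    err_correct = np.max(np.abs(hist - gamma.pdf(centers, a=k, scale=theta)))
    err_swapped = np.max(np.abs(hist - gamma.pdf(centers, a=theta, scale=k)))

    print(f"max |hist - pdf|, correct (shape={k}, scale={theta}): {err_correct:.4f}")
    print(f"max |hist - pdf|, swapped (shape={theta}, scale={k}): {err_swapped:.4f}")
    # the "swapped" error is much larger, revealing which input is really which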