Posts posted by Neil Pate
-
Ah right. Thanks for the clarification.
-
I don't think this is LabVIEW.
We wish LabVIEW allowed you to put things like checkboxes and dropdowns in a table.
-
Just what I wanted, thanks for the inspiration.
2022-10-13 20-30-18.mkv
-
Super surprised at this result.
Run the VI and change the colour using the colour picker widget. While the widget is open, the top loop, which uses a property node, works fine (Color Box 2 updates in real time)!
The bottom loop does not show the same behaviour; the value is only written to Color Box 4 when you exit the widget.
-
-
1 minute ago, Neil Pate said:
Yeah but I want to grab the colour while the widget is open.
Wait, does the underlying colour box update while the widget is open?
-
Yeah but I want to grab the colour while the widget is open.
-
Thanks. It was just a thought.
How would I use Refnum to Pointer on the colour picker widget though? Is it even possible to get a reference to that little popup window?
-
@dadreamer this sounds like the perfect use case for some low level OS API trickery, like reading the value directly from memory. If only there was someone who was good at stuff like this 😉
-
I think the answer is no, but I will ask anyway. Does anyone know if it is possible to access an event or some other mechanism that returns the colour currently being "selected" by the mouse cursor while this colour picker widget is open? I mean getting a new value as the user moves their mouse around while the widget is open and has focus. I suspect this is all deep in C++ land, but I would love to be wrong here.
-
-
Plot twist... it turns out that the MoveBlock technique is quicker when replacing longish rows, but for smaller chunks of pixels the naive element-at-a-time replace is actually faster.
I just used the worst-case scenario in the benchmark, but I have realised this is not actually sensible when rendering things made of small triangles (and hence short line segments). This is what I am rendering; it has close to 20k vertices.
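The crossover described above can be sketched in C (LabVIEW's MoveBlock is essentially a memcpy). This is a minimal illustration, not the actual benchmark VI: the per-element loop avoids any call/setup overhead and wins for short runs, while the block-copy version doubles the copied span each pass and wins for long rows. `THRESHOLD` is a made-up crossover point; the real value would have to be measured on the target machine.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical crossover length; measure on the target platform. */
#define THRESHOLD 32

/* Naive strategy: write one pixel at a time. Cheap for short runs. */
static void fill_naive(uint32_t *row, int start, int len, uint32_t colour)
{
    for (int i = 0; i < len; i++)
        row[start + i] = colour;
}

/* Bulk strategy: seed one pixel, then block-copy the already-filled
   span onto the unfilled remainder, doubling each pass (the memcpy
   analogue of the MoveBlock trick). */
static void fill_block(uint32_t *row, int start, int len, uint32_t colour)
{
    if (len <= 0)
        return;
    row[start] = colour;
    int filled = 1;
    while (filled < len) {
        int chunk = (filled <= len - filled) ? filled : len - filled;
        memcpy(row + start + filled, row + start,
               (size_t)chunk * sizeof(uint32_t));
        filled += chunk;
    }
}

/* Pick a strategy based on run length, per the benchmark observation. */
static void fill_line(uint32_t *row, int start, int len, uint32_t colour)
{
    if (len < THRESHOLD)
        fill_naive(row, start, len, colour);
    else
        fill_block(row, start, len, colour);
}
```

Both fills produce identical rows; only the speed differs, which is why a length-based dispatch like `fill_line` is one plausible way to get the best of both.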
-
1 hour ago, Gribo said:
Hardware acceleration was invented exactly for this use case.
Nope. The use case in this situation is just to learn 🙂
-
22 hours ago, ShaunR said:
You can obviate the array allocation for this particular example by auto-initialising it on the first run (or when the length changes). You can remove the calculation of the length too, since that is returned as one of the size parms. You can calculate the length as in the other examples, but not calculating it improves jitter immensely. I also did your little trick with the wrapper, which makes a nice difference here too. The following was on LV 2021 x32.
Unfortunately I cannot actually do this optimisation, as normally the length of the line and the pixel colour will differ on every single call. Well, actually the colour will stay the same for some number of calls, but the line length will normally change.
In my benchmark VI I just used a worst-case scenario of a line filling the whole row.
Your MoveBlock2 method still seems faster, thanks! I will see how it affects the performance of my actual application.
-
Thanks everyone, I really appreciate the help. I will take a look over the weekend at the suggestions.
-
2 hours ago, Gribo said:
Are you on Windows? If so, the .NET picturebox control is faster than the native LV control for such operations. It also has other nice features, such as double buffering.
Windows for now, but the point of the exercise is to implement as much as possible from source.
-
9 minutes ago, dadreamer said:
As I wrote here, the ArrayMemInfo node was only introduced in the 2017 version. It just didn't exist in 2015. That's why it crashes.
After a quick test in 2022 Q3, MoveBlock didn't crash my LV. Gonna take a closer look at the code tomorrow.
Ah, that explains the crash when opening in 2015.
Thanks!
-
Hi everyone,
I am trying to wring as much performance as possible out of a single VI I have created. It is part of a rendering application and will be called many, many times per second. In essence it just draws a horizontal line in a 2D array of pixels.
I have four techniques implemented (3 work, 1 crashes LabVIEW).
Is anyone up for the challenge of trying to get it any faster? I have attached versions in 2015 and 2022. I developed in 2022 and back-saved, but for some reason just opening the code in my 2015 LV instance crashes it (though it opens fine in 2021).
-
21 hours ago, X___ said:
It's not a general purpose programming language.
Not quite sure what you mean by this.
-
@ShaunR you are missing a bit
-
Sorry to hear about your troubles; this does happen.
One thing worth checking (as this has bitten me many times) is dead/broken/old code in Diagram Disable Structures. The symptoms are like you noticed: running in the IDE is fine, but often the builder just will not cope. Often the dead code in a DDS is old and contains invalid enums or old versions of classes, etc. Really funky things seem to happen.
-
It seems there was no Discord server for LabVIEW. Well, that has all changed. DM me if you are interested in joining. It is open to anyone interested, but I don't want to post the link publicly as I have no idea how to deal with spam/bots!
-
36 minutes ago, Petr Mazůrek said:
The DataStore is shared for the machine, which runs multiple IOCs. So the Queue can be accessed by the name _DATASTORE from other pieces of code.
Sure, but I still don't see the point, as you could just keep the variant in the feedback node (and this VI is accessible by all processes). What does adding it to a single-element queue do?
-
I guess there is much more code and it makes sense in the bigger picture, but from this snippet alone I don't really get it. Why bother making a single-element queue whose reference you store in the feedback node? Why not just store the variant itself in the feedback node?
-
8 hours ago, Youssef Menjour said:
The idea is to propose not only a simple library on LabVIEW but also to propose an alternative to Keras and Pytorch.
HAIBAL is a powerful modular library and more practical than Python. Being able to run any deep learning graph on a CUDA or Xilinx FPGA platform with no particular syntax is really good for AI users.
We are convinced that doing artificial intelligence with LabVIEW is the way of the future. The graphics language is perfect for this.
The promotional video is not aimed at LabVIEW users. It is for those who want to do deep learning in graphical language. We are addressing engineering students, professors, researchers and R&D departments of companies.
LabVIEW users already know what drag and drop and data flow are. LabVIEW users do not need this kind of video.
What you have made looks great, and I absolutely commend your effort.
But I think the shaded part of the Venn diagram, the people who are both willing to pay for LabVIEW and want to get into deep learning using a new, paid-for toolkit, is a number approaching machine epsilon. NI has abandoned academic institutions and is introducing a new licensing model that is predatory and harmful to the long-term viability of LabVIEW. It breaks my heart...
Who knows what this is for, help?
in LabVIEW General
Posted · Edited by Neil Pate
Well, given that this is some kind of (bad) timing logic, and 59 is suspiciously close to the number of seconds in a minute, I suspect this is some kind of check to see whether the elapsed time is > 1 minute.
I strongly suspect this code is overcomplicated and there are significantly simpler ways of doing things.
The four sections of parallel code below look like they continuously evaluate whether the elapsed time is greater than 45, 30, 15 and 59 seconds. The shift register on the boolean essentially latches (i.e. remembers) the previous value, and I think the XOR is used to reset the latch. I am not sure what is in the other cases of the Case Structures. Strangely, the top one is active on False, whereas the others are on True.
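As a rough guess at what one of those parallel sections computes, here is a C sketch of a latched elapsed-time comparison with an XOR-style reset. This is purely a hypothetical reading of the diagram described above (the `struct` field stands in for the boolean shift register; the behaviour of the hidden Case Structure frames is unknown), not a transcription of the actual VI.

```c
#include <stdbool.h>

/* The bool field models the boolean shift register: it remembers
   whether the threshold has fired on any earlier iteration. */
typedef struct {
    bool latch;
} TimerBranch;

/* One loop iteration of a guessed branch: latch once elapsed time
   exceeds the threshold; XOR with the reset input clears the latch. */
static bool evaluate(TimerBranch *b, double elapsed_s,
                     double threshold_s, bool reset)
{
    bool fired = elapsed_s > threshold_s;
    b->latch = (b->latch || fired) ^ reset;  /* latch, then XOR-reset */
    return b->latch;
}
```

With four such branches at thresholds of 45, 30, 15 and 59 seconds, the code above would stay True after its threshold passes until the reset input pulses, which matches the latching behaviour described, though the real diagram may well differ.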