
Neil Pate


Posts posted by Neil Pate

  1. Well, given that this is some kind of (bad) timing logic, and 59 is suspiciously close to the number of seconds in a minute, I suspect this is some kind of check to see whether the elapsed time is > 1 minute.

    I strongly suspect this code is overcomplicated and there are significantly simpler ways of doing things.

    These four sections of parallel code below look like they constantly evaluate whether the elapsed time is greater than 45, 30, 15 and 59 seconds. The shift register on the boolean essentially latches (i.e. remembers) the previous value, and I think the XOR is used to reset the latch. I am not sure what is in the other cases of the Case Structures. Strangely, the top one is active on False, whereas the others are active on True.

    [attached screenshot of the four parallel code sections]
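
    For what it's worth, here is a minimal C-style sketch of what I suspect the four parallel sections reduce to. The structure, the per-threshold actions and the names are my assumptions based on the screenshot, not the actual code:

    ```c
    #include <stdbool.h>
    #include <time.h>

    /* Assumed structure: one start time plus a "fired" flag per threshold,
       playing the role of the latched shift-register booleans. */
    typedef struct {
        time_t start;
        bool fired_15, fired_30, fired_45, fired_59;
    } ElapsedCheck;

    /* Call periodically; each branch fires exactly once when its threshold
       is crossed, with no XOR reset gymnastics needed. */
    void check_elapsed(ElapsedCheck *c)
    {
        double elapsed = difftime(time(NULL), c->start);

        if (elapsed > 15.0 && !c->fired_15) { c->fired_15 = true; /* 15 s action */ }
        if (elapsed > 30.0 && !c->fired_30) { c->fired_30 = true; /* 30 s action */ }
        if (elapsed > 45.0 && !c->fired_45) { c->fired_45 = true; /* 45 s action */ }
        if (elapsed > 59.0 && !c->fired_59) { c->fired_59 = true; /* ~1 min action */ }
    }
    ```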

     

  2. I think the answer is no, but I will ask anyway. Does anyone know if it is possible to access an event (or some other mechanism) that returns the colour currently being "selected" by the mouse cursor while this colour picker widget is open? I mean getting a new value as the user moves the mouse around while the widget is open and has focus? I suspect this is all deep in C++ land, but I would love to be wrong here.

    [attached screenshot of the colour picker widget]
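
    One possible workaround, purely as a sketch, would be polling the screen pixel under the cursor from the OS side rather than waiting for anything from the widget itself. This is a hedged Win32 illustration (GetCursorPos / GetDC / GetPixel), not something the colour picker exposes:

    ```c
    #include <windows.h>
    #include <stdio.h>

    /* Poll the colour of the screen pixel currently under the mouse cursor.
       This reads the composited screen, so it reports whatever the picker is
       drawing under the cursor, but it is not an event from the widget. */
    int main(void)
    {
        for (;;) {
            POINT pt;
            if (GetCursorPos(&pt)) {
                HDC screen = GetDC(NULL);                 /* DC for the whole screen */
                COLORREF c = GetPixel(screen, pt.x, pt.y);
                ReleaseDC(NULL, screen);
                printf("R=%d G=%d B=%d\n",
                       GetRValue(c), GetGValue(c), GetBValue(c));
            }
            Sleep(50);  /* crude polling interval */
        }
        return 0;
    }
    ```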

  3. Plot twist... it turns out that the MoveBlock technique is quicker when replacing long-ish rows, but for smaller chunks of pixels the naive replace-one-element-at-a-time approach is actually faster.

    I just used the worst-case scenario in the benchmark, but I have realised this is not actually sensible when rendering things made of smaller triangles (and hence smaller line segments). This is what I am rendering; it has close to 20k vertices.

    [attached screenshot of the rendered model]
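
    For anyone curious what the two approaches look like in text form, here is a rough C analogue, with memcpy standing in for MoveBlock; the function names and the 32-bit pixel type are my assumptions, not the benchmark VI:

    ```c
    #include <stdint.h>
    #include <string.h>

    /* Fill pixels [x0, x0+len) of one row with a single colour.
       The 2D pixel array is assumed to be one contiguous, row-major buffer,
       as a LabVIEW 2D array is. */

    /* "Naive" approach: replace one element at a time. In LabVIEW each
       iteration goes through Replace Array Subset, so there is per-element
       overhead, but for short runs that overhead is small in absolute terms. */
    void fill_run_naive(uint32_t *row, int x0, int len, uint32_t colour)
    {
        for (int i = 0; i < len; i++)
            row[x0 + i] = colour;
    }

    /* MoveBlock-style approach: given a pre-built run of the colour (which in
       practice can be reused while the colour stays the same), copy the whole
       span in one call. The fixed setup/call cost is amortised over long runs,
       which is why it wins for long-ish rows but loses for small chunks. */
    void fill_run_block(uint32_t *row, int x0, int len, const uint32_t *colour_run)
    {
        memcpy(&row[x0], colour_run, (size_t)len * sizeof(uint32_t));
    }
    ```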

  4. 22 hours ago, ShaunR said:

    You can obviate the array allocation for this particular example by auto-initialising it on the first run (or when the length changes). You can remove the calculation of the length too, since that is returned as one of the size parms. You can calculate the length as in the other examples, but not calculating it improves jitter immensely. I also did your little trick with the wrapper, which makes a nice difference here too. The following was on LV 2021 x32.

    Unfortunately I cannot actually do this optimisation, as normally the length of the line and the pixel colour will differ every single call. Well, actually the colour will stay the same for some number of calls, but the line length will normally change.

    In my benchmark VI I just used a worst case scenario of a line filling the whole row.

    Your MoveBlock2 method still seems faster, thanks! I will see how it affects the performance of my actual application.
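
    As I understand it, the suggestion amounts to something like the following caching sketch (assumed names, not the actual VI): only rebuild the buffer when the length or colour changes, which is exactly what per-call changes in line length defeat.

    ```c
    #include <stdint.h>
    #include <stdlib.h>

    /* Return a colour run of at least 'len' pixels, rebuilding it only when the
       requested length grows or the colour changes. If the length and colour
       change on every call (as in my case), the cache never hits and this
       optimisation buys nothing. */
    static uint32_t *get_colour_run(int len, uint32_t colour)
    {
        static uint32_t *run = NULL;
        static int cached_len = 0;
        static uint32_t cached_colour = 0;

        if (run == NULL || len > cached_len || colour != cached_colour) {
            free(run);
            run = malloc((size_t)len * sizeof(uint32_t));
            if (run == NULL) { cached_len = 0; return NULL; }
            for (int i = 0; i < len; i++)
                run[i] = colour;
            cached_len = len;
            cached_colour = colour;
        }
        return run;
    }
    ```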

  5. Hi everyone,

    I am trying to wring as much performance as possible out of a single VI I have created. It is part of a rendering application and will be called many, many times per second. In essence it just draws a horizontal line in a 2D array of pixels.

    I have four techniques implemented (3 work, 1 crashes LabVIEW).

    Is anyone up for the challenge of trying to get it any faster? I have attached versions saved for 2015 and 2022. I developed in 2022 and back-saved, but for some reason just opening the code in my 2015 LV instance crashes (it opens fine in 2021).

    [attached screenshot]

     

    [attached screenshot]

    Attachments: Raster 2022Q3.zip, Raster 2015.zip
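
    For anyone who does not want to open the VIs, the operation being benchmarked boils down to the sketch below (a C illustration with assumed names; the actual implementations are in the attached zips). The reason a MoveBlock-style copy can work at all is that a LabVIEW 2D array is contiguous and row-major, so the destination of a whole horizontal line is a single flat offset:

    ```c
    #include <stdint.h>

    /* Draw a horizontal line from (x0, y) to (x1, y) in a width*height image
       stored as one contiguous, row-major block of 32-bit pixels. */
    void draw_hline(uint32_t *image, int width, int y,
                    int x0, int x1, uint32_t colour)
    {
        uint32_t *dst = image + (size_t)y * width + x0;  /* flat offset of the span */
        for (int x = x0; x <= x1; x++)
            *dst++ = colour;
        /* The MoveBlock variants replace this loop with a single block copy of a
           pre-built run of 'colour' into the same flat offset. */
    }
    ```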

  6. Sorry to hear about your troubles; this does happen.

    One thing worth checking (as this has bitten me many times) is dead/broken/old code in Diagram Disable Structures. Symptoms are as you noticed: running in the IDE is fine, but often the builder just will not cope. Often the dead code in a DDS is old and contains invalid enums or old versions of classes, etc. Really funky things seem to happen.

  7. 36 minutes ago, Petr Mazůrek said:

    The DataStore is shared across the machine, which runs multiple IOCs, so the queue can be accessed by the name _DATASTORE from other pieces of code.

    Sure, but I still don't see the point, as you could just keep the variant in the feedback node (and this VI is accessible by all processes). What does adding it to a single element queue do?
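
    To make my question concrete, here is a rough C analogy of the two patterns; the names are mine, not from the original code. The variant held in the feedback node behaves like a static inside one shared accessor, while the named single-element queue adds a lookup-by-name layer on top of it:

    ```c
    #include <stddef.h>

    /* Pattern A - variant in a feedback node: the value lives as a static inside
       one accessor that every process already has access to. Passing NULL reads,
       passing a pointer writes. */
    static const void *stored_value = NULL;

    const void *datastore_access(const void *new_value)
    {
        if (new_value != NULL)
            stored_value = new_value;
        return stored_value;
    }

    /* Pattern B - variant inside a named single-element queue: the same value is
       additionally registered under a well-known name (e.g. "_DATASTORE") so code
       that never calls datastore_access() can still obtain it by name. My question
       is what that extra indirection buys when the accessor is already shared. */
    ```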

  8. I guess there is much more code and it makes sense when looking at the bigger picture, but from this snippet alone I don't really get it. Why bother making a single element queue whose reference you store in the feedback node? Why not just store the variant itself (using the feedback node)?

    [attached screenshot of the code snippet]

  9. 8 hours ago, Youssef Menjour said:

    The idea is to propose not only a simple library for LabVIEW but also an alternative to Keras and PyTorch.

    HAIBAL is a powerful, modular library and more practical than Python. Being able to run any deep learning graph on CUDA or a Xilinx FPGA platform with no particular syntax is really good for AI users.

    We are convinced that doing artificial intelligence with LabVIEW is the way of the future. The graphical language is perfect for this.

    The promotional video is not aimed at LabVIEW users. It is for those who want to do deep learning in a graphical language. We are addressing engineering students, professors, researchers and the R&D departments of companies.

    LabVIEW users already know what drag and drop and data flow are; they do not need this kind of video.

     

     

    What you have made looks great, and I absolutely commend your effort.

    But I think the area of the shaded part of the Venn diagram consisting of people who are willing to pay for LabVIEW and who want to get into deep learning using a new, paid-for toolkit is a number approaching machine epsilon. NI has abandoned academic institutions and is introducing a new licensing model that is predatory and harmful to the long-term viability of LabVIEW. It breaks my heart...
