Wouter


Posts posted by Wouter

  1. 23 hours ago, LogMAN said:

    In the article it says: "The ability to create modes and overloads isn’t publicly available in the newest version of LabVIEW NXG; however, we’re looking to extend these features to our user base soon."

    Thanks! Totally missed that.

     

    19 hours ago, hooovahh said:

    Currently NI's package manager is less suited to distributing LabVIEW reuse in NXG, when compared to VIPM in LabVIEW 20xx.  This is partially because in NIPM the package must be built for a specific version of NXG.  So a package made for version 3.0 can't be installed in version 5.0, until the creator of the package updates it and rebuilds it for version 5.0.  For reuse to be more adopted in NXG, this limitation needs to be removed.

    So who is currently the creator of OpenG? And is this maybe something I can do myself?

  2. First question:

    Is OpenG not available for LabVIEW NXG 5.0? When I try to install it via NI Package Manager I only see version 3, and when I want to install that, I see that NI Package Manager wants to install LabVIEW NXG 3.0... so am I to understand that OpenG is only available for LabVIEW NXG 3.0?

     

    Second question (cross post from https://forums.ni.com/t5/LabVIEW/LabVIEW-NXG-Polymorphic-amp-Malleable-VI-s/m-p/4047507#M1160978):

    How can I create a polymorphic or malleable .gvi in LabVIEW NXG? In NXG they are called "overloads".

    The only information I can find is: https://forums.ni.com/t5/NI-Blog/Designing-LabVIEW-NXG-Configurable-Functions/ba-p/3855200?profile.l...

    But it does not explain how to actually do this in LabVIEW NXG. Where can I find this information? Or am I just missing something?

     

    Images (w.r.t. first question): [two screenshots attached]

  3. Off-topic:

    You should use randomized data for a fair representation. Maybe the algorithm performs a lot better or a lot worse for certain values, and maybe the functionality posted in the OP performs a lot better for very large values.

     

    I would benchmark with random data that represents the full input range that could be expected. Furthermore, I would put the for-loop around a single instance of the function, store the timings in an array, and compute the mean, median, variance and standard deviation; maybe also show a nice histogram :-)

     

    What is also nice to do is to change the input size each iteration (2^0, 2^1, 2^2, 2^3, ... elements) and plot the timings to determine how the execution time scales. A rough sketch of the whole approach is below.
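
    Since G is graphical, here is that benchmarking approach sketched in Python; algorithm_under_test is just a hypothetical placeholder for the function being measured:

    import random
    import statistics
    import time

    def algorithm_under_test(data):
        # Hypothetical placeholder for the function being benchmarked.
        return sorted(data)

    # Put the loop around a single call and store the individual timings.
    timings = []
    for _ in range(100):
        # Random data covering the full input range that could be expected.
        data = [random.uniform(-1e9, 1e9) for _ in range(10000)]
        start = time.perf_counter()
        algorithm_under_test(data)
        timings.append(time.perf_counter() - start)

    print("mean:    ", statistics.mean(timings))
    print("median:  ", statistics.median(timings))
    print("variance:", statistics.variance(timings))
    print("stdev:   ", statistics.stdev(timings))

    # Scaling: grow the input size as 2^0, 2^1, 2^2, ... and record one
    # timing per size, to plot how the execution time scales.
    for k in range(16):
        data = [random.uniform(-1e9, 1e9) for _ in range(2 ** k)]
        start = time.perf_counter()
        algorithm_under_test(data)
        print(2 ** k, time.perf_counter() - start)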

  4. For computer science I would like to recommend the following book:

    T.H. Cormen, C.E. Leiserson, R.L. Rivest and C. Stein.

    Introduction to Algorithms (3rd edition)

    MIT Press, 2009.

    ISBN 0-262-53305-7 (paperback)

    It is written by a few professors at MIT; you can also watch the lectures online:

    ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/

    ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-spring-2008/

    You can also download the video lectures as a torrent: torrentz.eu/search?f=introduction+to+algorithms

    Furthermore, the following lecture notes may help you get familiar with the material.

    The first set of notes is about automata theory; it contains a few errors because it is not the most recent version, as the author has decided to turn it into a book: http://www.win.tue.n.../apbook2010.pdf. The second set of notes is about system validation and is a follow-up to automata theory: http://www.win.tue.n...ted-version.pdf.

  5. Any type of solution would be good. The "Polymorphism Wizard"-idea is possible to implement without any changes to the rest of LabVIEW / G though, so there is less reason for it not to happen, perhaps.

    No, I disagree. First, your code size will increase, which is totally unnecessary. Second, to make the functions work for more complex datatypes you would also need to create a VI for each of those. When you then want to change something in your algorithm you have to update all the VIs; this could indeed be done with a polymorphism wizard of some kind, but again, what about the more complex datatypes? I think it would make your code base unmanageable.

    Furthermore, I want a solution to the problem, not a workaround that will for sure cost more time.
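
    To illustrate in text form what I mean by a solution rather than a workaround: in a language with generics, one definition covers every datatype, whereas a wizard generates one near-identical copy per datatype that all have to be kept in sync. A minimal Python sketch (first_duplicate is a hypothetical example function):

    from typing import Optional, Sequence, TypeVar

    T = TypeVar("T")

    # One generic definition that works for integers, floats, strings,
    # clusters/records, ...: change the algorithm once, in one place.
    def first_duplicate(items: Sequence[T]) -> Optional[T]:
        seen = set()
        for item in items:
            if item in seen:
                return item
            seen.add(item)
        return None

    # A polymorphism wizard would instead generate per-type copies:
    # first_duplicate_i32, first_duplicate_dbl, first_duplicate_str, ...
    # and every change to the algorithm must be repeated in every copy.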

  6. I did some testing... everyone was using Match Pattern... I found a code snippet that was faster than that in every scenario I tested:

    [code snippet image attached]

    I wasn't sure if it was only prefixes that were being sought ... all the test harnesses only created prefix test cases. If you want "anywhere in the string", then this doesn't help, obviously.

    I'm not surprised by it. I actually also built a version which used String Subset, but then I thought of the case where the string "test" may not be at the front, so I chose to create a more general solution.
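
    A rough text equivalent of the two approaches in Python (the String Subset trick only works when the pattern must be at the front; the general search finds it anywhere):

    # Prefix-only check: compare a fixed-length subset of the string
    # (the String Subset approach; cheap, but only matches at the front).
    def has_prefix(s: str, pattern: str) -> bool:
        return s[:len(pattern)] == pattern

    # More general solution: search anywhere in the string.
    def contains(s: str, pattern: str) -> bool:
        return pattern in s

    print(has_prefix("test123", "test"))  # True
    print(has_prefix("123test", "test"))  # False
    print(contains("123test", "test"))    # True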

  7. What's your point? It still takes time to execute a memory allocation, regardless of how much you have. It is very normal for a memory-intensive VI to execute magnitudes faster on a second run.

    Yes, you are totally correct. In the second run the memory is already allocated, and hence the allocation does not need to happen again.
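
    The effect is easy to reproduce outside LabVIEW as well; a small Python sketch contrasting a first run that pays for the allocation with a second run that reuses the already-allocated buffer:

    import time

    def run(buffer=None):
        # First run: the buffer must still be allocated; later runs reuse it.
        if buffer is None:
            buffer = [0] * 10_000_000  # the large allocation dominates this call
        for i in range(0, len(buffer), 1000):
            buffer[i] = i
        return buffer

    start = time.perf_counter()
    buf = run()                    # first run: includes the allocation
    print("first run: ", time.perf_counter() - start)

    start = time.perf_counter()
    run(buf)                       # second run: memory already allocated
    print("second run:", time.perf_counter() - start)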

  8. I've realized there are a few caveats to keep in mind when looking at optimizing array manipulation functions.

    1. Optimization of code for one data type does not necessarily affect other data types the same way.

    2. The type of computer/processor can have a HUGE effect on how well your optimized code performs.

    3. There are likely significant differences in the code performance between different versions of LabVIEW.

    Do some benchmark tests with different data types, different methods of processing your arrays, on different types of processors. I was astounded at the differences and realized there's a lot of low level processing going on behind the scenes that I would need to understand to know why performance differences existed between the different scenarios.

    1. This is true, but I think the differences may be very small. We can design two different algorithms: one that swaps elements in an array in place, and one that copies the contents to another array. Case 1: when we swap elements in an array it's just changing pointers, so constant time per swap. Case 2: when we retrieve an element of a certain datatype from array A and copy it to array B, an implementation like case 1 may indeed be faster, because copying and moving that element in memory costs more than constant time; the cost depends on the amount of memory the datatype consumes (see the sketch below this reply). I'm not entirely sure case 2 works internally the way I described, but I think it does?

    2. This will always be the case.

    3. True, but that is not our fault. I think it's fair to assume that the algorithm runs fastest on the newest version of LabVIEW and slowest on the first version of LabVIEW. The goal, imo, is to design an algorithm that minimizes the number of operations and the amount of memory used.
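
    A minimal Python sketch of the two cases from point 1, using array reversal as the example (this illustrates the idea only, not LabVIEW's actual memory internals):

    def reverse_in_place(arr):
        # Case 1: swap elements within the same array; no second buffer,
        # each swap is a constant-time exchange.
        i, j = 0, len(arr) - 1
        while i < j:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
            j -= 1
        return arr

    def reverse_by_copy(arr):
        # Case 2: allocate a second array and copy every element over;
        # the per-element cost grows with the size of the datatype.
        out = [None] * len(arr)
        for k, value in enumerate(arr):
            out[len(arr) - 1 - k] = value
        return out

    print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]
    print(reverse_by_copy([1, 2, 3, 4]))   # [4, 3, 2, 1]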

  9. I've kept the original's inputs and outputs, but the performance would have been better if the indexes output was dropped. For optimal performance it would be better to sort the input array and do a binary search for the duplicates. That would add another (optional?) step to recreate the original order of the input array, but still, on large arrays it would have a huge impact on the performance.

    I would rather first sort the array and then iterate over it; I think it's faster because of how LabVIEW works. It still has the same theoretical complexity: yours would be O(n*log(n) + n*log(n)) = O(2*n*log(n)) = O(n*log(n)), because you do a binary search, log(n), inside a loop, so it is multiplied by n and becomes n*log(n). Mine is O(n*log(n) + n) = O(n*log(n)). But sure, either would be faster than the current implementation, which is O(n^2).

    -edit-

    Funny, a simple implementation of the sort-and-then-linear-scan approach is still slower than yours. [benchmark snippet image attached]
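
    For reference, a Python sketch of the sort-then-linear-scan idea: O(n*log(n)) for the sort plus O(n) for one pass over adjacent elements (it returns only the duplicate values, not the indexes from the original VI):

    def duplicates_sorted_scan(arr):
        # Sort once (O(n*log(n))); equal values become neighbours,
        # so one linear pass (O(n)) finds every duplicate.
        s = sorted(arr)
        dups = []
        for prev, cur in zip(s, s[1:]):
            if prev == cur and (not dups or dups[-1] != cur):
                dups.append(cur)
        return dups

    print(duplicates_sorted_scan([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]))  # [1, 3, 5]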
