Posts posted by Adam Kemp

  1. Good point. Just to make sure Chad understands, though, you DO need the padding part in the cluster that you pass to the Call Library Node if you ever plan on passing an array of these things to the DLL. What you'll do is have two cluster types, one with the padding and the cluster of 32 U8s, and then another one without the padding and a String control instead of the U8 cluster. You'll just convert from one cluster type to the other one and return the one that works well with LabVIEW. That way the users of your API never see the ugly stuff that you needed just to call the C code.

    If you only ever pass one then the padding at the end is not important. Don't you just love low-level programming? Aren't you glad LabVIEW hides this stuff from you? :)
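
    To make the padding point concrete, here's a minimal C++ sketch; the struct and field names are invented for illustration, not taken from the actual header:

    #include <cstdio>

    // Hypothetical struct: on a typical compiler the int member forces
    // 4-byte alignment, so the compiler adds tail padding.
    struct BoardInfo {
        int  id;        // 4 bytes
        char name[32];  // the 32-byte name field
        char flag;      // 1 byte; 3 bytes of trailing padding follow
    };                  // sizeof(BoardInfo) == 40, not 37

    int main() {
        // Array elements are spaced sizeof(BoardInfo) apart, so a cluster
        // that omits the trailing padding mis-reads every element after
        // the first when an array of these is passed to the DLL.
        std::printf("element stride = %zu bytes\n", sizeof(BoardInfo));
        return 0;
    }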

  2. I'm assuming that the "name" field in the cluster is a sub-cluster containing 32 U8s. In that case you can use the Cluster to Array primitive to convert to a U8 array, and then the Byte Array To String primitive to convert it into a string. You would still need to have one internal cluster definition for talking to the DLL and then another cluster with a real string and no padding elements to expose to your VI's callers. Still, it's doable.
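
    For reference, here's the same conversion expressed in C++ terms (the helper name and sample data are made up; in LabVIEW the real work is done on the block diagram with the primitives above):

    #include <algorithm>
    #include <cstdio>
    #include <string>

    // Analogue of Cluster To Array followed by Byte Array To String:
    // turn a fixed 32-byte name field, which may not be NUL-terminated,
    // into a proper string.
    std::string nameToString(const char (&name)[32]) {
        // Stop at the first NUL, or take all 32 bytes if there is none.
        const char* end = std::find(name, name + 32, '\0');
        return std::string(name, end);
    }

    int main() {
        char raw[32] = "some board";  // sample data only
        std::printf("%s\n", nameToString(raw).c_str());
        return 0;
    }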

  3. Yes, by default all VIs will go into the .exe with the same relative paths to each other that they had outside the .exe (you can, of course, tell the build spec to put specific VIs in specific locations using custom destinations). LabVIEW finds the most common root folder of all the VIs and uses that as the top-level directory inside the .exe. If the VIs span multiple drives on Windows then the top-level directories will be the drive letters.

    The one exception is vi.lib, which gets its own top-level directory. This prevents common cases from having really deep hierarchies just for vi.lib VIs. It also means that if you call a vi.lib VI dynamically you should write a wrapper VI for it in your .exe, which you can then call using a relative path. The idea is not to have any code that checks whether you're in an .exe in order to build different paths. The paths should always be the same.

  4. This is indeed fixed in 2009. VIs in a built executable or DLL are now stored with the same directory structure they have in the development environment. This also removes the need for code that handles path differences for call by reference in built apps, and there's a new VI (Application Path) that returns the directory containing the executable so that you can find files outside of it. Overall I think handling of paths in executables is much easier in LabVIEW now.

  5. Rolf, good point. I didn't notice that it was already padded correctly.

    Chad, if you're already running into complications with byte swapping then I strongly suggest trying my approach from the second post (ignoring the pragma stuff). It will save a lot of time because it lets you wire the cluster directly. What you might still run into is structs that contain pointers to other structs, and that's not as easy to solve. Eventually it just becomes easier to create a wrapper DLL that works better with LabVIEW; see the sketch at the end of this post.

    You can also try the Import Shared Library feature. It's hit or miss since there are a lot of things C APIs can do that don't translate well to LabVIEW, but for a lot of simple interfaces (including the one in the first post here) it can do exactly the right thing automatically.
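
    To illustrate the wrapper-DLL idea, here's a rough C++ sketch; every name in it (RawResult, get_result, and so on) is invented:

    // Imaginary original API: returns a struct containing a pointer,
    // which a LabVIEW cluster can't represent directly.
    typedef struct {
        int     count;
        double* samples;
    } RawResult;

    extern "C" int get_result(RawResult* out);

    // Wrapper export: copy the pointed-to data into a caller-supplied
    // buffer, which maps cleanly onto a CLN "Array Data Pointer" input.
    extern "C" int get_result_flat(double* samples, int max, int* count)
    {
        RawResult r;
        int err = get_result(&r);
        if (err != 0)
            return err;
        *count = (r.count < max) ? r.count : max;
        for (int i = 0; i < *count; ++i)
            samples[i] = r.samples[i];
        return 0;
    }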

  6. Actually, I think that you can simplify my examples quite a bit and use the original prototype you gave:

    extern "C" short CCONV acq_board_info(short cnt, MyStruct_info *info);

    Configure the CLN as follows:

    return type: Numeric, Signed 16-bit Integer

    param 1: Numeric, Signed 16-bit Integer

    param 2: Adapt to Type, Data Format "Array Data Pointer".

    When I tried this LabVIEW didn't properly generate the C file (it put "void arg2[]" instead of "MyStruct_info arg2[]"), but I think it would pass the right data. You just have to make sure the type of the struct matches the cluster and be sure to use the #pragma statements as above for alignment.
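
    For what it's worth, here's roughly what the DLL side would look like under that configuration. The pack value and the struct's fields are assumptions; match them to whatever the real header declares:

    #define CCONV /* your calling-convention macro, e.g. __stdcall */

    #pragma pack(push, 1)   // match the alignment the cluster assumes
    typedef struct {
        short id;
        char  name[32];
    } MyStruct_info;
    #pragma pack(pop)

    // With param 2 configured as an Array Data Pointer, LabVIEW passes
    // the array's data block directly, so the DLL can fill it like an
    // ordinary C array of cnt elements.
    extern "C" short CCONV acq_board_info(short cnt, MyStruct_info *info)
    {
        for (short i = 0; i < cnt; ++i)
            info[i].id = i;
        return 0;
    }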

  7. I'm not sure the extra error checking is going to help in this situation. It looks like your code really is trying to free some memory twice. Either your list itself is being deleted twice (look at frame #12) or your list implementation is trying to free something inside it that has already been freed. Maybe you copied a list and copied some pointers instead of doing a deep copy: you take list A, copy it into list B, delete either A or B, and then when you delete the other list you crash because its pointers were already freed. Make sure your list class has a valid copy constructor and copy assignment operator (see the sketch at the end of this post).

    There shouldn't be any problem with using libstdc++.so.6 alongside libstdc++.so.5 as long as you don't try to take an STL object created by one and pass it to the other. I don't think that would happen in this case. The only real downside to using them both is that they take up more memory.

    I will mention that we have found bugs in libstdc++.so.5 that show up as double frees (its std::string implements copy-on-write in a way that isn't thread-safe), and we had to work around that. I believe that's fixed in libstdc++.so.6, though, so I doubt that's what's going on.
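
    Here's a minimal sketch of the shallow-copy scenario, with an invented IntList class standing in for your list:

    #include <algorithm>
    #include <cstddef>

    class IntList {
        int*        data_;
        std::size_t size_;
    public:
        explicit IntList(std::size_t n) : data_(new int[n]()), size_(n) {}
        ~IntList() { delete[] data_; }

        // Without these two, the compiler-generated versions copy the
        // pointer itself; deleting both lists then frees data_ twice,
        // which is exactly the double free described above.
        IntList(const IntList& other)
            : data_(new int[other.size_]), size_(other.size_) {
            std::copy(other.data_, other.data_ + size_, data_);
        }
        IntList& operator=(const IntList& other) {
            if (this != &other) {
                int* copy = new int[other.size_];
                std::copy(other.data_, other.data_ + other.size_, copy);
                delete[] data_;
                data_ = copy;
                size_ = other.size_;
            }
            return *this;
        }
    };

    int main() {
        IntList a(4);
        IntList b = a;  // deep copy; with the default copy constructor
                        // this line would set up the crash on scope exit
        return 0;
    }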

  8. Have you tried working with NI support? You definitely should. If they can reproduce it then they should be able to figure out if it's a bug in LabVIEW or a bug in your code. If it's a bug in LabVIEW then they should help you get a workaround and file a bug report to make sure it gets fixed.

    From your attempt to debug, can you tell if the free call came from LabVIEW or your DLL?

    It's tempting to say "it must be LabVIEW because it works all these other ways", but memory corruption, uninitialized memory, and other memory-related bugs can be very sneaky. They can lurk for a long time without causing any (noticeable) problems, and then suddenly some code around them changes and everything goes to hell. I'm not saying it's NOT a bug in LabVIEW, but I haven't seen this problem before, so I don't know.

  9. QUOTE (Val Brown @ Feb 26 2009, 05:58 PM)

    Understood, and both are what I've seen over the years (i.e. since LV5). I'm asking -- a bit clumsily -- whether benchmarks will be done on the toolkit as well.

    I don't know. I will mention the request to see benchmarks for specific toolkits as well. Thanks for the feedback.

  10. QUOTE (Val Brown @ Feb 26 2009, 04:32 PM)

    Does that apply to the Signal Processing Toolkit as well? For me performance there is critical. And FWIW, compile and release it for Mac, please. As I understand it, the issue really is just a compile... :rolleyes:

    That applies to anything which is not core LabVIEW. If the Signal Processing Toolkit improves performance then it can have its own benchmarks. If its performance improves because of LabVIEW itself getting better then you should see that in more general benchmarks.

  11. QUOTE (Mark Yedinak @ Feb 26 2009, 02:21 PM)

    You may also want to consider things like processing time on large array operations and manipulation.

    What kind of operations/manipulations? A lot of the focus on improving performance with large data structures has been on finding ways to avoid copying them, so if we do that right then operations on individual elements within them should be just as fast no matter how big the array is. Are there specific whole-array operations that you think are performance issues and change between LabVIEW releases?

  12. QUOTE (Neville D @ Feb 26 2009, 12:31 PM)

    I'm specifically asking for benchmark ideas for LabVIEW, not drivers or extra toolkits. Working with deeply-nested structures is general enough to benchmark, but IMAQ algorithm performance is dependent on code that is independent of LabVIEW. Similarly I'm excluding things like DAQ performance or RT hardware. Those are things worthy of benchmarks, but those benchmarks should compare different versions of their respective products, not different versions of LabVIEW.

  13. When we release a new version of LabVIEW we like to be able to compare its performance to previous releases. As we make improvements to the compiler, the runtime, the execution system, and specific algorithms, we often see the same applications run faster in the newer version. However, we usually only do these comparisons for specific changes that we know we've made, in order to highlight those improvements. We haven't really settled on any standard benchmarks that we run against every release. We think that would be a good idea, but we want to ask the various LabVIEW communities which benchmarks they think would be valuable.

    Here are some questions that you may have answers to: When a new version of LabVIEW comes out and you are deciding whether or not to upgrade, what kinds of performance issues do you consider important in making that decision?

    What kind of general benchmarks would you like to see us run on every release of LabVIEW?

    Example benchmarks might be how long it takes to run a certain FFT or how fast we can stream data to or from disk.
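
    As a toy illustration of the disk-streaming kind, something like this C++ snippet (the file name and sizes are arbitrary):

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<char> block(1 << 20, 0);  // 1 MiB buffer
        const int blocks = 256;               // 256 MiB total
        std::FILE* f = std::fopen("bench.dat", "wb");
        if (!f)
            return 1;

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < blocks; ++i)
            std::fwrite(block.data(), 1, block.size(), f);
        std::fclose(f);
        double secs = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();

        std::printf("wrote %d MiB at %.1f MiB/s\n", blocks, blocks / secs);
        std::remove("bench.dat");
        return 0;
    }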

  14. I'm guessing that lvsqlite.dll is actually a wrapper that someone wrote to better interact with LabVIEW. That would explain why the functions don't match. You need to either find the source for the wrapper DLL and port it or write your own wrapper. I would start by talking to whoever wrote the SQLite wrapper.
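
    If you do end up writing your own wrapper, the shape of it might look like this. The lv_sqlite_* names here are invented; only the sqlite3_* calls are the real SQLite C API:

    #include <sqlite3.h>

    // Flat C exports that are easy to call from a Call Library Node.
    extern "C" int lv_sqlite_open(const char* path, sqlite3** db) {
        return sqlite3_open(path, db);  // returns SQLITE_OK (0) on success
    }

    extern "C" int lv_sqlite_exec(sqlite3* db, const char* sql) {
        // No row callback; fine for statements that return no rows.
        return sqlite3_exec(db, sql, nullptr, nullptr, nullptr);
    }

    extern "C" int lv_sqlite_close(sqlite3* db) {
        return sqlite3_close(db);
    }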
