Everything posted by Adam Kemp
-
If you haven't already done so, please report this as a bug.
-
If you're using typedefs for your enums then the rules are pretty much the same: they work well within a project (we can update the uses of that typedef automatically), but they can cause problems for VIs that aren't in memory when the change is made. I still think you're OK if you just add to the end, though. This is tricky in any language, really. An enum is basically a symbolic representation of a numeric value. If you change the definition of that enum such that the symbols map to different values than they did before, then you may break existing code. It's just one of the things you have to keep in mind when writing a reusable API.

More magic. When debugging is enabled we emit a little extra code between nodes that handles things like probes and breakpoints. We could do what C debuggers do (use the processor's breakpoint mechanism), but that's difficult to do in a cross-platform way. Our current mechanism lets us do the same thing on all platforms, and it even allows remote debugging and probes through the same mechanism. It's just more flexible.

We try to optimize the callers based on what we can get away with to avoid copies of data, but that means that changes in the subVI sometimes affect the callers. For instance, if you have an input and an output terminal of the same type and they weren't "inplace" before (meaning the compiler couldn't use the same spot in memory for both) but they are now, then the caller may need to change. Or it could be the opposite (they were inplace, but now they're not). It could also be that an input was modified inside the subVI before but now it's not (or the other way around). If you use dynamic dispatch or call by reference then you're somewhat shielded from these concerns (we can adapt at runtime), but you lose some of those optimizations. You may end up with copies of data that aren't strictly necessary.
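To make the enum point concrete, here's a hedged C++ sketch (the enum names are made up for illustration):

// Hypothetical sketch of how enum edits map to numeric values.
// Original version of a reusable API's enum:
enum StatusV1 { Idle, Running, Done };              // Idle=0, Running=1, Done=2

// Safe: appending keeps every existing symbol's value unchanged.
enum StatusV2 { Idle2, Running2, Done2, Error2 };   // Error2=3; 0..2 unchanged

// Breaking: inserting in the middle renumbers later symbols. Any code or
// saved data that stored the number 2 meaning "Done" now decodes it as
// "Paused".
enum StatusV3 { Idle3, Running3, Paused3, Done3 };  // Done3 is now 3, not 2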
-
Good point. Just to make sure Chad understands, though, you DO need the padding part in the cluster that you pass to the Call Library Node if you ever plan on passing an array of these things to the DLL. What you'll do is have two cluster types, one with the padding and the cluster of 32 U8s, and then another one without the padding and a String control instead of the U8 cluster. You'll just convert from one cluster type to the other one and return the one that works well with LabVIEW. That way the users of your API never see the ugly stuff that you needed just to call the C code. If you only ever pass one then the padding at the end is not important. Don't you just love low-level programming? Aren't you glad LabVIEW hides this stuff from you?
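To see why the trailing padding matters for arrays, here's a small C++ sketch (the struct and its fields are hypothetical, not the actual data from this thread):

#include <cstdint>
#include <cstdio>

// Hypothetical struct matching a DLL's layout: the double forces 8-byte
// alignment, so the compiler pads the tail of the struct.
struct Record {
    double  value;    // 8 bytes
    char    name[32]; // 32 bytes
    int32_t flags;    // 4 bytes, then 4 bytes of trailing padding
};

int main() {
    // sizeof includes the trailing padding, so element i of a C array lives
    // at offset i * 48, not i * 44. A LabVIEW cluster without the padding
    // element would misalign every record after the first one.
    printf("sizeof(Record) = %zu\n", sizeof(Record)); // typically 48
    return 0;
}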
-
Magic. Actually, the .lvclass file also contains a history of the edits you've made so that it can automatically mutate any old data on load. It's mostly transparent to the developer, and it makes mixing versions of classes much easier than in other languages. The only time you'll run into problems is if you have an older version of the class in memory and then later load some VIs that have data from a later version. I'm not sure exactly what happens in that case, but I think it would fail to load the VI.

There are few, if any, changes you can make to a typedef that won't break your clients. With typedefs you pretty much have to have all the VIs that use that typedef in memory when you make the change, and then you have to save those VIs after making the change. Classes are far superior for this use case. In fact, that's the main advantage of using classes. It's called encapsulation: hiding the details of the inside of the class so that the clients won't notice when those details change.

With enums I think you can add to the end of the enum, but removing or renaming an existing item will break the clients. If you really need to do something that would break a client then you could introduce a new type and a new VI which takes that type, and then deprecate the old type and the old VI. Rewrite the old VI to convert the old type into the new type and then forward the call to the new VI.

Aristos explained this decently. We do compile directly to machine code, but if you're making edits then we have to recompile next time you run. Once we've done that, though, there won't be any delay next time you run it (unless you make more edits). If you then save those VIs or build an executable, then next time they're loaded, or when the .exe runs, there won't be any compiling going on. They'll just run. We do compile and store in memory, but we also save the compiled code in your VI so that we don't have to recompile again next time you load.

The one caveat is that sometimes changing a subVI causes the callers to need to recompile, so you might get a prompt to save a VI you never changed directly. That's because we recompiled that VI to adapt to the changes in its subVI(s), and we want to save that new code so you don't have to recompile again next time you load. As I mentioned before, dynamic dispatch VIs (and call by reference) do extra work at runtime in case the VI you're calling changed inplaceness, so that's a case where you don't need to worry as much about breaking callers. You just have to keep the possible performance impact in mind.

Also, we compile directly to machine code and to calls into the runtime engine. For simple functions we just compile machine code, but sometimes it's easier and more efficient to compile machine code that calls a function we wrote in C++. That function lives in the runtime engine. Almost all compilers do that, including MSVC and GCC. That's why they also need runtime libraries.
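The deprecate-and-forward pattern translates to other languages too. A hedged C++ sketch (every name here is hypothetical):

// Hypothetical deprecate-and-forward pattern: the old entry point converts
// its argument and forwards, so both versions share one implementation.
struct OldConfig { int rate; };
struct NewConfig { int rate; int channels; };

// New API, taking the new type.
int Configure(const NewConfig& cfg) {
    return cfg.rate * cfg.channels; // stand-in for the real work
}

// Old API, kept so existing clients don't break, but marked deprecated.
[[deprecated("use Configure(NewConfig) instead")]]
int Configure(const OldConfig& cfg) {
    NewConfig upgraded{cfg.rate, /*channels=*/1}; // assume a sensible default
    return Configure(upgraded);
}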
-
I'm assuming that the "name" field in the cluster is a sub-cluster containing 32 U8s. In that case you can use the Cluster to Array primitive to convert to a U8 array, and then the Byte Array To String primitive to convert it into a string. You would still need to have one internal cluster definition for talking to the DLL and then another cluster with a real string and no padding elements to expose to your VI's callers. Still, it's doable.
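In C++ terms, the equivalent conversion might look like this (NameToString and the fixed 32-byte field are hypothetical):

#include <algorithm>
#include <string>

// Hypothetical equivalent of Cluster to Array -> Byte Array To String:
// the DLL hands back a fixed 32-byte name that may not be NUL-terminated.
std::string NameToString(const char (&name)[32]) {
    // Stop at the first NUL, or take all 32 bytes if none is present.
    const char* end = std::find(name, name + sizeof(name), '\0');
    return std::string(name, end);
}

int main() {
    char raw[32] = "device7";
    std::string s = NameToString(raw); // "device7"
    return 0;
}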
-
Yes, by default all VIs will go into the .exe using the same relative paths between them as you had outside the .exe (you can, of course, tell the build spec to put specific VIs in specific locations using custom destinations). It finds the most common root folder of all the VIs and uses that as the top-level directory inside the .exe. If the VIs span drives on Windows then the top-level directories will be the drive letters. The one exception is vi.lib, which gets its own top-level directory. This prevents common cases from having really deep hierarchies just for vi.lib VIs. This means if you call a vi.lib VI dynamically then you should write a wrapper VI for it in your .exe, which you can then call using a relative path. The idea is to not have any code which has to check whether you're in an .exe in order to build different paths. The paths should always be the same.
-
This is indeed fixed in 2009. VIs in a built executable or DLL are now stored with the same directory structure as they have in the development environment. This also makes code to handle path differences for call by reference in built apps unnecessary, and there's a new VI (Application Path) that returns the directory containing the executable so that you can find files outside of it. Overall I think handling of paths in executables is much easier in LabVIEW now.
-
I don't know. I suggest posting a comment on the NI Labs posting to ask that.
-
The dimming is showing that the class's private data control has changes which have not been applied. As you edit the control we don't want to constantly be recompiling and messing with other VIs, because you could be doing multiple things at once, changing your mind, etc. Instead we let you make your changes, and then when you save the class, close the private data control window, or choose File->Apply Changes in the private data control window, we update all the other VIs that need to be updated. While the class is in this intermediate state it's considered broken, because we don't yet know whether your changes will break other VIs, so it makes no sense to allow you to run them yet. Once you apply the changes we can do a real check to see if anything broke, and if not then those constants stop being dimmed.

None of this is JIT, though. This is all compile-time stuff. The child class is only broken because the parent class is broken, and the parent is only broken because it is in this intermediate state. As soon as you apply the changes the parent class becomes unbroken, and thus the child class becomes unbroken as well.

As long as you end up with a good (non-broken) parent class, nothing you change in the parent's private data will cause a recompile of the child class VIs. They don't even have to be in memory when you make the change, and they won't notice if they come into memory after the fact. The only thing you really need to worry about when editing the parent class is making sure that the child classes still meet all the requirements (i.e., dynamic dispatch VIs have the same connector pane). If you change the connector pane of a dynamic dispatch VI then you definitely have to modify your child classes.
-
Rolf, good point. I didn't notice that it was already padded correctly. Chad, if you're already having complications with byte swapping then I strongly suggest trying my approach from the second post (ignoring the pragma stuff). It will save a lot of time because it will allow you to just wire the cluster directly. What you might run into still is structs that contain pointers to other structs, and that's not easy to solve. Eventually it just becomes easier to create a wrapper DLL that works better with LabVIEW. You can also try the Import Shared Library feature. It's hit or miss since there are a lot of things C APIs can do that don't translate well to LabVIEW, but for a lot of simple interfaces (including the one in the first post here) it can do exactly the right thing automatically.
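For the pointer-containing case, a wrapper DLL typically flattens the data. A rough C++ sketch of the idea (every name here is hypothetical):

// Hypothetical struct with a pointer, which has no LabVIEW equivalent,
// and a wrapper that flattens it into something cluster-friendly.
struct Inner { double value; };
struct Outer { int id; Inner* detail; };

// Stub standing in for the real DLL function so the sketch is self-contained.
extern "C" int get_outer(Outer* out) {
    static Inner inner = {42.0};
    out->id = 1;
    out->detail = &inner;
    return 0;
}

// Flat struct that maps cleanly onto a LabVIEW cluster.
struct FlatOuter { int id; double detail_value; };

// Wrapper exported for LabVIEW: it follows the pointer and copies the data.
extern "C" int get_outer_flat(FlatOuter* out) {
    Outer tmp = {0, nullptr};
    int status = get_outer(&tmp);
    if (status == 0 && tmp.detail) {
        out->id = tmp.id;
        out->detail_value = tmp.detail->value;
    }
    return status;
}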
-
There is a new component available on NI Labs which allows you to do some GPU programming in LabVIEW. Check it out here: http://decibel.ni.com/content/docs/DOC-6064
-
JIT refers to compiling right before execution, starting from a partially compiled binary. For instance, .NET code compiles to a bytecode format (not directly to machine code), but instead of interpreting the bytecode, the runtime compiles it right before you run it into machine code optimized for your processor. LabVIEW doesn't actually do this kind of JIT compiling. The LabVIEW runtime engine doesn't do any compiling whatsoever, so if you have an interface parent class and a bunch of implementation child classes, and all of those are compiled and saved, then you don't have to worry about the runtime engine trying to recompile them.

The only thing you might need to worry about is what we call "inplaceness". This is an optimization our compiler uses to allow wires to share the same place in memory, even if they pass through a node or subVI. For instance, if you have an array wire that you connect to a VI that just adds 1 to every array element, then it may be possible (depending on how you wrote it) for that subVI to use the exact same array as its caller without any copy being made. Dynamic dispatching (and call by reference) complicates this a bit, because it could turn out that the specific implementation you end up calling at runtime has different inplaceness requirements than the one you compiled with. We do handle this at runtime if we find a mismatch, so it can add some overhead. I think some people solve this by always using an In Place Element structure (even in the empty parent class implementation) in dynamic dispatch methods where they really want a certain input/output to always be inplace. This just prevents the mismatch from occurring.
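A rough C++ analogy for inplaceness (both functions are hypothetical):

#include <vector>

// "Inplace": the caller's buffer is modified directly, so no copy is made.
// This is like a subVI whose input and output terminals share memory.
void AddOneInplace(std::vector<int>& data) {
    for (int& x : data) x += 1;
}

// "Not inplace": a new buffer is allocated and filled, like a subVI whose
// input and output terminals can't share the same spot in memory.
std::vector<int> AddOneCopy(const std::vector<int>& data) {
    std::vector<int> result;
    result.reserve(data.size());
    for (int x : data) result.push_back(x + 1);
    return result;
}

int main() {
    std::vector<int> v = {1, 2, 3};
    AddOneInplace(v);                    // v is now {2, 3, 4}; no allocation
    std::vector<int> w = AddOneCopy(v);  // w is a fresh buffer {3, 4, 5}
    return 0;
}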
-
There may be an open-source LabVIEW mysql API, but if not then you can always write one using the mysql C API (http://dev.mysql.com/doc/refman/6.0/en/c.html).
-
Actually, I think that you can simplify my examples quite a bit and use the original prototype you gave:

extern "C" short CCONV acq_board_info(short cnt, MyStruct_info *info);

Configure the CLN as follows:

return type: Numeric, Signed 16-bit Integer
param 1: Numeric, Signed 16-bit Integer
param 2: Adapt to Type, Data Format "Array Data Pointer"

When I tried this, LabVIEW didn't properly generate the C file (it put "void arg2[]" instead of "MyStruct_info arg2[]"), but I think it would pass the right data. You just have to make sure the type of the struct matches the cluster, and be sure to use the #pragma statements as above for alignment.
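For reference, here's a hedged sketch of what the DLL side might look like under that configuration (MyStruct_info's fields are invented, and the CCONV calling-convention macro is omitted):

#include <cstdint>

// Hypothetical layout; the real MyStruct_info must match the LabVIEW
// cluster exactly. The pack(1) pragma mirrors LabVIEW's alignment on
// 32-bit Windows, per the discussion above; other platforms use defaults.
#pragma pack(push, 1)
struct MyStruct_info {
    int16_t id;
    double  reading;
};
#pragma pack(pop)

// With "Array Data Pointer", LabVIEW passes a plain pointer to the
// contiguous element data: no handle and no dimSize prefix. cnt carries
// the element count.
extern "C" short acq_board_info(short cnt, MyStruct_info* info) {
    for (short i = 0; i < cnt; ++i) {
        info[i].reading = 0.0; // fill in real acquisition results here
    }
    return 0; // status code
}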
-
Rolf's suggestion is not the best way to do this. You actually can directly pass an array of clusters to C code and treat them as C structs, and you can even modify and resize that array. Here's how:

Configure your Call Library Node parameter as "Adapt to Type", and make sure that you have "Handles by Value" selected for the Data Format. Once that's done, wire up your array of clusters (if you start with an empty array, just create an empty array constant with the right type and wire that). Now right-click on the Call Library Node and choose "Create .c file...". That will generate a file containing C code with the proper C types for your array of clusters and the right function prototype. It will look something like this:

/* Call Library source file */

#include "extcode.h"

/* Typedefs */

typedef struct {
    int32_t Element;
} TD2;

typedef struct {
    int32_t dimSize;
    TD2 Cluster[1];
} TD1;

typedef TD1 **TD1Hdl;

void funcName(TD1Hdl arg1);

void funcName(TD1Hdl arg1)
{
    /* Insert code here */
}

Obviously we don't do a good job of naming these types, so you should rename them first (e.g., replace TD1 with MyArray and TD2 with MyCluster).

Important: on 32-bit Windows you need to do one more thing to this code to make it work right in every case. Modify it like so:

/* Call Library source file */

#include "extcode.h"

/* Typedefs */

#if MSWin && ProcessorType != kX64
#pragma pack(push,1)
#endif

typedef struct {
    int32_t Element;
} MyCluster;

typedef struct {
    int32_t dimSize;
    MyCluster Cluster[1];
} MyArray;

typedef MyArray **MyArrayHdl;

#if MSWin && ProcessorType != kX64
#pragma pack(pop)
#endif

void funcName(MyArrayHdl arg1);

void funcName(MyArrayHdl arg1)
{
    /* Insert code here */
}

The #if/#endif and #pragma lines are the ones you need to add. They fix alignment on 32-bit Windows, because LabVIEW on that platform does not use the default alignment. If you don't do this then the C code will not interpret the data correctly in some cases.

With that done, you just have to implement your code. Note that the function takes a MyArrayHdl (a.k.a. a MyArray**). This is a "handle" to a LabVIEW array. Internally, a LabVIEW array is a structure containing an int32 dimension size (one per dimension) followed by an inline array of the elements. So, for instance, to sum all the elements in the example above you would write code like this:

int32_t sum = 0;
if(arg1) // empty arrays will have a NULL handle
{
    for(int32_t i = 0; i < (*arg1)->dimSize; ++i)
    {
        sum += (*arg1)->Cluster[i].Element;
    }
}

That's all it takes to read or modify the existing elements of the array of clusters.

What about resizing? That's a bit trickier, but it's still possible. To resize the array you need to resize the handle, update the dimSize, and initialize any new elements (if you grew the array). When you resize the handle you have to calculate the size of the whole array in bytes. Here's the correct way to grow the array above to numElems elements:

MgErr err = mgNoErr;
if( mgNoErr == (err = DSSetHSzClr(arg1, Offset(MyArray, Cluster) + sizeof(MyCluster)*numElems)) )
{
    (*arg1)->dimSize = numElems;
    // Initialize new elements
}
else
{
    // error (probably mFullErr)
}

If you allow for an empty array to be passed in then you might get a NULL handle, which you can't resize. To allow for that, change your Call Library Node by setting the Data Format of that parameter to "Pointers to Handles". This will change the type from MyArrayHdl to MyArrayHdl* (a.k.a. MyArray***).
You would then work with it like this:

MgErr err = mgNoErr;
size_t arraySizeInBytes = Offset(MyArray, Cluster) + sizeof(MyCluster)*numElems;
if(NULL != *arg1)
{
    err = DSSetHSzClr(*arg1, arraySizeInBytes);
}
else // empty array, must allocate
{
    if( NULL == ( *arg1 = (MyArrayHdl)DSNewHClr(arraySizeInBytes) ) )
        err = mFullErr;
}
if(mgNoErr == err)
{
    (**arg1)->dimSize = numElems;
    // Initialize new elements
}
else
{
    // handle error
}

The last thing you have to do is link your DLL to labviewv.lib (in the cintools directory of your LabVIEW installation, along with extcode.h). This gives you access to the DS* functions (and all the other functions in extcode.h). Make sure you use the labviewv.lib version. That's the one that's smart enough to use the correct versions of those functions even if you have multiple LabVIEW runtimes loaded in the same process.

Now, obviously a lot of this is a bit tedious (much harder than using a simple C-style array), but it's not actually very difficult once you know how to do it. Don't be afraid to try it. It's easier than it looks, and it can make your LabVIEW/C interactions much more flexible.
-
LV8.6 application builder, shared library glibc problem
Adam Kemp replied to xavier30's topic in Calling External Code
I'm not sure the extra error checking is going to help in this situation. It looks like your code is really trying to free some memory twice. Either your list itself is being deleted twice (look at frame #12) or your list implementation is trying to free something inside it that has already been freed. Maybe you tried to copy a list and copied some pointers instead of doing a deep copy. In that case you take list A, copy it into list B, delete either list A or B, and then when you try to delete the other list you crash because you already deleted those pointers. Make sure you have a valid copy constructor and copy assignment operator in your list class.

There shouldn't be any problem with using libstdc++.so.6 alongside libstdc++.so.5 as long as you don't try to take an STL object created by one and pass it to the other. I don't think that would happen in this case. The only real downside to using them both is that they take up more memory. I will mention that we have found bugs in libstdc++.so.5 that show up as double-frees (its std::string implements copy-on-write in a way that isn't thread-safe), and we had to work around that. I believe that's fixed in libstdc++.so.6, though, so I doubt that's what's going on.
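A minimal C++ sketch of the shallow-copy bug described above (the class and its members are hypothetical, not xavier30's actual code):

#include <algorithm>
#include <cstddef>

// Hypothetical list type showing why a missing deep copy causes a double-free.
class IntList {
    int*   data_;
    size_t size_;
public:
    explicit IntList(size_t n) : data_(new int[n]()), size_(n) {}
    ~IntList() { delete[] data_; }

    // Without these two, the compiler-generated copy duplicates only the
    // pointer, and the second destructor frees the same block again.
    IntList(const IntList& other)
        : data_(new int[other.size_]), size_(other.size_) {
        std::copy(other.data_, other.data_ + size_, data_);
    }
    IntList& operator=(const IntList& other) {
        if (this != &other) {
            int* fresh = new int[other.size_];
            std::copy(other.data_, other.data_ + other.size_, fresh);
            delete[] data_;
            data_ = fresh;
            size_ = other.size_;
        }
        return *this;
    }
};

int main() {
    IntList a(4);
    IntList b = a; // deep copy: destroying both a and b is now safe
    return 0;
}
-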
LV8.6 application builder, shared library glibc problem
Adam Kemp replied to xavier30's topic in Calling External Code
Along those lines, the Call Library Node now has error checking options to detect errors like this and attempt to recover from them. You should try enabling the highest error checking level. See if that detects any problems.
-
LV8.6 application builder, shared library glibc problem
Adam Kemp replied to xavier30's topic in Calling External Code
Have you tried working with NI support? You definitely should. If they can reproduce it then they should be able to figure out whether it's a bug in LabVIEW or a bug in your code. If it's a bug in LabVIEW then they should help you get a workaround and file a bug report to make sure it gets fixed. From your attempt to debug, can you tell if the free call came from LabVIEW or your DLL? It's tempting to say "it must be LabVIEW because it works all these other ways", but memory corruption, lack of initialization, and other kinds of memory-related bugs can be very sneaky. They can lurk for a long time without causing any (noticeable) problems, and then suddenly some code around them changes and everything goes to hell. I'm not saying it's NOT a bug in LabVIEW, but I haven't seen this problem before, so I don't know.
-
QUOTE (Val Brown @ Feb 26 2009, 05:58 PM)
I don't know. I will mention the request to see benchmarks for specific toolkits as well. Thanks for the feedback.
-
QUOTE (Val Brown @ Feb 26 2009, 04:32 PM)
That applies to anything which is not core LabVIEW. If the Signal Processing Toolkit improves performance then it can have its own benchmarks. If its performance improves because of LabVIEW itself getting better then you should see that in more general benchmarks.
-
QUOTE (Mark Yedinak @ Feb 26 2009, 02:21 PM)
What kind of operations/manipulations? A lot of the focus on improving performance with large data structures has been on finding ways to avoid copying them, so if we do that right then operations on individual elements within them should be just as fast no matter how big the array is. Are there specific whole-array operations that you think are performance issues and change between LabVIEW releases?
-
QUOTE (Neville D @ Feb 26 2009, 12:31 PM)
I'm specifically asking for benchmark ideas for LabVIEW, not drivers or extra toolkits. Working with deeply nested structures is general enough to benchmark, but IMAQ algorithm performance depends on code that is independent of LabVIEW. Similarly, I'm excluding things like DAQ performance or RT hardware. Those are worthy of benchmarks, but those benchmarks should compare different versions of their respective products, not different versions of LabVIEW.
-
When we release new versions of LabVIEW we like to be able to compare their performance to previous releases. As we make improvements to the compiler, the runtime, the execution system, and specific algorithms, we often see that the same applications run faster in the newer version. However, we usually only do these comparisons for specific changes that we know we've made, in order to highlight those improvements. We haven't really settled on any standard benchmarks that we compare with every release. We think that would be a good idea, but we want to ask the various LabVIEW communities which benchmarks they think would be valuable. Here are some questions that you may have answers to:

When a new version of LabVIEW comes out, and you are deciding whether or not to upgrade, what kinds of performance issues do you consider important in making that decision?

What kind of general benchmarks would you like to see us run on every release of LabVIEW? Example benchmarks might be how long it takes to run a certain FFT or how fast we can stream data to or from disk.
-
I'm guessing that lvsqlite.dll is actually a wrapper that someone wrote to better interact with LabVIEW. That would explain why the functions don't match. You need to either find the source for the wrapper DLL and port it or write your own wrapper. I would start by talking to whoever wrote the SQLite wrapper.