Leaderboard

Popular Content

Showing content with the highest reputation on 09/11/2012 in all areas

  1. Ah, I miss the good old days of the LAVA Lounge. Where have they gone? Let's liven it up a little! http://youtu.be/9bZkp7q19f0
    2 points
  2. Are you running the executable on the same computer as the development environment? These issues can be really tough to debug. For me it always ends up being unresolved dependencies which are either not distributed with the executable, or system-level libraries that are unavailable. Basically the executable was successfully built and there's nothing wrong with it, but it's not until you try to run it that you find out something critical is missing. You may not have explicitly included external dependencies or done any dynamic loading, but are you sure the emulators don't do as much? Or other code you may use?

     One trick you can play is this: open your LabVIEW project and look at your Dependencies. If you don't see the dependencies in the project explorer, make sure the menu item Project: Filter View: Dependencies is checked. There you should see anything that your project uses which isn't explicitly included in your project. You can probably ignore the things in user.lib and vi.lib, but you may also see things like DLLs or .NET assembly references here that need to be brought into the project.

     If you find dependencies, is it a system-level library? Stuff like kernel32.dll, or other Windows components. Chances are you won't need to worry about these, although they may restrict which operating systems you can use your executable on. If it's not a system-level dependency, make sure it gets included in your build. LabVIEW will usually pull any of these into a build automatically, but dynamic loading can leave no clear traceable path to them and easily result in them not being picked up when building.

     As an aside, a few stabs in the dark. Two issues I've repeatedly run into: various flavors of the Microsoft Visual C/C++ runtimes not being available on the system -- despite what many people think, there is an RTE for the various MSVC incarnations (just like there is for LabVIEW); they're just so common that many people forget about them. And if any LabVIEW code uses .NET (either your code or third-party code), you'll need a .NET 2.0-compatible installation. That would be any of the 2.0, 3.0, or 3.5 installations, as they're all additive and include 2.0 -- but beware, later versions do not. (A quick way to check for a missing runtime DLL is sketched after this post.)
    1 point
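     A minimal sketch in C of the runtime-DLL point raised in the post above: probe for the runtime library with LoadLibrary before the rest of the application has a chance to fail on it. The DLL name msvcr90.dll is only an illustrative assumption; substitute whichever runtime your external code actually links against.

        /* Probe for a runtime DLL at startup instead of letting a missing
         * dependency surface as a mysterious failure later. The DLL name is
         * an example only; use the one your external code links against. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HMODULE h = LoadLibraryA("msvcr90.dll");
            if (h == NULL) {
                printf("Runtime DLL not found (Win32 error %lu); install the"
                       " matching Visual C++ redistributable.\n",
                       (unsigned long)GetLastError());
                return 1;
            }
            printf("Runtime DLL is present.\n");
            FreeLibrary(h);
            return 0;
        }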
  3. We have suffered similar issues with our plugin-based application using packed project libraries. Any time we loaded more than one packed library calling the same "inlined" VI (say, for instance, a VI in our reuse library), it would break with a similar message. Individually they loaded fine. In the end we just turned off the inline feature on the reuse code and all was well. We haven't yet got around to thoroughly investigating and reporting the issue, but I might suggest experimenting with removing the inline settings and trying again.
    1 point
  4. No, C does not have any memory penalty, but if you cast an int32 into a 64-bit pointer, ignore the warning, and then try to access it as a pointer, you crash. LabVIEW is about three levels higher than C in terms of being a high-level language. It has, among other things, a strong data type system, rather effective automatic memory management, no need to worry about allocating array buffers before you can use them and releasing them afterwards, virtually no way to make it crash (as long as you aren't interfacing it to external code), and quite a few other things. This sometimes comes at some cost, such as needing to make sure the buffer is allocated.

     Also, LabVIEW data types separate into two very different kinds: those that are represented as handles (strings and arrays) and those that aren't. There is no way LabVIEW could "typecast" between these two fundamental kinds without creating a true data copy. And even when you typecast between arrays, such as typecasting a string into an array of 32-bit integers, it can't just reuse the buffer, since the 32-bit integer array needs a byte length that is a multiple of the element size; if your input string (byte array) length is not evenly divisible by four, it will need to create a new buffer anyhow (see the sketch after this post).

     Currently it creates a copy in all cases, as that makes the code simpler and saves an extra check on the actual fitness of the input array size in the case where both input and output are arrays. It could of course check whether the Typecast can be done in place, but if your wire happens to run to any other function that wants to use the string or array in place too, it needs to create a copy anyhow. All in all, this smartness would add a lot of code to the Typecast function, for a benefit that is achievable only in special cases.
    1 point
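     A small C sketch of the length constraint described in the post above: when a byte buffer is reinterpreted as 32-bit integers, any trailing bytes that do not fill a whole element must be dropped, which is why a fresh buffer (rather than in-place reuse) is unavoidable when the byte count is not a multiple of four. The function name and signature are illustrative only, not LabVIEW internals, and the endian swap the Typecast also performs is omitted here.

        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        /* Reinterpret a byte buffer as int32 elements. The element count is
         * truncated to whole 4-byte elements, so the data must be copied into
         * a freshly sized buffer whenever nbytes % 4 != 0. */
        int32_t *bytes_as_int32(const uint8_t *bytes, size_t nbytes, size_t *nelems)
        {
            *nelems = nbytes / sizeof(int32_t);        /* drop trailing bytes */
            int32_t *out = malloc(*nelems * sizeof(int32_t));
            if (out != NULL)
                memcpy(out, bytes, *nelems * sizeof(int32_t));
            return out;
        }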
  5. OK, I will ask again what I think is an obvious question here: why doesn't NI include a native feature to serialize LabVIEW objects in an exchangeable way? (Alternatively, why doesn't NI provide enough access to allow a third party to develop such a framework?) For me, "exchangeable" definitely means in a manner that allows the data to be shared between platforms. (Hence having "default data" without specifying the values of the default data is not allowed.) Moreover, using a more common format (such as "Simple XML") is appropriate. Of course, including the object version number is only meaningful within LabVIEW, but this is useful within LabVIEW thanks to LabVIEW objects' capability to translate between versions. (Note: I recognize the versioning can't avoid all possible issues, but in practice I think that is rarely a practical problem.) I understand that for security reasons a developer may want to turn off the ability to serialize an object. To support that, I envision a checkbox to allow serialization (default = True) in the class properties dialog.

     I think XML is the best option for this for several reasons: 1) It is a common way to serialize objects in different environments, which means I can exchange serialized data with Java applications, for example. 2) It is readable, albeit not easily readable, by human beings. (I actually don't want humans to read serialized data very often -- and really never the operator -- but it is good that they can on the rare occasion when they need to do so.)

     Why I think NI should implement this: 1) It is relatively straightforward for NI to do, since NI can already serialize a class to the current (non-interchangeable) LabVIEW XML format. 2) Having this capability would greatly expand the application space of LabVIEW, since it would make it orders of magnitude easier to interface with non-LabVIEW applications. This is by far the most compelling reason to include this feature. 3) That there is a need for this is quite obvious, given the number of lengthy discussions just on LAVA about this topic. 4) The current situation, in which each class must contain specific code for serialization, is patently inefficient and nonsensical. 5) In other major languages meaningful object serialization is a given, and LabVIEW should include (indeed, must include) this functionality to be competitive.

     For the record, to serialize LabVIEW object data for communication within LabVIEW we use either the methods to flatten to string or to XML, and this works fine. I realize it's not theoretically 100% fool-proof, because of potential issues across different object versions, but in practice we use version control, so that we build applications using the same versions of interface code (usually), and we only have one large system, so we can pretty easily control our deployed applications. (I think that versioning an application could achieve the same.) In practice, we've never experienced a version problem with this approach, and it avoids having to write any class-specific code (which, again, a developer should definitely not have to do) to support serialization.
    1 point
  6. The Typecast node will never reuse the input buffer for the output. In fact, not only will it copy the entire buffer, on Intel processors it will visit each element to put it into big-endian format and then visit each element again to put it back to little-endian. If an API requires users to typecast significant amounts of data, I would consider that a deficiency in the API that should be redesigned. For cases where the array elements are the same size, it would be possible to write a DLL that takes both types and swaps the array handles. Be sure to configure both parameters as pointer to array handle, then swap the handles (see the sketch after this post).
    1 point
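     A minimal C sketch of the handle-swap DLL suggested in the post above, assuming both parameters are configured as "pointer to array handle" in the Call Library Function Node and both arrays have elements of the same size. The typedef and function name here are placeholders; a real implementation would use the array handle declarations from LabVIEW's extcode.h.

        /* Placeholder for a LabVIEW array handle (a pointer to a pointer to
         * the array block). Swapping the handles never touches the element
         * data, so this is only valid when both element types have the same
         * size. */
        typedef void **LvArrayHandle;

        __declspec(dllexport) void SwapArrayHandles(LvArrayHandle *a, LvArrayHandle *b)
        {
            LvArrayHandle tmp = *a;
            *a = *b;
            *b = tmp;
        }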