Source compatible vs. Binary compatible


Daklu

[Thinking out loud...]

I've been reading a book on developing APIs (Practical API Design - Confessions of a Java Framework Architect) that touches on the difference between source compatible code and binary compatible code. In Java there are clear distinctions between the two. If I'm understanding correctly, in Java source compatibility is a subset of binary compatibility. If a new version of an API is source compatible with its previous version it is also binary compatible; however, it is possible for an API to be binary compatible without being source compatible.

My initial reaction was that LabVIEW's background compiling makes the distinction meaningless--anything that is source compatible is also binary compatible and anything that is binary compatible is also source compatible. In fact, I'm not even sure the terms themselves have any meaning in the context of LabVIEW.

After thinking about it for a while I'm not sure that's true. Is it possible to create a situation where a VI works if it is called dynamically but is broken if the BD is opened, or vice-versa? I'm thinking mainly about creating VIs in a previous version of LabVIEW and calling them dynamically (so it is not recompiled) in a more recent version.


I've been reading a book on developing APIs (Practical API Design - Confessions of a Java Framework Architect) that touches on the difference between source compatible code and binary compatible code. In Java there are clear distinctions between the two. If I'm understanding correctly, in Java source compatibility is a subset of binary compatibility. If a new version of an API is source compatible with its previous version it is also binary compatible; however, it is possible for an API to be binary compatible without being source compatible.

I think you have it backwards here - binary compatibility is usually a subset of source compatibility. An API which is source compatible but not binary compatible means that you can't run your existing binaries with the new API, but you can recompile your code without modifying anything. Binary compatibility means you can install your new API and your application continues to run without any changes. If a change is significant enough to break source compatibility then it almost definitely breaks binary compatibility as well, but it's easy to have a minor change that requires recompiling but no changes to the source.
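A minimal Java sketch of that last point, with hypothetical class and method names: widening a parameter type is a change every caller's source survives, but existing binaries do not.

import java.util.List;

// v1 of this hypothetical library class declared:
//     public void log(java.util.ArrayList<String> lines)
// v2 widens the parameter as below. A call site such as
// logger.log(new ArrayList<String>()) still compiles unchanged
// (source compatible), but the compiled method descriptor changed
// from (Ljava/util/ArrayList;)V to (Ljava/util/List;)V, so a caller
// binary built against v1 fails at run time with NoSuchMethodError
// (binary incompatible).
public class Logger {
    public void log(List<String> lines) {
        for (String line : lines) {
            System.out.println(line);
        }
    }
}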

My initial reaction was that LabVIEW's background compiling makes the distinction meaningless--anything that is source compatible is also binary compatible and anything that is binary compatible is also source compatible. In fact, I'm not even sure the terms themselves have any meaning in the context of LabVIEW.

After thinking about it for a while I'm not sure that's true. Is it possible to create a situation where a VI works if it is called dynamically but is broken if the BD is opened, or vice-versa? I'm thinking mainly about creating VIs in a previous version of LabVIEW and calling them dynamically (so it is not recompiled) in a more recent version.

A better LabVIEW example might be LV 8.5.0 versus 8.5.1, and versus 8.6. Code written in 8.5.1 is source compatible with 8.5, but code written in 8.6 is not. None are binary compatible - a recompile (even though automatic) is always required, and the matching runtime version is necessary to run an executable. I don't think you ever get binary compatibility between LabVIEW versions, but I've never tried the situation you described. NI works hard to ensure source compatibility between versions by providing translations for functions that change - like the "Convert 4.x data" option on typecast - but without those translations, source compatibility would break.


I think you have it backwards here - binary compatibility is usually a subset of source compatibility.

Here's the relevant part from the book.

import java.awt.*;
import java.util.*;

/** Could be compiled on JDK 1.2, before java.util.List was created */
public class VList extends List {
    Vector v;
}

This compiled without a problem in Java 1.1, but as soon as Java 1.2 appeared, the code snippet became invalid. Java 1.2 introduced new collection classes, among them java.util.List. As a result, the class name List became ambiguous in the preceding code snippet, as it can mean java.awt.List (as during the Java 1.1 compilation) as well as java.util.List. In short, adding new classes into existing packages isn't source compatible.

I believe import statements in Java are simply an edit time shortcut and the compiled code would be fully namespaced. Wouldn't that make that snippet binary compatible but not source compatible? The book goes on to say trying to maintain binary compatibility is a useful goal but spending a lot of time on source compatibility doesn't make much sense in Java, which indicates binary compatibility can't be a subset of source compatibility. (Not in Java anyway.) Maybe source and binary compatibility are independent?
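For illustration, a minimal sketch of the source-level fix for the book's example - the same class with fully qualified names - which also hints that the clash exists only in the source:

import java.util.Vector;

// With fully qualified names there is nothing for the new
// java.util.List to collide with, so this compiles on both
// Java 1.1 and Java 1.2. The ambiguity was purely a
// source-level problem; the compiled class file always
// referenced java/awt/List by its full name.
public class VList extends java.awt.List {
    Vector v;
}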

None are binary compatible - a recompile (even though automatic) is always required, and the matching runtime version is necessary to run an executable. I don't think you ever get binary compatibility between LabVIEW versions...

I don't think that's correct. A recompile is done automatically if the VI is loaded in the dev environment, but in previous discussions I've been told the run-time engine doesn't do any compiling. Without a recompile the VIs must be binary compatible.

This is a bunny-with-a-pancake-on-its-head moment.

I have no idea what that means (maybe that's the point?)... but it makes for an interesting mental picture...


I don't think that's correct. A recompile is done automatically if the VI is loaded in the dev environment, but in previous discussions I've been told the run-time engine doesn't do any compiling. Without a recompile the VIs must be binary compatible.

There's no guarantee that the compiled code of helloworld.vi in 8.5 isn't radically different from the compiled code of helloworld.vi in 8.6, given the number of changes (optimizations, added/removed nodes, low-level stuff in general) between versions.

For the sake of argument, assume there is binary compatibility - those VIs won't run in any different RTE without a recompile, so what's the point? Or perhaps that simply defines that there is no binary compatibility.


There's no guarantee that the compiled code of helloworld.vi in 8.5 isn't radically different from the compiled code of helloworld.vi in 8.6, given the number of changes (optimizations, added/removed nodes, low-level stuff in general) between versions.

Doesn't matter. In principle binary compatibility, like source compatibility, only requires the entry and exit points to remain the same. What happens inside is irrelevant. (Assuming the behavior of the API has not changed.)
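A minimal Java sketch of that idea, with a hypothetical API class: as long as the public signature - the entry and exit point - stays fixed, the body can be rewritten freely without breaking either kind of compatibility.

import java.util.Arrays;

// Hypothetical API method. v1 summed the array with a plain loop;
// v2 uses a stream instead. The method descriptor sum([I)I is
// unchanged, so callers neither need recompiling (binary compatible)
// nor editing (source compatible).
public class MathApi {
    public static int sum(int[] values) {
        return Arrays.stream(values).sum();
    }
}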

For the sake of argument, assume there is binary compatibility - those VIs won't run in any different RTE without a recompile, so what's the point?

If that's true then the following must also be true...

  • The LabVIEW RTEs aren't backwards compatible. (Since compiling only occurs in the dev environment.)
  • If I distribute a code module that is intended to be dynamically called at runtime I have to distribute the correct RTE as well.
  • If I have an executable created in 2009 that dynamically calls code modules created in 8.2, 8.5, and 8.6, when the executable is run I'll end up with four different run-time engines loaded into memory. (Either that or the application simply won't run, though I have no idea what error it would generate.)

Or perhaps that simply defines that there is no binary compatibility.

Uhh.... what? Since each VI saves its compiled code as part of the .vi file I'd be more inclined to think there is no source compatibility. (But that doesn't really work either...)

Ugh... that gave me a headache.


Consider the following:

A plugin architecture.

The main application has a typedef enum called 'Error Level'.

Version 1.0 of the application has 3 values in the typedef.

A plugin is built against version 1.0 and distributed as an LLB without block diagrams.

Now for version 1.1 the typedef is expanded with 200 new error levels.

This version won't load the old version of the plugin.

If the plugin has programmed all its case structures with a 'Default' case, the only thing needed is a recompile (source compatible).
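Ton's scenario has a rough Java analogue, sketched below with hypothetical names. (Note that in Java adding enum constants happens to be binary compatible, whereas the LabVIEW typedef change forces a recompile, so the LabVIEW case is source compatible at best.)

// Host v1.0 ships three levels; v1.1 appends many more constants.
enum ErrorLevel { INFO, WARNING, ERROR }

class Plugin {
    // The default branch is what keeps the plugin working when the
    // host grows the enum; without it, new levels would fall through
    // unhandled.
    String describe(ErrorLevel level) {
        switch (level) {
            case INFO:    return "informational";
            case WARNING: return "warning";
            case ERROR:   return "error";
            default:      return "unknown level";
        }
    }
}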

Ton


Code written in 8.5.1 is source compatible with 8.5, but code written in 8.6 is not. None are binary compatible

Actually, that's a bad example, as 8.5 and 8.5.1 were binary compatible and you did not have to recompile VIs written in one to run in the other. This actually presented a problem in fixes made in the code generation in 8.5.1, because if you had a VI which was buggy in 8.5, it remained buggy in 8.5.1 until you forced a compile (either by editing and saving or by force or mass compiling). The example is valid for older bugfix versions (and maybe newer as well).

A recompile is done automatically if the VI is loaded in the dev environment, but in previous discussions I've been told the run-time engine doesn't do any compiling.

That's correct. Version X of the RTE can only run VIs saved in that version of LV. As such, the points made here:

  • The LabVIEW RTEs aren't backwards compatible. (Since compiling only occurs in the dev environment.)
  • If I distribute a code module that is intended to be dynamically called at runtime I have to distribute the correct RTE as well.
  • If I have an executable created in 2009 that dynamically calls code modules created in 8.2, 8.5, and 8.6, when the executable is run I'll end up with four different run-time engines loaded into memory. (Either that or the application simply won't run, though I have no idea what error it would generate.)

are also correct. For the last part, I'm pretty sure you would need to explicitly load the VIs in the correct RTE by running an application compiled in that version and using VI server to call that application (the app input on the Open VI Reference function), although I believe DLLs don't need this, presumably because they have a standard mechanism of using the correct run-time.

As for the actual question, what are you trying to get to? I don't know Java, but it seems to me that binary compatibility is akin to the different versions of LV, and source compatibility is akin to not being able to open code written in a newer version or to having your code stop working if the API's interface was changed (like the report generation VIs in 8.6 or the config file VIs in 2009, which were put into libraries and therefore caused you to lose the ability to do certain things (which, admittedly, were not in the public APIs as represented in the palettes)).

In any case, I don't think you have to borrow things from other languages. Some concepts apply and some don't.


Actually, that's a bad example, as 8.5 and 8.5.1 were binary compatible and you did not have to recompile VIs written in one to run in the other. This actually presented a problem in fixes made in the code generation in 8.5.1, because if you had a VI which was buggy in 8.5, it remained buggy in 8.5.1 until you forced a compile (either by editing and saving or by force or mass compiling). The example is valid for older bugfix versions (and maybe newer as well).

Actually that is not true either. Generally, compiled VIs in an x.x.1 version can be loaded into an x.x.0 runtime and vice-versa and executed. It could in some corner cases give strange (visual) effects or calculation artefacts, but in general it works. But before LabVIEW 8.5, if you loaded a VI that was not EXACTLY the same version into the development system, it always got recompiled automatically.

For the last part, I'm pretty sure you would need to explicitly load the VIs in the correct RTE by running an application compiled in that version and using VI server to call that application (the app input on the Open VI Reference function), although I believe DLLs don't need this, presumably because they have a standard mechanism of using the correct run-time.

That is true if you try to load the VI through VI server. It is not true if you compile those VIs into a DLL and call that DLL through the Call Library Node. If the LabVIEW version the DLL was created with matches the caller's version, the VIs in that DLL are loaded into the current LabVIEW system and executed there. If the versions do not match (not sure about the bug fix version difference here), the DLL is loaded through the corresponding runtime system and run that way.


That's correct. Version X of the RTE can only run VIs saved in that version of LV. As such, the points made here... are also correct. For the last part, I'm pretty sure you would need to explicitly load the VIs in the correct RTE by running an application compiled in that version and using VI server to call that application (the app input on the Open VI Reference function), although I believe DLLs don't need this, presumably because they have a standard mechanism of using the correct run-time.

I didn't know that, and it's a pretty important bit of information if I'm building a framework to base all our applications on. (On the other hand, now I'm wondering why there's all the fuss over the non-palette VIs NI has made inaccessible. Those changes only prevent you from upgrading the source code to the next version--existing applications should work just fine.)

As for the actual question, what are you trying to get to?

Knowledge. I'm trying to understand the long term consequences of decisions I make today.

In any case, I don't think you have to borrow things from other languages. Some concepts apply and some don't.

I agree, but there is almost no information available describing best practices for creating reusable code modules or designing APIs in LabVIEW. So I have to turn to books talking about how to do these things in other languages and try to figure out which concepts do apply and how they translate into LabVIEW. Hence the question about applying source compatibility vs binary compatibility concepts to LabVIEW.

I don't know Java, but it seems to me that binary compatibility is akin to the different versions of LV, and source compatibility is akin to not being able to open code written in a newer version or to having your code stop working if the API's interface was changed (like the report generation VIs in 8.6 or the config file VIs in 2009, which were put into libraries and therefore caused you to lose the ability to do certain things (which, admittedly, were not in the public APIs as represented in the palettes)).

[More thinking out loud]

I think I agree with you. Getting back to what ned was talking about, it makes more sense to me to approach it from the user's point of view.

  • Source compatibility means a new version of the code doesn't break the run arrow in VIs using the code.
  • Binary compatibility means a new version of the code doesn't break executables that link to the code. (Ned actually used the word binaries but I think that term is ambiguous in Labview due to the background compiling.)

But there's potentially another type of compatibility that does not fall into either of those categories. Maybe it's possible to create changes that don't break the run arrow, don't break precompiled executables that link to the code, but do generate run-time errors when executed in the dev environment? If that is possible, what would it look like?

Also, if we assume all our development is done in the same version of LabVIEW, is it possible to create changes that cause source incompatibility or binary incompatibility, but not both? As I was poking around with this last night I did create a situation where a change to my "reuse code" allowed a precompiled executable to run fine but opening the executable's source code resulted in a broken run arrow. I have to play with it some more to nail down a repro case and figure out exactly what's happening, but I thought it was interesting nonetheless.


That is true if you try to load the VI through VI server. It is not true if you compile those VIs into a DLL and call that DLL through the Call Library Node. If the LabVIEW version the DLL was created with matches the caller's version, the VIs in that DLL are loaded into the current LabVIEW system and executed there. If the versions do not match (not sure about the bug fix version difference here), the DLL is loaded through the corresponding runtime system and run that way.

Maybe I've been approaching the idea of runtime libraries all wrong. I have always assumed the ability to create DLLs was used only as a way to make LabVIEW code modules available to other languages. LabVIEW seems to have built-in capabilities that make DLLs unnecessary when used with executables built in LabVIEW. Am I wrong? Is it advantageous to create DLLs out of my LabVIEW runtime libraries instead of dynamically linking to the VIs?

...now I'm wondering why there's all the fuss over the non-palette VIs NI has made inaccessible. Those changes only prevent you from upgrading the source code to the next version--existing applications should work just fine.

Some of us find those VIs really useful and use them directly in our code.

...there is almost no information available describing best practices for creating reusable code modules or designing APIs in LabVIEW.

This might not help you very much, but the new AAL course has some API design and implementation stuff in it. I haven't read that bit, but njhollenback wrote it, so I expect it's good.


Maybe I've been approaching the idea of runtime libraries all wrong. I have always assumed the ability to create DLLs was used only as a way to make LabVIEW code modules available to other languages. LabVIEW seems to have built-in capabilities that make DLLs unnecessary when used with executables built in LabVIEW. Am I wrong? Is it advantageous to create DLLs out of my LabVIEW runtime libraries instead of dynamically linking to the VIs?

No, I think integrating LabVIEW modules as DLLs into a LabVIEW application is a fairly roundabout way of doing business. It is possible and even works most of the time, but there are gotchas.

1) The DLL interface really limits the types of parameters you can pass to the module and retrieve back.

2) There is a LabVIEW => C => LabVIEW parameter translation for all non-flat parameters (arrays and strings) unless you use LabVIEW native datatypes (handles) AND the DLL is in the same version as the caller => Slowing down the call.

3) When the versions don't match, a proxy marshalling of all data parameters is also necessary, much like what ActiveX does for out-of-process servers (but it is not the same marshalling mechanism as in ActiveX), since the DLL and the caller really execute in two different processes => Slowing down the call.

4) The DLL cannot communicate with the caller through any means other than its parameter interface or platform system resources (events, etc.). Notifiers, LabVIEW events, semaphores, etc. may be shared and meaningful when the DLL and the caller are the same version, but they are certainly completely useless if the DLL is in a different LabVIEW version than the caller.

There are probably a few more gotchas that I haven't thought of at the moment.


If I'm understanding you correctly, you want to compile your reuse code into enclosed components which you will dynamically link to both from the IDE and from executable code. Maybe we can call them Dynamic Link Libraries.

What's your motivation for doing this? I can see some advantages, but it seems to me that the headaches involved far outweigh the advantages.

That said, here's one possible implementation - You could probably use the Get VI Version method to get the version of the VI (although I'm not sure it will work when you run it in the RTE. You might need to save the version in the "DLL" somehow). You will then need to launch a background process with the correct version (RTE) and open and run the VI using VI server. As I said, I think the cons (not least of which are the performance issues you'll probably run into) outweigh the pros.

As for this:

I did create a situation where a change to my "reuse code" allowed a precompiled executable to run fine but opening the executable's source code resulted in a broken run arrow.

Are you sure they both ran the same version (e.g. the EXE would have its own copy by default unless you explicitly call the VI on disk by ref)? Also, one way you could probably break it is by having incompatible typedefs like Ton said. There are probably others, but these are corner cases. I can't say I personally see the need for running after them. Of course, if you are going to be playing with this stuff, you will want to know what can bite you.


This might not help you very much, but the new AAL course has some API design and implementation stuff in it. I haven't read that bit, but njhollenback wrote it, so I expect it's good.

That's good to know. I'll have to keep my eyes open for a home learning kit.

No, I think integrating LabVIEW modules as DLLs into a LabVIEW application is a fairly roundabout way of doing business.

I can see some advantages, but it seems to me that the headaches involved far outweigh the advantages.

That's what I needed to know. Thanks for the feedback; now I won't waste a bunch of time experimenting with it.

If I'm understanding you correctly, you want to compile your reuse code into enclosed components which you will dynamically link to both from the IDE and from executable code. Maybe we can call them Dynamic Link Libraries.

What's your motivation for doing this?

It's not that I want to do that specifically. I just recognize that it is an available option but I had no idea what the pros/cons are of using that technique. You and rolf both indicated that using DLLs in that way isn't a best practice. That's good enough for me.

That being the case, if I want to have my executables dynamically link to reusable code libraries at runtime, what's the best way to distribute the libraries? Package them in .llbs? Any particular place they should go? Target computers may not have LabVIEW installed so user.lib doesn't seem like a good option. If they're not in user.lib then the application developer needs to use the VIs from the runtime library, which takes me back to the original thought of distributing a package of components that are dynamically linked to from both the IDE and the executable, except that it's in a .llb instead of a DLL. (I'm getting dizzy just thinking about it.) Or is it simply that dynamically linking to libraries at runtime is generally more trouble than it's worth? Maybe it's better to distribute reuse libraries as devtime tools and include all the necessary components as part of the .exe when compiling? It certainly is easier that way, but it would be nice to be able to update an application to take advantage of new library improvements without having to recompile the source code.

My long-range motivation is that I need to design a reuse library to support testing our products during the development phases. Unfortunately the product map is quite complex. Each product has multiple revisions, which may or may not use the same firmware spec. We have multiple vendors implementing the firmware specs in ICs and each vendor puts their own unique spin on how (or if) they implement the commands. Some devices require USB connections, some require serial connections, some support both. Of those that support both, some commands must be sent via USB, some must be sent via serial, and some can be sent through either. Of course, the set of commands that can be sent on a given interface changes with every revision. (And this is only the start...)

In short, it's a mess. I believe the original intent was to have a common set of functions for all revisions of a single device, but it has grown in such a way that now it is almost completely overwhelmed by exceptions to the core spec.

Are you sure they both ran the same version (e.g. the EXE would have its own copy by default unless you explicitly call the VI on disk by ref)?

I made sure I was using the call by ref prim for precisely that reason. My original experiments are on my computer at work. I'll post a quick example if I can repro them at home.

Also, one way you could probably break it is by having incompatible typedefs like Ton said.

Nope, no typedefs. I've learned that using typedefs across package distribution boundaries is often ill-advised.

Of course, if you are going to be playing with this stuff, you will want to know what can bite you.

Exactly. The lack of readily available documentation of best practices means I have to spend a lot of time exploring nooks and crannies so I can develop my own list.


I don't have any experience in doing something like this (and I doubt many others do), but it seems to me that the cumbersome syntax needed to call that many VIs by reference makes it impractical (and that's before touching on LLBs compiled for another RTE). In Java, this works because the syntax doesn't change.

Some options, off the top of my head:

  • Look more closely into VIPM. You might be able to use it to manage all the versions and maybe automate some of the deployment process yourself. Similarly, you could have a build server so that you don't have to manually build the new version when updating the reuse library.
  • Use more LVOOP. Dynamic dispatch can simplify the syntax (I think), but you will have to be careful about your dependencies and class hierarchies. Then again, you already know enough about that.
  • Compile the reuse code in each version into LLBs and distribute that for each application. If app X was built in LV 2009, place the LLBs which were compiled in 2009 in its folder. You could probably even do this without any dynamic linking by configuring the destinations when building the EXE. OpenG Builder or VIPM might help you there.

