
Recompiled VIs + SCC


jgcode


I ran into a nasty issue where a build for Real-Time was returning incorrect data!

The solution was to recompile the top-level VI and save all callers (you have to save them since it's RT).

The problem arose because most VIs were read-only, as they were under SCM.

What is the standard protocol for this from an SCC point of view?

Should I check out the VIs, compile/build/archive-build, then check them back in and tag them as "recompiled" or similar?

Or should I check out the VIs, compile/build/archive-build, then get the last revision to overwrite the new local copies?

What do you guys do?

Cheers

JG


Should I check out the VIs, compile/build/archive-build, then check them back in and tag them as "recompiled" or similar?

That's what I do. Check out the entire project, recompile, check it all back in. We have lots of checkins that are nothing more than recompiles.


That's what I do. Check out the entire project, recompile, check it all back in. We have lots of checkins that are nothing more than recompiles.

It's what I do as well. For the comment* I'll generally say "Recompile Only: <some reason>", where the reason describes why the recompile was necessary (e.g. modified a typedef control, etc.).

~Dan

* My SCC tool is configured to require comments for all check-ins/commits.


Do you have a rule/protocol to always recompile before each build?

And if you do, have you automated this process (e.g. for a nightly build?)

Nope, and nope. In general I don't think you need to recompile prior to a build as I would assume the build process includes a recompile. I could be wrong though... it's just what I've always assumed.

Our tools are for internal customers to support product development, so our entire QA process is somewhat on the light side. ("Unit testing? Why bother?") We don't typically produce large applications that need nightly builds.
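(For anyone who does want to automate a recompile before a nightly build: the step can be scripted. Below is a minimal sketch, assuming a newer LabVIEW version that ships the LabVIEWCLI tool; the project path, target name, and build spec name are hypothetical, and the operation/parameter names should be checked against the LabVIEWCLI documentation for your version.)

```python
# Hypothetical nightly-build helper: force a mass compile, then run the build.
# Assumes a LabVIEW version that ships LabVIEWCLI; operation and parameter
# names should be verified against the LabVIEWCLI documentation.
import subprocess

PROJECT = r"C:\work\MyRTApp\MyRTApp.lvproj"   # hypothetical project path
SOURCE_DIR = r"C:\work\MyRTApp\source"        # hypothetical source directory

def run(args):
    """Run a command and fail loudly so the nightly build reports errors."""
    print(">", " ".join(args))
    subprocess.run(args, check=True)

# 1. Mass compile the source tree so no VI is left in a "needs recompile" state.
run(["LabVIEWCLI", "-OperationName", "MassCompile",
     "-DirectoryToCompile", SOURCE_DIR])

# 2. Run the build specification (e.g. the RT startup executable).
run(["LabVIEWCLI", "-OperationName", "ExecuteBuildSpec",
     "-ProjectPath", PROJECT,
     "-TargetName", "RT Target",           # hypothetical target name
     "-BuildSpecName", "RT Startup EXE"])  # hypothetical build spec name
```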


I ran into this a ton back at TI, with respect to wanting the system to not have to compile anything at load time... that's just sloppy.

But then I had to balance that against checking in a file a ridiculous number of times just for a recompile.

In the end I just gave in to having a ton of useless check-ins.

I think this is something worth someone at NI addressing our stance on (which I really don't know, but I figure it'll be "just check in a bunch").

I'll try to pass this on to some other folks, but no guarantees.

Keep bitching about it though, and maybe propose some solutions.

my 2c


Nope, and nope. In general I don't think you need to recompile prior to a build as I would assume the build process includes a recompile. I could be wrong though... it's just what I've always assumed.

Well, that's what I thought/assumed too, but I just had the time of my life finding this out!

I was using the Scan Engine (SE) on LabVIEW Real-Time (RT), and until I recompiled, the Variable nodes returned either an error or the wrong value; the source ran fine though!

I don't know if it's an issue with SE/RT, or if I should be forcing a recompile before each build, but it raised a lot of questions.

I ran into this a ton back at TI, with respect to wanting the system to not have to compile anything at load time... that's just sloppy.

But then I had to balance that against checking in a file a ridiculous number of times just for a recompile.

In the end I just gave in to having a ton of useless check-ins.

Yes, I wanted to know if I could avoid this.

I think this is something worth someone at NI addressing our stance on (which I really don't know, but I figure it'll be "just check in a bunch").

I'll try to pass this on to some other folks, but no guarantees.

Keep bitching about it though, and maybe propose some solutions.

Cheers, much appreciated :cool:


Thanks guys,

Do you have a rule/protocol to always recompile before each build?

And if you do, have you automated this process (e.g. for a nightly build?)

Cheers

JG

I have written a little utility that does some of this.

Our process (using ClearCase as our SCC) is to check out and make changes only on the files that we are really changing, then check those files back into SCC, ignoring the recompiles.

So if I then load my top-level VI, I get a sometimes very long list of files that need recompiles. With the top-level VI open, I run a tool that finds all VIs with unsaved changes in memory (I stumbled upon the property node "VI Modification Bitset"). I then automatically do a ClearCase checkout of all these files, re-save them, and do a ClearCase checkin with a common comment like "File has been automatically updated due to recompile".

Dannyt
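(For reference, here is a rough sketch of what the SCC half of an automation like dannyt's could look like, driven from outside LabVIEW. It assumes the LabVIEW-side tool has already used the "VI Modification Bitset" property to collect the paths of the VIs needing a re-save into a text file; the file name, helper function, and comment text are placeholders, not dannyt's actual utility.)

```python
# Sketch only: automate the ClearCase checkout/checkin around a recompile.
# Assumes "recompiled_vis.txt" (hypothetical) lists one VI path per line,
# produced by a LabVIEW tool that reads the "VI Modification Bitset" property.
import subprocess

VI_LIST = "recompiled_vis.txt"
COMMENT = "File has been automatically updated due to recompile"

def cleartool(*args):
    """Invoke ClearCase's command-line tool and stop on the first failure."""
    subprocess.run(["cleartool", *args], check=True)

with open(VI_LIST) as f:
    vi_paths = [line.strip() for line in f if line.strip()]

# Check the files out so they become writable in the view.
for vi in vi_paths:
    cleartool("checkout", "-nc", vi)

# ... here the LabVIEW-side tool would re-save the now-writable VIs ...

# Check everything back in with a common "recompile only" comment.
for vi in vi_paths:
    cleartool("checkin", "-c", COMMENT, vi)
```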


I ran into this a ton back at TI, with respect to wanting the system to not have to compile anything at load time... that's just sloppy.

But then I had to balance that against checking in a file a ridiculous number of times just for a recompile.

In the end I just gave in to having a ton of useless check-ins.

I think this is something worth someone at NI addressing our stance on (which I really don't know, but I figure it'll be "just check in a bunch").

I'll try to pass this on to some other folks, but no guarantees.

Keep bitching about it though, and maybe propose some solutions.

my 2c

I do NOT use SCC at this time, but I've just gone through a somewhat painful experience because of inplaceness. I believe that inplaceness may be part of this SCC problem.

Specifically, I have an llb containing several dialogs that were originally written in LV 5.0 and have only been recompiled up to 7.0 and then 8.6. The dialog VIs are used to load and display BMP and JPG files, and implement error in/error out terminals.

There were no BMP functions in LV 5.0, so the group used some info-LabVIEW-based BMP functions. The BMP functions output to an intensity graph, and the JPG functions to a picture control. The dialog VIs would hide and show either the picture or the intensity graph based on the extension of the image, but there was no data flow between the property nodes, causing a race condition: the image control would randomly be visible or hidden.

I proceeded to refactor the VIs. I changed them from polling buttons to an event structure, replaced the info-LabVIEW library with the NI-supplied BMP functions, and was able to eliminate the intensity graph. I was also able to add support for PNG while I was at it.

The problem was the error clusters. Before refactoring, the dialog VIs passed the error cluster through (through various cases in a state machine using shift registers); BUT the error cluster never passed through any functions, property nodes, or sub-VIs. After refactoring, I had a bunch of callers of these VIs that required a recompile.

OK, I figured it had something to do with changing the front panel (I removed an indicator). I refactored again and left the intensity graph hidden, off the front panel's visible area. SAME PROBLEM, recompiles required. WTF?!

I determined the problem was that I had wired the error cluster through nodes that could modify the contents of the error cluster.

I had not added, removed, or changed any controls or indicators on the FP. I had not modified the connector pane. It was the error cluster. See the attached example:

1. Open the top-level VI. No recompile is required.
2. Open the sub-VI and, in case 1, modify the error cluster wire to pass from the file close to the shift register. Save. Exit LabVIEW.
3. Open the top-level VI again and it will show an asterisk.
4. Open the sub-VI and restore the error cluster wire in case 1 as it was. Save the sub-VI and exit LabVIEW.
5. Open the top-level VI: no recompile is required, even though the sub-VI has been modified multiple times.

This brings up an important point. If you use template VIs that include error in/error out handling where there is CURRENTLY no change to error in/error out, you'd better modify your template to create an artificial data change, or the inplaceness algorithm will bite your @ss. Without an artificial data change, one bit of functionality that you wire up to the error cluster will cause the callers to need to be recompiled. This appears to become more of a problem with SCC.

There was a discussion about error handling style a year or two ago. I remember stating that if a VI contains no nodes that modify the error cluster, then I don't personally add error in/error out to the connector pane. I guess I was smarter than I thought!

Quote of the day: Inplaceness is synonymous with insidiousness ...

EDIT: Artificial data change is my own term (unbundle and bundle). When I re-read my post, I can't help but think that the compiler is getting better and smarter, and we're getting sneakier and sneakier. How long will it be before the compiler can figure out an artificial data change?

Recompile Problem.llb

  • 1 year later...

I do NOT use SCC at this time, but I've just gone through a somewhat painful experience because of inplaceness. I believe that inplaceness may be part of this SCC problem.

...

Quote of the day: Inplaceness is synonymous with insidiousness ...

EDIT: Artificial data change is my own term (unbundle and bundle). When I re-read my post, I can't help but think that the compiler is getting better and smarter, and we're getting sneakier and sneakier. How long will it be before the compiler can figure out an artificial data change?

So today I learned that LabVIEW has a function that forces a data copy (an artificial data change).

The "Always Copy" function on the Memory Control palette should make my merge VI safer; I need to test this out...

