
[CR] CRC computation reuse library, malleable for 8/16/32/64-bit LV2023



Nice.

This is probably one of the only times (1 in 1,000,000) I would suggest an XNode may be preferable - specifically for the lookup tables, which should always be more performant.

With an XNode, one can pre-calculate the tables at design time based on the type and save the cost of generating the table at first run. This also means the calculation will take constant time whether it's the first call or a later one. XNodes are tricky and complicated beasts, so I could understand not wanting to go down this hairy rabbit hole littered with rusty nails.

Speaking of the table generation: I noticed you have the VI set to reentrant clones. I think this will be a problem when you run multiple instances, as the shift registers may not contain the values you expect per instance.



Shaun,

I would always assume that the LUT code path is more performant than the eight shifts per byte of the brute-force method, but only after you've amortized the cost of building the table (which is specific to the polynomial input value).  So for one-off calls to compute a CRC on a sufficiently short array of bytes (and since it is a well-known pattern), I left the brute-force BD code in place.  I set the LUT generation to trigger only on first call OR on a change in poly between calls, and the table lives in a USR.

VIMs have to be set to reentrant - are you thinking that "shared" (vs. "preallocated") clones will not handle the "First Call?" primitive correctly here?  I hadn't really thought that through; I could certainly set the code to preallocated.  Somehow I thought it didn't matter here, since I thought VIMs just embedded their code into their caller's BD.  (But it's late and I'm too tired to ponder this much right now.)
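For anyone following along without the library open, here's a rough text-language sketch of the two code paths (hypothetical Python, purely illustrative - not the actual VIM code): the bit-at-a-time brute-force loop, and the table-driven path that regenerates the 256-entry LUT only on first call or a poly change.

```python
# Hypothetical Python sketch (not the actual VIM code) of the two code paths,
# using plain CRC-8 (poly 0x07, init 0x00, no reflection) as the example.

_cached_poly = None    # stands in for the "last poly seen" shift register
_cached_table = None   # stands in for the 256-entry LUT held in a USR

def crc8_bitwise(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Brute force: eight shifts per byte, no table required."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def _build_table(poly: int) -> list:
    """One-time cost: run the eight-shift loop for every possible byte value."""
    return [crc8_bitwise(bytes([i]), poly, 0x00) for i in range(256)]

def crc8_table(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Table-driven: regenerate the LUT only on first call or a poly change."""
    global _cached_poly, _cached_table
    if _cached_table is None or poly != _cached_poly:
        _cached_table = _build_table(poly)
        _cached_poly = poly
    crc = init
    for byte in data:
        crc = _cached_table[crc ^ byte]
    return crc

# Both paths agree; the table path only wins once the build cost is amortized.
assert crc8_bitwise(b"123456789") == crc8_table(b"123456789")
```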

As for the XNode suggestion - I've never actually created one.  I'm uncertain how that would work: the code to generate the LUT at edit time would need to know the poly value, which I left as part of the normal parameterization (poly, init, input and output reflection, output XOR), so it would only be known at execution time.  Again, if I'm thinking straight.
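(For reference, by "normal parameterization" I mean the usual Rocksoft-style parameter set. A hypothetical Python equivalent, just to show the knobs - illustrative only, not the VIM's actual interface - might be:)

```python
# Illustrative Rocksoft-style CRC with the parameter set mentioned above
# (width, poly, init, input/output reflection, output XOR); not the VIM's code.

def reflect(value: int, bits: int) -> int:
    """Reverse the low `bits` bits of value."""
    return int(f"{value:0{bits}b}"[::-1], 2)

def crc(data: bytes, width: int, poly: int, init: int,
        refin: bool, refout: bool, xorout: int) -> int:
    topbit, mask = 1 << (width - 1), (1 << width) - 1
    reg = init
    for byte in data:
        if refin:
            byte = reflect(byte, 8)
        reg ^= byte << (width - 8)
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & mask if reg & topbit else (reg << 1) & mask
    if refout:
        reg = reflect(reg, width)
    return reg ^ xorout

# e.g. CRC-16/CCITT-FALSE over the standard check string:
assert crc(b"123456789", 16, 0x1021, 0xFFFF, False, False, 0x0000) == 0x29B1
```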

Thanks for the quick feedback, BTW!

Dave

To "X, the unknown": sure, I can back-save the whole thing, or you can put it up on the version-conversion forum if you need it that way immediately.  I find the LV back-save process really cumbersome in that it seems to mangle dependency paths in a non-intuitive way.  I'll try to get around to it eventually.

7 hours ago, David Boyd said:

are you thinking that "shared" (vs. "preallocated")

Yes, that's exactly what I am thinking (but poorly communicated). This is a commonly known gotcha for VIs with shift-register memories (not the "First Call?" primitive per se).

It will probably only bite you when you have multiple instances and it's being used with different CRC types of different integer lengths.

Here's an example:

sub VI set to Preallocated (what we expect: 11 more than the initialise value):


sub VI set to Shared:


If you run continuously, you will see other values as different threads become available at different times.
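If it helps to see the gotcha outside of LabVIEW, here's a rough analogue (hypothetical Python, illustrative only): "Preallocated" behaves like each call site owning its own stateful instance, while "Shared" behaves like a pool of stateful instances handed to whichever call site asks next, so state seeded by one call site can surface at another.

```python
# Rough analogue of the clone-pool gotcha (hypothetical Python, illustrative only).

class Accumulator:
    """Stands in for a subVI whose uninitialised shift register holds state."""
    def __init__(self, init: int):
        self.total = init

    def add(self, x: int) -> int:
        self.total += x
        return self.total

# "Preallocated" style: each call site owns its own clone and its own state.
site_a = Accumulator(init=0)
site_b = Accumulator(init=100)
print(site_a.add(11), site_b.add(11))    # 11 111  <- what we expect

# "Shared" style: one pool; whichever clone is free goes to whoever calls next.
pool = [Accumulator(init=0)]
def shared_call(x: int) -> int:
    clone = pool[0]                      # both call sites can land on this clone
    return clone.add(x)

print(shared_call(11), shared_call(11))  # 11 22  <- second site sees the first site's state
```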

 

rentrant clones.zip


Shaun, you rock!  Thanks also for quickly turning around that anon's request for a back-save.

I'm still a bit mystified about the execution settings for the VIM.  The editor breaks the VIM if you attempt to enable debugging, forget to set it to inline, or leave it non-reentrant, but it DOES permit either reentrancy setting.  Since VIMs are, by definition, inlined into their caller's diagram and adapted at edit time to the datatypes the caller wires in, I can't really see how the "shared" vs. "preallocated" clone selection can make a difference.  It would seem that the content of any USR within the VIM would depend upon the caller's reentrancy settings at that point.

That being said, I did look at the online tutorial on building malleable VIs and noticed that "preallocated" is explicitly called for:

Quote

You must configure the malleable VI to be inlined by selecting File»VI Properties»Execution, enabling the Inline subVI into calling VIs and Preallocated clone reentrant execution options, and disabling the Allow debugging and Enable automatic error handling options.

 

So, um, oops.  I hope you corrected the reentrancy settings for the two malleables in the library when you backsaved to 2021.  I'm certainly going to correct that in my own library.

 

Thanks again!

Dave


Also, sorry X___, I realized belatedly that you're NOT an anonymous user.  A few of my elder brain cells were recalling old days where NI's LabVIEW forum reflected comp.lang.labview on Usenet, and postings from there ended up with a default anonymous identity.  (Perhaps that's what inspired you?).  No offense was intended.

Dave

23 hours ago, David Boyd said:

Also, sorry X___, I realized belatedly that you're NOT an anonymous user.  A few of my elder brain cells were recalling old days where NI's LabVIEW forum reflected comp.lang.labview on Usenet, and postings from there ended up with a default anonymous identity.  (Perhaps that's what inspired you?).  No offense was intended.

Dave

I date back to the pica.army.mil email list but never used Usenet.

I am so not anonymous that I had a bunch of members of that mailing list drop by one of my old labs to say hi and gift me an NI screwdriver (this was back in the day when NI was gifting its users with goodies and trying to cultivate relations with universities - just to give an idea of how far back I am talking).

I was already pestering them about LabVIEW shortcomings (in my defense, that was pre-undo).

Nothing changes...


I first started developing in LabVIEW (v4, pre-undo) in '97 and joined info-LabVIEW shortly thereafter, and that was already several years after Tom Coradeschi started it, so I recall feeling like I was late to the party.

Wow! I just remembered, it's info-LabVIEW's birthday today (February 14, 1991)!

And I think my toolbox at home still has an NI screwdriver with their Mopac address printed on it.

Cheers!


XNode enthusiast here.  I've never really minded the fact that the LUT needs to be generated on first run, but maybe that's because I use these CRCs many times in a run, so the first one being on the slow side doesn't bother me.  If you did go down the XNode route, the poly used could still be specified at edit time.  It could be a dialog prompt, similar to Set Cluster Size on the Array To Cluster function, where the user specifies the poly and reflection settings to use; the code then generates the LUT and uses that.  You could also have it update the icon of the XNode to show the poly and reflection settings used.  You could still have an option of specifying the value at run time, in which case the table would need to be generated on first call.  Again, I personally don't think I'd go this route, but if you do, you can check out my presentation here.  It references this XNode Editor I made.
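For what it's worth, the edit-time idea amounts to "run the table generator once, ahead of time, and paste the result in as a constant". A hypothetical Python generator (illustrative only, nothing to do with the actual XNode scripting) would look something like:

```python
# Hypothetical edit-time LUT generator (Python, illustrative only): run it once
# for a fixed poly and paste the emitted table into the diagram as a constant,
# so the run-time code never pays the generation cost on first call.

def make_crc16_table(poly: int = 0x1021) -> list:
    table = []
    for i in range(256):
        reg = i << 8
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
        table.append(reg)
    return table

if __name__ == "__main__":
    table = make_crc16_table()
    for row in range(0, 256, 8):         # emit eight entries per line
        print(", ".join(f"0x{v:04X}" for v in table[row:row + 8]) + ",")
```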

