Get LV Class Default Value speed issues



I have an application where I am loading plugins from disk and executing them.  Each plugin is a child class of my plugin class.  I am using the factory pattern.

During development, I started out by having a static plugin class object for testing purposes and then later switched to loading the plugin class default value from disk.

My architecture works by dynamically launching a generic plugin handler that then loads the required plugin class and dynamically launches a method in that class.  So, the handler and the plugin are disconnected from the main application.  They communicate using messages.

I am launching many of these plugins at the same time. A common use case involves hundreds of them (this is a test system).

When I switched from the static object to loading the classes from disk, I noticed a significant slowdown, especially with higher numbers of plugins (100+) loading at once. As an experiment, I made all of the loaded plugins the same class, so the disk load should only be incurred once. Even so, compared to the static plugin version there is a 4x reduction in execution speed.

So, it seems that the function that gets the default value of a class is much slower and more resource intensive than using a static object, even if the class is already loaded into memory.

I also suspect that this function runs in the root loop, causing blocking issues.

 

Does anyone know of a way to speed this up or mitigate the slowdown?  In the past (before LVOOP) I used to cache refs of dynamically loaded plugins so I would not incur the load penalty, but I don't see a way to do that here.

 

Thanks for any ideas.

 

-John


Its sister function “Get LV Class Path” is similarly glacial for no obvious reason, as is “GetLVClassInfo” from the VariantType library.  I've wondered whether the problem is just that they call functions running in the UI thread for some reason, but it could also be the root loop.

 

The only workaround I see is caching; store a set of default-value objects in a lookup table and check against this before calling “Get LV Class Default Value”.
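For anyone less familiar with the pattern, here is a minimal sketch of that check-the-cache-first idea. It is written in Python only because LabVIEW diagrams can't be pasted as text, and load_class_default_value is a hypothetical stand-in for the slow "Get LV Class Default Value" call.

```python
import time

def load_class_default_value(class_path):
    """Hypothetical stand-in for the slow "Get LV Class Default Value" call."""
    time.sleep(0.1)          # simulate the expensive load
    return object()          # simulate the returned default-value object

_default_value_cache = {}    # lookup table keyed by class path

def get_default_value(class_path):
    """Check the lookup table first; only pay the load cost on a cache miss."""
    if class_path not in _default_value_cache:
        _default_value_cache[class_path] = load_class_default_value(class_path)
    return _default_value_cache[class_path]
```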

 

I wish NI would put some effort into improving semi-crippled functions like these.


The best way I've found to reduce the penalty is to incur it at start-up of the system, where I put everything into lookup tables (I use variants for this). Start-up occurs infrequently in the systems I'm working with (every 6 months?), so the hit to production is minimal.
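A rough textual sketch of that preload-at-startup variant, with the same assumed loader function standing in for the class-loading call:

```python
def build_default_value_table(class_paths, load_class_default_value):
    """Pay the load cost for every known plugin class once, at start-up."""
    return {path: load_class_default_value(path) for path in class_paths}

# After this runs, getting a plugin's default value is a plain table lookup,
# with no call to the slow primitive during normal operation.
```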


Thanks for the input.  I decided to create an object cache FGV, and this seems to alleviate the speed issues.  There is still a blocking issue, since it is a singleton implementation, but that is unavoidable.  I suppose I could store the cache in a DVR or SEQ and 'peek' it to check for a hit, but I am not sure it is worth the added complexity.  Has anyone timed these options for performance?
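As a rough textual analogue of such an object cache FGV (not the attached VI itself), a lock-protected dictionary shows both the once-per-class load and the serialization that causes the blocking; the loader argument is again a hypothetical stand-in for the class-loading call.

```python
import threading

_cache_lock = threading.Lock()   # models the FGV's non-reentrant, one-caller-at-a-time behaviour
_object_cache = {}

def cached_default_value(class_path, load_class_default_value):
    """Serialized cache access: callers queue on the lock, each class loads at most once."""
    with _cache_lock:
        if class_path not in _object_cache:
            _object_cache[class_path] = load_class_default_value(class_path)
        return _object_cache[class_path]
```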

 

Anyway, attached is my simple object cache VI if you are interested.

 

-John

 

Class Object Cache.vi



P.S. I prefer this implementation to a 'load at startup' option because it allows the cache to change based on user interaction or on data-driven actions.  In my case, the user loads a script that defines which object classes will be needed, and this can change at any time.  I only incur the load penalty once per class that is actually needed, instead of loading every possible plugin class at startup.  This also allows new classes to be added while the application is running, without a restart.

Of course, if your application is time critical on every run, pre-loading will avoid the first run penalty.


 

I'd expect your implementation is about as good as you're going to get if you want to build your cache dynamically. The DVR/SEQ would block as well due to the refnum operations (no idea how efficient the locking mechanisms for those are relative to VIs).

