
Futures - An alternative to synchronous messaging


Recommended Posts

2 hours ago, shoneill said:

But, like others here, I don't get your point regarding the evils of OOP (Both in general and specifically in connection with this topic).

I bet a lot of the Windows subsystems you are used to interfacing with may or may not comprise objects. What difference does this make?

Generally it is LabVIEW's implementation of OOP: the poor compile times, the complexity, the maintainability, the ballooning of the code base and the bugs. Classical LabVIEW is easy and arguably produces more robust code that is easy to understand for engineers rather than CS academics. I often talk about "pure" and "applied" programmers (an analogue to pure and applied mathematics) and Classical LabVIEW is great for applied programmers. OOP is unnecessary complexity in all but the most fringe use cases, and it has sucked up all the development resources of the language for features that could instead have benefited how the vast majority of production code, the code that does real things, is written.

But no. Interfacing with the Windows subsystems that I'm used to never involves objects. It uses functions in dynamic libraries that take data arguments. Opaque pointers to objects are the quickest way to a GPF, and in LabVIEW that means taking out the IDE too. It is only when you get to .NET that you are forced to start interfacing with objects, and I think you know how unimpressed I am with that: it's banned from my projects. If I want to use .NET I use C#, not LabVIEW. One advantage of being a polyglot, so to speak, is that I'm not limited to one programming language and can choose the best tool for the job.

 

 

Edited by ShaunR
  • Like 1
Link to comment

Oh and BTW, I'm currently programming on FPGA targets with, you guessed it, lots and lots of objects.

I certainly don't see how I could achieve similar flexibility and scalability without utilising objects.  The fact that LabVIEW does a full, flat compilation of the whole object hierarchy (and thus all dynamic dispatch calls must be uniquely identifiable) makes some very interesting techniques possible which simply can't be done anywhere NEAR as elegantly without objects.

Or is that not OOP in your books?

Link to comment
36 minutes ago, shoneill said:

Oh and BTW, I'm currently programming on FPGA targets with, you guessed it, lots and lots of objects.

I certainly don't see how I could achieve similar flexibility and scalability without utilising objects.  The fact that LabVIEW does a full, flat compilation of the whole object hierarchy (and thus all dynamic dispatch calls must be uniquely identifiable) makes some very interesting techniques possible which simply can't be done anywhere NEAR as elegantly without objects.

Or is that not OOP in your books?

Show me the code.

Link to comment

Here are two small examples:

[Image: block diagram of AO Handler - 3xSC5 Serialised.lvclass:Parameter Handler.vi]

Here I have several sets of parameters I require for a multiplexed Analog Output calculation, including setpoints, limits, resolution and so on.  Each of the parameters is an object which represents an "array" of values with corresponding "Read at Index" and "Write at Index" functions.  In addition, the base class implements a "Latency" method which returns the latency of the read method.  By doing this I can choose a concrete implementation easily from the parent VI.  If I can tolerate more latency for one parameter, I use my dual-clock BRAM interface with a minimum latency of 3.  If I am just doing a quick test, or if latency is really critical, I can use the much more expensive "Register" version with a latency of zero.  I might even go insane and write up a version which reads from and writes to existing global variables for each element in the array.  Who knows?
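For anyone reading along without LabVIEW, the pattern maps roughly onto this hypothetical C++ sketch. The real code is a graphical block diagram, so every name and size below is illustrative only:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

// Hypothetical text-language sketch of the LVOOP design described above.
// The actual code is graphical LabVIEW; names and sizes are illustrative.

// Abstract base class: an "array" of values with indexed access, plus a
// Latency method reporting the read latency in clock cycles.
class ParameterArray {
public:
    virtual ~ParameterArray() = default;
    virtual int32_t ReadAtIndex(std::size_t i) const = 0;
    virtual void WriteAtIndex(std::size_t i, int32_t v) = 0;
    virtual int Latency() const = 0;  // read latency, in cycles
};

// Cheap in resources, but reads take a minimum of 3 cycles (dual-clock BRAM).
class BramParameterArray : public ParameterArray {
    std::array<int32_t, 16> mem_{};
public:
    int32_t ReadAtIndex(std::size_t i) const override { return mem_[i]; }
    void WriteAtIndex(std::size_t i, int32_t v) override { mem_[i] = v; }
    int Latency() const override { return 3; }
};

// Expensive (one register per element), but zero read latency.
class RegisterParameterArray : public ParameterArray {
    std::array<int32_t, 16> regs_{};
public:
    int32_t ReadAtIndex(std::size_t i) const override { return regs_[i]; }
    void WriteAtIndex(std::size_t i, int32_t v) override { regs_[i] = v; }
    int Latency() const override { return 0; }
};

int main() {
    BramParameterArray bram;
    RegisterParameterArray regs;
    // The parent VI picks whichever concrete class fits its latency budget.
    const ParameterArray* params[] = { &bram, &regs };
    for (const ParameterArray* p : params)
        std::cout << "latency: " << p->Latency() << '\n';  // prints 3, then 0
}
```

The caller chooses the concrete class in one place, and everything downstream only ever sees the base-class interface.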

[Image: Context Help window]

In this example I am using the base class "Latency" method to actually perform a calculation of the relative delays required for each pathway.  By structuring the code properly, this all gets constant folded by LabVIEW: the operations are performed at compile time, and my various pathways are guaranteed to remain in sync where I need them synced.  Even the code used to perform calculations such as offset correction can have an abstract class with several concrete implementations which can be chosen at edit time without having to completely re-write the sub-VIs.  I can tell my correction algorithm to "use this offset method", which may be optimised for speed, area or resources.  The code knows its own latency and slots in nicely, and when compiled, all the extra information is constant folded.  I just need to make sure the interface is maintained and that the latency values are accurate.
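Again purely as a hypothetical text-language sketch (in the FPGA original this arithmetic is constant folded at compile time; here it is ordinary runtime code): each pathway reports its own latency, and every path is padded to match the slowest one so that they all stay in sync.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical sketch of the delay-balancing calculation: pad each
// pathway with (worst latency - own latency) extra cycles so that all
// pathways line up. Names and values are illustrative only.
std::vector<int> BalanceDelays(const std::vector<int>& latencies) {
    const int worst = *std::max_element(latencies.begin(), latencies.end());
    std::vector<int> padding;
    for (int l : latencies)
        padding.push_back(worst - l);  // extra delay cycles for this path
    return padding;
}

int main() {
    // e.g. a BRAM path (3 cycles), a register path (0) and a pipelined path (2)
    for (int p : BalanceDelays({3, 0, 2}))
        std::cout << p << ' ';  // prints: 0 3 1
}
```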

How to do this without LVOOP on FPGA?  VI Server won't work.  Conditional disables are unwieldy at best and, to be honest, I'd need hundreds of them.

Link to comment
1 hour ago, shoneill said:

Here are two small examples:

[Image: block diagram of AO Handler - 3xSC5 Serialised.lvclass:Parameter Handler.vi]

Here I have several sets of parameters I require for a multiplexed Analog Output calculation, including setpoints, limits, resolution and so on.  Each of the parameters is an object which represents an "array" of values with corresponding "Read at Index" and "Write at Index" functions.  In addition, the base class implements a "Latency" method which returns the latency of the read method.  By doing this I can choose a concrete implementation easily from the parent VI.  If I can tolerate more latency for one parameter, I use my dual-clock BRAM interface with a minimum latency of 3.  If I am just doing a quick test, or if latency is really critical, I can use the much more expensive "Register" version with a latency of zero.  I might even go insane and write up a version which reads from and writes to existing global variables for each element in the array.  Who knows?

[Image: Context Help window]

In this example I am using the base class "Latency" method to actually perform a calculation of the relative delays required for each pathway.  By structuring the code properly, this all gets constant folded by LabVIEW: the operations are performed at compile time, and my various pathways are guaranteed to remain in sync where I need them synced.  Even the code used to perform calculations such as offset correction can have an abstract class with several concrete implementations which can be chosen at edit time without having to completely re-write the sub-VIs.  I can tell my correction algorithm to "use this offset method", which may be optimised for speed, area or resources.  The code knows its own latency and slots in nicely, and when compiled, all the extra information is constant folded.  I just need to make sure the interface is maintained and that the latency values are accurate.

How to do this without LVOOP on FPGA?  VI Server won't work.  Conditional disables are unwieldy at best and, to be honest, I'd need hundreds of them.

So. Pictures are now code? I would forgive a newbie for that, but come on!

FWIW, the Classical LabVIEW equivalent of dynamic dispatch is a case statement, and at the top level it would probably look identical to the first example if it were contained in a sub-VI. Apart from that.... very pretty, and don't forget to edit the wire colours for that added clarity ;)
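In text terms, the pattern I mean is the classic tag-plus-case, sketched here hypothetically in C++ (the enum stands in for the class hierarchy and the switch for dynamic dispatch; all names are made up):

```cpp
#include <iostream>
#include <stdexcept>

// Hypothetical sketch of the Classical LabVIEW pattern: an enum tag
// selects a case, and the case structure does the job that dynamic
// dispatch does in LVOOP. Names are illustrative only.
enum class Storage { Bram, Register };

int ReadLatency(Storage kind) {
    switch (kind) {  // the "case structure"
        case Storage::Bram:     return 3;
        case Storage::Register: return 0;
    }
    throw std::logic_error("unhandled case");
}

int main() {
    std::cout << ReadLatency(Storage::Bram) << '\n';  // prints 3
}
```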

1 hour ago, shoneill said:

Note, each and every object can be defined by the caller VI.  Each individual parameter can have a different latency as required.  For 10 parameters with 4 possible latencies, that's already around a million possible combinations of latencies.

Even if the caller has functions with different terminals?

Link to comment

There isn't a snowball's hope in hell that you're getting the full code, sorry dude.

The "classical LabVIEW equivalent" as a case structure simply does NOT cut the mustard, because some of the cases (while they will eventually be constant folded out of the equation) lead to broken arrows due to unsupported methods.  There's no way to have anything involving DBL in a case structure on FPGA.  Using objects it is possible, and the code has a much better re-use value.  Nothing forces me to use these objects only on FPGA.  I think you end up with an unmaintainable amalgamation of code in order to half-arsedly implement what LVOOP does for us behind the scenes.  But bear in mind I HAVE done something similar to this before, with objects and static calls in order to avoid DD overhead.  Performance requirements were silly.
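A rough text-language analogy for the broken-arrow problem (hypothetical C++, not LabVIEW's actual mechanism): every branch of a runtime switch has to compile for the target even if it can never execute, whereas a template, like dynamic dispatch, only has to compile for the variants actually used.

```cpp
#include <iostream>

// Hypothetical analogy only. In C++, a template is instantiated solely
// for the types actually used, so an unsupported variant never has to
// compile. Every branch of a runtime switch, by contrast, must compile
// even if it is unreachable. That is the case-structure problem on FPGA:
// a DBL case breaks the diagram even when it can never run.
template <typename T>
T Scale(T x) { return x * T(2); }

int main() {
    // Only Scale<int> is instantiated here; Scale<double> never has to
    // exist for a target that cannot support doubles.
    std::cout << Scale(21) << '\n';  // prints 42
}
```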

Regarding callers having different terminals..... that's a red herring, because such functions cannot be exchanged for one another, OOP or not.  Unless you start going down the "Set Control Value" route, which uses OOP methods BTW.  My preferred method is front-loading objects with whatever parameters they require and then calling the methods I need without any extra inputs or outputs on the connector pane at all.  This way you can re-use accessors as parameters.  But to each their own.
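Front-loading, again as a hypothetical C++ sketch (names illustrative): the parameters go in once at construction, so the method that does the work needs nothing extra on its "connector pane".

```cpp
#include <iostream>

// Hypothetical sketch of front-loading: configuration is handed to the
// object once, up front, and the method that does the work carries no
// extra configuration inputs or outputs. Names are illustrative only.
class OffsetCorrector {
    double offset_;
public:
    explicit OffsetCorrector(double offset) : offset_(offset) {}
    double Apply(double sample) const { return sample - offset_; }  // data in, data out
};

int main() {
    OffsetCorrector corrector(0.125);           // front-load the parameters once
    std::cout << corrector.Apply(1.0) << '\n';  // prints 0.875
}
```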

Link to comment
6 hours ago, shoneill said:

Here are two small examples:

Nice examples in FPGA

WRT OOP LabVIEW vs Old School (OS) LabVIEW, I think both approaches are lacking in one way or another.  I think Actor/Messenger frameworks solve some of the problems of OS LabVIEW, but I agree somewhat with ShaunR that they muddy the debugging waters significantly and strip away some of the original benefits of WYSIWYG OS LabVIEW in the name of "pure" computer science principles.  Still waiting for the IDE to catch up with VR glasses so I can see dynamically launched VIs and message paths in a third dimension during debugging.

If you look to our friends in the web and DB worlds, similar struggles are happening.  For instance, in theory you should normalize a relational database to sixth normal form, but not many people do.  And in fact people got so sick and tired of refactoring relational databases that in many situations they abandoned them completely.

 

3 hours ago, shoneill said:

Regarding callers having different terminals..... that's a red herring, because such functions cannot be exchanged for one another, OOP or not.

Likely in LV 2017 it won't be a red herring any more, thanks to "type enabled structures".  So in the future you may be able to have DBLs in FPGA without OOP and without breaking the code. Pure speculation on my part, though.

Link to comment
18 hours ago, shoneill said:

There isn't a snowball's hope in hell that you're getting the full code, sorry dude.

Then I cannot rewrite it in Classical LabVIEW, and it is just an argument of "my dad is bigger than your dad".

All my arguments are already detailed in other threads on here (which you refused to let me reference last time). You think it's great and I think not so much. I outline real implications of using LVPOOP (code size, compile times, et al.) and you outline subjective and unquantifiable measures like "elegance".

There is nothing that can't be written in any language using any philosophy. The idea that a problem can only be solved with OOP is false. It boils down to the efficacy of achieving the requirements, and OOP is rarely, if ever, the answer. After 30 years of hearing the sales pitch, I expect better.

Link to comment

My dad, your dad?  Oh come on. :rolleyes:

The code is not mine to share.  It belongs to the company I work for.  You also said for me to "show" you the code, not "give" you the code.

I'll at least define elegance for you as I meant it.

I call the solution elegant because it simultaneously improves all of these points in our code.

  • Increasing readability (both via logical abstraction and LVOOP wire patterns) - This is always somewhat subjective but the improvement over our old code is massive (for me)
  • Reducing compile times (FPGA compile time - same target same functionality - went from 2:40 to 1:22 - mostly due to readability and the resulting obviousness of certain optimisations) - this is not subjective
  • Lower resource usage - again not subjective and a result of the optimisations enabled by the abstractions - from 37k FF and 36k LUT down to 32k FF and 24k LUT is nothing to sneeze at
  • Increasing code re-use both within and across platforms - this is not subjective
  • Faster overall development - this is not subjective
  • Faster iterative development with regard to changes in performance requirements (clock speed) - this is not subjective
Quote

There is nothing that can't be written in any language using any philosophy.

That's basically just a rehash of the definition of "Turing complete".  So your statement is untrue for any language that is not Turing complete (Charity or Epigram, thanks Wikipedia).  It also leaves out efficiency.  While you could theoretically paint the Sydney Opera House completely with a single hair, that doesn't make it a good idea if time or money constraints are relevant.  I mean, implementing VI Server on FPGA could theoretically be done; it just won't fit on any FPGA chip out there at the moment.....

Link to comment
7 hours ago, shoneill said:

Increasing readability (both via logical abstraction and LVOOP wire patterns) - This is always somewhat subjective but the improvement over our old code is massive (for me)

OOP obfuscates and makes code less readable. If you use dynamic dispatch, you even have to go through a dialogue and guess which implementation you are looking for. Abstraction does not make code more readable; it hides code and program flow. It may seem more readable than what you had originally, but that is a relative measure, and you know your code intimately.

7 hours ago, shoneill said:

Reducing compile times (FPGA compile time - same target same functionality - went from 2:40 to 1:22 - mostly due to readability and the resulting obviousness of certain optimisations) - this is not subjective

Tenuous at best, relying on the assumption that with your white-box knowledge of the code and object-oriented expertise you were able to identify optimisations. Optimisation is a function of familiarity with the code and recognising patterns. It is not the paradigm that dictates this; it is experience.

7 hours ago, shoneill said:

Lower resource usage - again not subjective and a result of the optimisations enabled by the abstractions - from 37k FF and 36k LUT down to 32k FF and 24k LUT is nothing to sneeze at

Again, tenuous, claiming that abstraction enabled optimisation. See my previous comment.

7 hours ago, shoneill said:

Increasing code re-use both within and across platforms

Code reuse is a common OOP sales pitch that has been proven false. It is LabVIEW that is cross-platform, so of course your code is cross-platform.

7 hours ago, shoneill said:

Faster overall development - this is not subjective

Again, this has been proven incorrect. The usual claim is that development is slower to begin with but that gains are realised later in the project, so overall it is faster. Project gains are dictated more by discipline and early testing than by paradigm. One-week agile sprints seem to be the currently accepted optimum.

7 hours ago, shoneill said:

Faster iterative development with regard to changes in performance requirements (clock speed)

Another sales pitch. See my previous comment.

Link to comment
1 hour ago, shoneill said:

:rolleyes:

For someone against OOP, your posts are really abstract.

Thank you :D I guess I'm learning :P

How's this for abstract?......

OOP can superficially describe "things" in the real world but is atrocious at modelling how "things" change over time.

[Image: stock photo, phases of a rotting yellow fruit]

Damn. Now my "Fruit" base class needs a "has worm" property :P

Link to comment
