Generally it is LabVIEW's implementation of OOP: the poor compile times, the complexity, the maintainability problems, the ballooning of the code base, and the bugs. Classical LabVIEW is easy and arguably produces more robust code that is understandable by engineers rather than CS academics. I often talk about "pure" and "applied" programmers (an analogue to pure and applied mathematics), and Classical LabVIEW is great for applied programmers. OOP is unnecessary complexity in all but the most fringe use cases, and it has sucked up all the development resources of the language for features that could have benefited how the vast majority of production code, code that does real things, is written.
But no. Interfacing with the Windows subsystems I'm used to never involves objects. It uses functions in dynamic libraries that take data arguments. Opaque pointers to objects are the quickest way to a GPF, and in LabVIEW that means taking out the IDE too. It is only when you get to .NET that you're forced to start interfacing with objects, and I think you know how unimpressed I am with that: it's banned from my projects. If I want to use .NET I will use C#, not LabVIEW. One advantage of being a polyglot, so to speak, is that I'm not limited to one programming language and can choose the best tool for the job.
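To make the contrast concrete, here's a minimal sketch in C (since C calling conventions are what these DLLs expose) of the flat, data-argument style I mean: a function exported from kernel32.dll that simply fills in a struct you hand it. This is the kind of call that maps cleanly onto LabVIEW's Call Library Function Node, with no object lifetimes or opaque instance pointers to get wrong.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Flat Win32 call: GetSystemTime is exported from kernel32.dll
       and takes a plain data argument (a pointer to a SYSTEMTIME
       struct). No objects, no vtables, no reference counting. */
    SYSTEMTIME st;
    GetSystemTime(&st); /* the function fills the struct in place */

    printf("UTC time: %04u-%02u-%02u %02u:%02u:%02u\n",
           st.wYear, st.wMonth, st.wDay,
           st.wHour, st.wMinute, st.wSecond);
    return 0;
}
```

If the struct layout or the calling convention is declared wrong you get a predictable crash at the call site, which is a far easier failure mode to debug than an opaque object pointer going stale somewhere downstream.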