
ShaunR
Members · Posts: 4,881 · Days Won: 296

Everything posted by ShaunR

  1. Thanks. It worked great, but I don't think I'm going to go through the other 13 versions installing it.
  2. Hmm. No 64-bit versions for 2009-2015. Thanks for the info.
  3. I've got it in 2019 but not in all the other versions. Is there a separate test install for each version?
  4. Potato, Potato. Like I keep saying (and you agreed earlier), the Priority Queue is ordered by PRIORITIES, not necessarily by the elements (we disagree on this bit). The underlying implementation is irrelevant to the name; it could even be a linked list.
  5. Moderator comment: Discussion started here. Nice. However, this is mine.
  6. I don't think it does Cyclomatic Complexity. Correct me if I'm wrong, because I rarely use it.
  7. Perhaps in the AF case. However, it's also fairly common for the underlying implementation to be a stack (LIFO), which is effectively what the DQMH implementation is. Like "accidental"? You underestimate the power of a marketing department, my friend.
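     Roughly what I mean by a LIFO-style priority path (a minimal Python sketch, purely illustrative, assuming "priority" just means enqueueing at the front of an ordinary queue):

        from collections import deque

        q = deque()

        def enqueue(msg, priority=False):
            if priority:
                q.appendleft(msg)   # jump the queue; consecutive priority messages end up reversed (LIFO)
            else:
                q.append(msg)       # normal FIFO behaviour

        enqueue("A"); enqueue("B")
        enqueue("P1", priority=True)
        enqueue("P2", priority=True)

        print(list(q))  # ['P2', 'P1', 'A', 'B'] - the two priority messages come out in reverse order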
  8. Huh? I never said anything about an elderly person. Are you in the right thread?
  9. Well. My supermarket obviously isn't as classy as yours. Flashing a Platinum Amex and then moving to the front would probably get you a bop on the nose.
  10. This I can get behind! 2 & 3 also map onto white-box and black-box testing. There is a test for 1 that I have seen (can't remember offhand what it was called) but it was mainly for C/C++ and counted things like the number of if/else statements or entries in Case statements to arrive at a figure for "complexity".
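     From memory it wasn't much more than counting decision points, something along these lines (a rough Python sketch; not the actual tool, which I can't name):

        import re

        def complexity(source: str) -> int:
            # crude cyclomatic-complexity estimate for C-like code:
            # 1 + the number of branching keywords/operators
            # (a real tool would strip comments and string literals first)
            return 1 + len(re.findall(r"\b(?:if|for|while|case)\b|&&|\|\||\?", source))

        snippet = """
        if (x > 0 && y > 0) {
            for (int i = 0; i < x; i++) { /* ... */ }
        } else if (y < 0) {
            /* ... */
        }
        """
        print(complexity(snippet))  # if, &&, for, if (from "else if") -> 1 + 4 = 5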
  11. Yes. I'm saying the "concept" of priority queues doesn't.
  12. As far as I'm aware, there is no guarantee (or expectation) that priority queues enforce ordering, only that higher priority messages will be executed before lower priority messages. I'm not familiar with the internal workings of the AF, but if what you say is true (that order, at the same level, is guaranteed) then more of what you term "complexity" happens when that isn't required. An emergency stop springs to mind, where you may not want the previously buffered messages to be executed, just the E-Stop. With the AF (based on your description) the user has to categorise different messages to different levels, and I suspect you would also argue that is a "complexity". I wouldn't, however. Neither would I for the DQMH.
      I take your point about the DQMH being difficult to debug and diagnose under certain conditions, but that is a limitation of the design and probably adequate for most scenarios if you don't make the guaranteed-order assumption. The AF code also carries practical debugging complexity, due to the code paths, which doesn't exist in the DQMH. So it's all six of one and half a dozen of the other to me. I expect the reverse order is more surprising to most people, but it probably compiles and executes significantly faster than the AF one (just a hunch), so the limitations may have been a compromise for that.
      If you need a priority queue that guarantees order then that feature in the DQMH is not for you, but I go back to my original statement that "there is no guarantee (or expectation) that priority queues enforce ordering, only that higher priority messages will be executed before lower priority messages" (this is a discussion that crops up in task schedulers too, by the way).
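      To illustrate the point (a minimal Python sketch, assuming a plain binary heap underneath, which is the most common implementation): elements pushed at the same priority are not guaranteed to come back in arrival order, and getting FIFO within a priority level means adding machinery yourself.

        import heapq
        from itertools import count

        # keyed only on priority, a binary heap does NOT promise arrival order within a level;
        # ties fall through to however the payloads happen to compare
        naive = []
        for msg in ["stop", "log", "init"]:                # arrival order
            heapq.heappush(naive, (1, msg))                # all at priority 1
        print([heapq.heappop(naive)[1] for _ in range(3)]) # ['init', 'log', 'stop'] - not arrival order

        # if you DO need FIFO within a priority level, you add it yourself with a
        # monotonically increasing sequence number as a tiebreaker - i.e. extra complexity
        seq = count()
        fifo = []
        for msg in ["stop", "log", "init"]:
            heapq.heappush(fifo, (1, next(seq), msg))
        print([heapq.heappop(fifo)[2] for _ in range(3)])  # ['stop', 'log', 'init'] - arrival order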
  13. OK. But good or bad wasn't the question. I was after the definition of "Accidental Complexity", and what you've just said brings me back to what I said originally. Here I am saying that the underlying complexity of the framework is a necessary evil that has been "accepted and considered" rather than "accidental". What you seem to be confirming, from my interpretation of your suggestion, is that any hidden complexity is "accidental" in the context of the meaning, and therefore a Framework is accidental complexity. Anyway. I've pretty much come to the conclusion that it's just more of a woolly buzz phrase like "Synergy" and "The Cloud". It obviously means different things to different people and I've a sneaking suspicion that its meaning depends on where blame will be apportioned.
  14. Exactly, but according to your definition it would be "accidental complexity". This is why I said in an earlier post that people confuse architecture and frameworks. I personally use an SOA architecture, but within each service I may use QMH or Actors or whatever framework best achieves the goal. Many people choose one framework and fit everything inside it, making it their architecture. And let's be frank (or Brian): most LV frameworks are written and intended to be just that. So LVOOP is "accidental complexity"? (Just teasing.)
      I don't really think it is a thing by these definitions when talking about complexity. Rube Goldberg code exists, but it isn't really "accidental". It is the product of brute-forcing a linear thought process rather than iteratively designing one. Neither case is "accidental". Bolt-on bug fixes that cure the symptom rather than the cause might be argued as "accidental complexity", but that is just bad practice (of which I'm sure we are all guilty at some point). From the feedback it seems more of a weasel phrase for inelegant/inefficient code (except AQ's take on it) in order to not admit it as such. I suspect this phrase is only used when things go wrong on a project and probably has an unquantifiable quality about it.
  15. Nah. Don't buy it. This is a change in requirements and there is no added complexity in the space itself. This is still a change in requirements, and this is definitely an excuse for claiming value-added when no intention to add exists! Just because a user infers a feature that was never offered, it doesn't mean that the code is more complex. It just means the User (or Product Owner) has identified a new requirement. We were talking about code complexity growth, and the term "accidental complexity" implies some sort of hidden cost - unknown, or impossible to know, at design time (from what I can tell). This is why I asked for clarification. I've never heard of it and it just sounds like an excuse. By that definition, wouldn't the framework itself be an "accidental complexity" rather than the "considered and acceptable" complexity of a tried and tested template for design? Maybe I'm just getting too hung up on "accidental" and what it implies.
  16. What is "accidental complexity"? This sounds like an excuse given to management.
  17. It's a dataflow language with some functional and OO features. One of these is not like the others, and you'll notice "state" is never mentioned in the video.
  18. I think I should point out the terminology here. 2FA is a method of authentication (are you who you say you are). OAuth is a method of authorisation (do you have permission). For the latter, authentication is achieved by a third party and log-in credentials are never sent to the service requiring permission; rather, the service requests permission from the third party that has already ascertained your identity - outsourced authentication.
      On the surface it would seem OAuth is what you require, but there is a caveat. Most systems around today are targeted towards gaining permission for an application to access a service. What would happen with OAuth is that you would add your application to the whitelist and the Administrator wouldn't have to press OK for your application at all. In fact, the service would think you are the Administrator. I'm not sure that is what you want either.
      Ignoring security for now... What I think you are asking for is just an entry in a database somewhere with a request, and the Administrator updates the database with permission. So at the point where the Operator wants to proceed, the application puts a request to the server, which searches to see if the request already exists or inserts a new request in the database. The Administrator then sends a request to the server to allow or deny the permission and the server updates the database with the permission flag. The application then sends the request again and the server checks the database entry to see if the request was allowed. I'm obviously glossing over a lot here because, as you will have noticed, it requires the Administrator to know there was a request and the application to know the Administrator responded. But I think this is basically what you are asking for. No?
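      Stripped right down, the database side of that is not much more than this (a rough Python/SQLite sketch; the table and function names are just illustrative):

        import sqlite3

        db = sqlite3.connect("permissions.db")
        db.execute("""CREATE TABLE IF NOT EXISTS requests (
                          operator TEXT, action TEXT,
                          allowed  INTEGER,              -- NULL = pending, 1 = allowed, 0 = denied
                          UNIQUE (operator, action))""")

        def request_permission(operator, action):
            # application side: insert the request if it isn't already there, then report its state
            db.execute("INSERT OR IGNORE INTO requests VALUES (?, ?, NULL)", (operator, action))
            db.commit()
            row = db.execute("SELECT allowed FROM requests WHERE operator=? AND action=?",
                             (operator, action)).fetchone()
            return row[0]                                # None = pending

        def decide(operator, action, allow):
            # Administrator side: flag the pending request as allowed or denied
            db.execute("UPDATE requests SET allowed=? WHERE operator=? AND action=?",
                       (1 if allow else 0, operator, action))
            db.commit()

        print(request_permission("op1", "run_test"))     # None - pending until the Administrator acts
        decide("op1", "run_test", True)
        print(request_permission("op1", "run_test"))     # 1 - the application may proceed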
  19. Even by that definition, learning the API interface may be the cost in a larger Venn diagram, but then the API *is* the functionality, and the code to support that interface is still the cost (code size, memory size, load times, execution times, compile times etc.). The over-arching benefit of an API is simplification of complexity, and complexity always has a cost. If you are lucky, the underlying complexity cost grows linearly as the API grows. If you are unlucky it is exponential. At some point it becomes unusable, either because the underlying code is too complex to warrant the cost (e.g. LabVIEW build times for classes) or because the underlying code is unmanageable/unmaintainable, often with side effects (e.g. the "God" Class). So I still maintain that the API is a benefit (that being reducing interface complexity and also reducing the learning required to achieve a goal) and the underlying code is the cost of that benefit... even from the consumer's point of view. The ancillary benefits of an API are reuse and parallelism, which can alleviate the consumers' project cost but are not guaranteed for any and all APIs and depend on the underlying code, usually by adding complexity (thread safety as an example).
  20. It's not so much "shiny web applications" (I just don't have the artistic flair for such things). It's more to do with having cross-platform, internationalised interfaces - which LabVIEW really sucks at. I don't know about you but even with trivial applications I seem to spend 70% of my time with UI property nodes just getting it to behave properly. I can completely bypass all that by separating the UI from the code, and DB/Websockets does that nicely with the added bonus of UTF8 support in the UI.
  21. Nah. Data to a DB or websockets, then browser interfaces (JavaScript). This is what I do with LabVIEW mostly because, let's face it, the LabVIEW UI is no Gigi Hadid either. Once you go that route, it doesn't matter what language you use on the back end (or which machine it's running on), and if you look at full-time T&M jobs in the UK, they are pretty much all Python with Jenkins experience running on Linux. The UK LabVIEW market has been reduced mainly to turn-key automation, and then usually only where they already have a historic LabVIEW investment. But we diverge...
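      The back-end half of that is tiny in any language. Something like this Python sketch (assuming the third-party "websockets" package; names and port are illustrative) pushes measurement data straight to a browser UI:

        import asyncio, json, random
        import websockets                     # pip install websockets

        async def push_measurements(websocket, path=None):   # 'path' only needed by older package versions
            # the browser opens ws://host:8765 and the JavaScript just renders whatever arrives;
            # the acquisition code only serialises data - no UI property nodes anywhere
            while True:
                reading = {"channel": "ai0", "value": random.uniform(0, 10), "units": "V"}
                await websocket.send(json.dumps(reading))     # UTF-8 on the wire for free
                await asyncio.sleep(0.5)

        async def main():
            async with websockets.serve(push_measurements, "localhost", 8765):
                await asyncio.Future()                        # serve forever

        asyncio.run(main())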
  22. Indeed. The "mount your friends" approach is always problematic and I'm probably more eager than you for native solutions (my aversion to intermediary DLLs for example). I'm fortunate that I'm usually in the position to dictate the database to use and would choose MySQL over SQL Server every time. That will only get worse with NXG. IMO, this is yet another reason to choose Python over LabVIEW for Test and Measurement.
  23. NTLM is OK and I expect I could get the SSL/TLS with a little effort (at worst, using memory BIOs and hand-cranking it through). The rest, though.... Thanks for the info, very interesting. It strikes me as much more than a few weeks' concerted effort just to get a base-line native LabVIEW implementation (not a learning exercise/fun project). I doubt anyone would be prepared to pay for it when there are other off-the-shelf solutions for SQL (MySQL, MariaDB) that are already available in LabVIEW - it looks far more hassle than it's worth just to get something on RT. What are your thoughts on Linux ODBC?
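      For reference, the memory-BIO "hand-cranking" pattern looks roughly like this (a Python sketch using ssl.MemoryBIO; a LabVIEW version would make the equivalent OpenSSL calls through Call Library Function Nodes, and the host name is just a placeholder):

        import socket, ssl

        HOST = "example.com"
        ctx = ssl.create_default_context()
        incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
        tls = ctx.wrap_bio(incoming, outgoing, server_hostname=HOST)
        sock = socket.create_connection((HOST, 443))

        def flush():
            # push whatever TLS records the engine has produced out over the plain socket
            data = outgoing.read()
            if data:
                sock.sendall(data)

        # hand-crank the handshake: run the TLS state machine and shuttle the bytes ourselves
        while True:
            try:
                tls.do_handshake()
                break
            except ssl.SSLWantReadError:
                flush()
                incoming.write(sock.recv(4096))
        flush()

        # application data goes through the same pump
        tls.write(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        flush()
        while True:
            try:
                print(tls.read(4096)[:80])   # first bytes of the decrypted response
                break
            except ssl.SSLWantReadError:
                incoming.write(sock.recv(4096))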