Everything posted by ShaunR

  1. Perhaps in the AF case. However, it's also fairly common for the underlying implementation to be a stack (LIFO). That is effectively what the DQMH implementation is. Like "accidental"? You underestimate the power of a marketing department, my friend.
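A minimal sketch (plain Python, not actual DQMH code) of what a stack-like priority mechanism does: "priority" messages pushed onto the front of an ordinary queue come back out in reverse submission order, i.e. LIFO among themselves.

```python
from collections import deque

# Hypothetical message queue: normal messages append to the back,
# priority messages push to the front. Names are illustrative only.
q = deque()

def enqueue(q, msg, priority=False):
    if priority:
        q.appendleft(msg)   # front of queue -> LIFO among priority msgs
    else:
        q.append(msg)       # back of queue -> FIFO among normal msgs

enqueue(q, "normal-1")
enqueue(q, "P1", priority=True)
enqueue(q, "P2", priority=True)
enqueue(q, "P3", priority=True)

# Priority messages dequeue before the normal one, but in *reverse*
# submission order - the stack (LIFO) effect described above.
print(list(q))  # ['P3', 'P2', 'P1', 'normal-1']
```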
  2. Huh? I never said anything about an elderly person. Are you in the right thread?
  3. Well. My supermarket obviously isn't as classy as yours. Flashing a Platinum Amex and then moving to the front would probably get you a bop on the nose.
  4. This I can get behind! 2 & 3 also map onto whitebox and blackbox testing. There is a test for 1 that I have seen (can't remember offhand what it was called), but it was mainly for C/C++ and counted things like the number of if/else branches or entries in Case structures to arrive at a figure for "complexity".
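The metric being half-remembered sounds like McCabe's cyclomatic complexity (an assumption on my part): one plus the number of decision points. A toy version, sketched here in Python with the standard `ast` module, counts branches much like counting if/else and Case-structure entries in C/C++ or LabVIEW:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Toy McCabe-style metric: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Each branch or loop adds one independent path through the code
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1  # each extra and/or adds a path
    return 1 + decisions

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: one base path + if + elif
```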
  5. Yes. I'm saying the "concept" of priority queues doesn't.
  6. As far as I'm aware, there is no guarantee (or expectation) that priority queues enforce ordering, only that higher priority messages will be executed before lower priority messages. I'm not familiar with the internal workings of the AF, but if what you say is true (that order, at the same level, is guaranteed) then more of what you term "complexity" happens when that isn't required. An emergency stop springs to mind, where you may not want the previous buffered messages to be executed, just the E-Stop. With the AF (based on your description) the user has to categorise different messages to different levels, and I would suspect you would also argue that is a "complexity". I wouldn't, however. Neither would I for the DQMH. I take your point about debugging being difficult to diagnose for the DQMH under certain conditions, but it is a limitation of the design, and probably adequate for most scenarios if you don't make the guaranteed-order assumption. The AF also carries practical debugging complexity, due to its code paths, which doesn't exist in the DQMH. So it's all six of one and half-a-dozen of the other to me. I expect the reverse order is more surprising to most people, but it probably compiles and executes significantly faster than the AF one (just a hunch), so the limitations may have been a compromise for that. If you need a priority queue that guarantees order then that feature in the DQMH is not for you, but I go back to my original statement that "there is no guarantee (or expectation) that priority queues enforce ordering, only that higher priority messages will be executed before lower priority messages" (this is a discussion that crops up in task schedulers too, by the way).
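The distinction is easy to demonstrate. A plain binary heap only guarantees that higher-priority entries pop first; entries at the *same* level can come out in any order. Restoring FIFO within a level (the "guaranteed order" feature under discussion) takes an extra mechanism, sketched here in Python with a monotonic sequence number as a tie-breaker:

```python
import heapq
from itertools import count

seq = count()
heap = []

def push(heap, priority, msg):
    # Lower number = higher priority. The sequence number breaks ties,
    # giving FIFO order within a priority level; without it, same-level
    # ordering would depend on heap internals and is not guaranteed.
    heapq.heappush(heap, (priority, next(seq), msg))

for msg in ("A", "B", "C"):
    push(heap, 1, msg)        # three messages at the same level
push(heap, 0, "E-STOP")       # higher priority jumps the lot

order = [heapq.heappop(heap)[2] for _ in range(len(heap))]
print(order)  # ['E-STOP', 'A', 'B', 'C']
```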
  7. OK. But good or bad wasn't the question. I was after the definition of "Accidental Complexity", and what you've just said brings me back to what I said originally. Here I am saying that the underlying complexity of the framework is a necessary evil that has been "accepted and considered" rather than "accidental". What you seem to be confirming, from my interpretation of your suggestion, is that any hidden complexity is "accidental" in the context of the meaning, and therefore a framework is accidental complexity. Anyway. I've pretty much come to the conclusion that it's just more of a woolly buzz phrase like "Synergy" and "The Cloud". It obviously means different things to different people, and I have a sneaking suspicion that its meaning depends on where blame will be apportioned.
  8. Exactly, but according to your definition it would be "accidental complexity". This is why I said in an earlier post that people confuse architectures and frameworks. I personally use an SOA architecture, but within each service I may use QMH or Actors or whatever framework best achieves the goal. Many people choose one framework and fit everything inside it, making it their architecture. And let's be frank (or Brian): most LV frameworks are written and intended to be just that. So LVOOP is "accidental complexity"? (Just teasing.) I don't really think it is a thing by these definitions when talking about complexity. Rube Goldberg code exists, but it isn't really "accidental". It is the product of brute-forcing a linear thought process rather than iteratively designing one. Neither case is "accidental". Bolt-on bug fixes that cure the symptom rather than the cause might be argued to be "accidental complexity", but that is just bad practice (of which I'm sure we are all guilty at some point). From the feedback, it seems more of a weasel phrase for inelegant/inefficient code (except AQ's take on it), used in order to not admit it as such. I suspect this phrase is only used when things go wrong on a project and probably has an unquantifiable quality about it.
  9. Nah. Don't buy it. This is a change in requirements and there is no added complexity of the space itself. This is still a change in requirements, and this is definitely an excuse for claiming value-added when no intention to add exists! Just because a user infers a feature that was never offered, it doesn't mean that the code is more complex. It just means the User (or Product Owner) has identified a new requirement. We were talking about code complexity growth, and the term "accidental complexity" implies some sort of hidden cost - unknown, or impossible to know, at design time (from what I can tell). This is why I asked for clarification. I've never heard of it and it just sounds like an excuse. By that definition, wouldn't the framework itself be "accidental complexity" rather than the "considered and acceptable" complexity of a tried and tested template for design? Maybe I'm just getting too hung up on "accidental" and what it implies.
  10. What is "accidental complexity"? This sounds like an excuse given to management.
  11. It's a dataflow language with some functional and OO features. One of these is not like the others, and you'll notice "state" is never mentioned in the video.
  12. I think I should point out the terminology here. 2FA is a method of authentication (are you who you say you are). OAuth is a method of authorisation (do you have permission). For the latter, authentication is achieved by a third party and log-in credentials are never sent to the service requiring permission; rather, the service requests permission from the third party that has already ascertained your identity - outsourced authentication. On the surface it would seem OAuth is what you require, but there is a caveat. Most systems around today are targeted towards gaining permission for an application to access a service. What would happen with OAuth is that you would add your application to the whitelist and the Administrator wouldn't have to press OK for your application at all. In fact, the service would think you are the Administrator. I'm not sure that is what you want either. Ignoring security for now... What I think you are asking for is just an entry in a database somewhere with a request, where the Administrator updates the database with permission. So at the point where the Operator wants to proceed, the application puts a request to the server, which either finds that the request already exists or inserts a new request in the database. The Administrator then sends a request to the server to allow or deny the permission, and the server updates the database with the permission flag. The application then sends the request again, and the server checks the database entry to see if the request was allowed. I'm obviously glossing over a lot here, because you will have noticed that it requires the Administrator to know there was a request and the application to know the Administrator responded. But I think this is basically what you are asking for. No?
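The request/approve flow described above can be sketched with an in-memory SQLite table. Everything here (table name, columns, function names) is made up for illustration; the point is just the three steps: app lodges a request, admin records a decision, app re-checks.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE requests (
    id       INTEGER PRIMARY KEY,
    operator TEXT,
    action   TEXT,
    allowed  INTEGER)""")  # allowed: NULL = pending, 1 = allow, 0 = deny

def request_permission(db, operator, action):
    """Operator's app: insert the request if it doesn't exist yet,
    then return the current decision (None while still pending)."""
    row = db.execute(
        "SELECT id, allowed FROM requests WHERE operator=? AND action=?",
        (operator, action)).fetchone()
    if row is None:
        db.execute("INSERT INTO requests (operator, action) VALUES (?, ?)",
                   (operator, action))
        return None
    return row[1]

def decide(db, operator, action, allow):
    """Administrator's side: record the decision."""
    db.execute("UPDATE requests SET allowed=? WHERE operator=? AND action=?",
               (int(allow), operator, action))

assert request_permission(db, "op1", "proceed") is None   # request lodged
decide(db, "op1", "proceed", True)                        # admin approves
assert request_permission(db, "op1", "proceed") == 1      # app re-checks
```

The glossed-over part (the Administrator finding out a request is pending, the app finding out it was answered) would be polling or a notification on top of this.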
  13. Even by that definition, learning the API interface may be the cost in a larger Venn diagram, but then the API *is* the functionality, and the code to support that interface is still the cost (code size, memory size, load times, execution times, compile times etc.). The over-arching benefit of an API is simplification of complexity, and complexity always has a cost. If you are lucky, the underlying complexity cost grows linearly as the API grows. If you are unlucky, it is exponential. At some point it becomes unusable, either because the underlying code is too complex to warrant the cost (e.g. LabVIEW build times for classes) or the underlying code is unmanageable/unmaintainable, often with side effects (e.g. the "God" class). So I still maintain that the API is a benefit (that being reducing interface complexity and also reducing the learning required to achieve a goal) and the underlying code is the cost of that benefit ... even from the consumer's point of view. The ancillary benefits of an API are reuse and parallelism, which can alleviate the consumer's project cost but are not guaranteed for any and all APIs and are dependent on the underlying code, usually by adding complexity (thread safety, as an example).
  14. It's not so much "shiny web applications" (I just don't have the artistic flair for such things). It's more to do with having cross-platform, internationalised interfaces - which LabVIEW really sucks at. I don't know about you but even with trivial applications I seem to spend 70% of my time with UI property nodes just getting it to behave properly. I can completely bypass all that by separating the UI from the code, and DB/Websockets does that nicely with the added bonus of UTF8 support in the UI.
  15. Nah. Data to DB or websockets, then browser interfaces (JavaScript). This is what I do with LabVIEW mostly because, let's face it, the LabVIEW UI is no Gigi Hadid either. Once you go that route, it doesn't matter what language you use on the back-end (or which machine it's running on), and if you look at full-time T&M jobs in the UK, they are pretty much all Python with Jenkins experience, running on Linux. The UK LabVIEW market has been reduced mainly to turn-key automation, and then usually only where they already have a historic LabVIEW investment. But we diverge...
  16. Indeed. The "mount your friends" approach is always problematic and I'm probably more eager than you for native solutions (my aversion to intermediary DLLs for example). I'm fortunate that I'm usually in the position to dictate the database to use and would choose MySQL over SQL Server every time. That will only get worse with NXG. IMO, this is yet another reason to choose Python over LabVIEW for Test and Measurement.
  17. NTLM is OK, and I expect I could get the SSL/TLS with a little effort (at worst, using memory BIOs and hand-cranking it through). The rest, though... Thanks for the info, very interesting. It strikes me as much more than a few weeks' concerted effort just to get a base-line native LabVIEW implementation (not a learning exercise/fun project). I doubt anyone would be prepared to pay for it when there are other off-the-shelf SQL solutions (MySQL, MariaDB) already available in LabVIEW - it looks far more hassle than it's worth just to get something on RT. What are your thoughts on Linux ODBC?
  18. You can use TCP for MySQL, but for SQL Server I don't know of any non-ODBC LabVIEW solution, since the backend protocol is proprietary. What you can do is install the SQL Server command-line tools for Linux and then use the LabVIEW System Exec VI to execute "sqlcmd".
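For comparison, here is the same shell-out approach sketched in Python rather than via the System Exec VI. The server name, credentials and query are placeholders, and `sqlcmd` must actually be installed (e.g. the mssql-tools package on Linux), so the call is guarded:

```python
import shutil
import subprocess

# Placeholder server, credentials and query - substitute your own.
cmd = ["sqlcmd",
       "-S", "myserver.example.com",
       "-U", "myuser",
       "-P", "mypassword",
       "-Q", "SELECT name FROM sys.databases",
       "-W", "-h", "-1"]   # -W trims whitespace, -h -1 suppresses headers

if shutil.which("sqlcmd"):  # only run if the command-line tools exist
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
```

Parsing the text output back into rows is then the same string-handling job whichever language does the exec.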
  19. Indeed. But I was asking who the maintainer is. Who is it that vets the code, makes the OpenG package for VIPM, then uploads it to the Tools Network? How does the VIPM OpenG package get up-issued with Rolf's (or anyone else's) code? It used to be JGCode, but he went a long time ago.
  20. Who would that be? The maintainer used to be JGCode but when Rolf wanted to update his Zip library, he was pretty much left to do it himself and make a spur. Who is the maintainer of the distribution and where are contributors expected to go with updates?
  21. I looked at it briefly a while ago and came to the conclusion that it is really a stateful HTTP protocol framework. The underlying cryptography is very simple (in 2.0), but there are a lot of HTTP states that are different for each method (6 methods in total with varying privileges, IIRC). So you have to identify which method is being used and then have the application go through the appropriate HTTP process with various redirects. This means that a complete LabVIEW library could be quite unwieldy and confusing compared with just using basic HTTP GET/POST in an application to achieve the one instance you are interested in - especially as you may have to use another third-party JSON library for responses, as the NI one is useless. It's not difficult to create the HTTP messages; it's just that the process logic is cumbersome. You can get away with a couple of HTTP POSTs and string stripping if you want quick and dirty, but for proper and secure operation you need the full stateful flow.
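The "quick and dirty couple of POSTs" case can be sketched for the simplest OAuth 2.0 method, the client-credentials grant (RFC 6749): one form-encoded POST to the token endpoint, then parse the JSON response. The URL and credentials below are placeholders; the other grant types need the full redirect-and-state machinery described above.

```python
import json
import urllib.parse
import urllib.request

def build_token_body(client_id, client_secret):
    """Form-encoded body for an OAuth 2.0 client-credentials token request."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()

def get_token(token_url, client_id, client_secret):
    # Single POST to the token endpoint; the access token comes back
    # in a JSON body. No redirects or session state for this grant type.
    req = urllib.request.Request(
        token_url,
        data=build_token_body(client_id, client_secret),
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urllib.request.urlopen(req) as resp:   # network call
        return json.load(resp)["access_token"]

# Placeholder endpoint - substitute a real provider's token URL:
# token = get_token("https://auth.example.com/oauth/token", "my-id", "my-secret")
```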
  22. To add to this: this document states that the maximum throughput of the bus is dependent on the number of links, each being 250 MB/s. So a 16x slot should be capable of about 4 GB/s. However, it goes on to state that this is not the limiting factor for most NI products.
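The arithmetic, worked through for the lane counts in common use (250 MB/s per link is the figure quoted above):

```python
# Throughput scales with lane (link) count at 250 MB/s per lane.
MB_PER_LANE = 250

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {lanes * MB_PER_LANE / 1000:.1f} GB/s")
# x16 gives 4.0 GB/s, matching the ~4 GB/s figure quoted above
```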
  23. IMO. For the API he has the cost and benefits reversed. The interface is the benefit. The hidden functionality to achieve the interface is the cost.