
All Activity


  1. Today
  2. Well, things have morphed a little and I may have been using some terms a bit freely. But for now I've been able to replicate the steps Google Authenticator uses to produce one-time passwords with the HMAC-based One-Time Password (HOTP) algorithm, with the help of this article as a starting point. There's still plenty to work through, but I was unable to find this available for LabVIEW anywhere. Perhaps I'm mistaken and there are more fully developed libraries out there - I would love to see those! The workflow is basically that an operator needs to get approval from an administrator to proceed with the sequence. Well, needs to be forced to get approval. A TFA implementation along the lines of what Shaun mentioned, which could trigger the authentication process by an HTTP message to an existing service, might be a way to use an authenticator app without a code. Obviously I'm no web developer. E.g. attempt to log onto a dummy Google account with TFA set up and refuse to proceed until the service says the authentication was successful. This would depend on the pre-configured app on the supervisor's mobile device acknowledging the log-in, after which LV would log back out. Perhaps the wrong hammer for this screw?
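     For reference, here's a minimal Python sketch of the RFC 4226 HOTP computation (HMAC-SHA1 over the counter, then dynamic truncation) - handy for cross-checking a LabVIEW implementation against the RFC's published test vectors. This is my own illustration, not code from the article above:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 of the 8-byte big-endian counter,
    dynamically truncated to a short decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble of last byte
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First test vector from RFC 4226 Appendix D: counter 0 -> "755224"
assert hotp(b"12345678901234567890", 0) == "755224"
```

     Google Authenticator's TOTP variant is the same computation with the counter derived from the current time (floor of Unix time / 30).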
  3. Yesterday
  4. Just watched this presentation by Richard Feldman called "Why Isn't Functional Programming the Norm?". As I was watching it, many ideas came to mind about how LabVIEW stacks up in the various areas of the presentation, and I wanted to hear what the community thinks. We can all agree that LabVIEW is NOT a popular language (as defined in the video) and it probably will not end up in any presentation like the one in this video (I'd like that to change, though). However, I think the discussion about FP vs OO is currently taking place in the community. I know people who do not use OO in LabVIEW and many who swear by it, so I think this is a fitting discussion. The core question of the presentation, as put by Richard, is "Do OO features make a language popular?" His argument is NO. I don't think OO by itself will make LabVIEW popular, but where does LabVIEW end up on the reasons for popularity as presented? Or better yet, what can make LabVIEW more popular? And is that something anyone should care about?
  5. I think that we are saying pretty similar things. The only reason you would ever choose to consume a package is that the API hides complexity in a way that simplifies your application. If the cost of consuming the API is high then you will probably decide not to consume it. If the cost of the implementation behind the API is high you definitely will decide not to consume it (because what is the point?). That said, I suppose I should update the slides to explicitly show that cost/benefit is a sliding scale: something with low cost probably has high benefit, something with high cost probably has low benefit, and that is true for both the API and the functionality behind it.
  6. So Stream is a mediator-ish message/data bus with transport abstraction: two parts of the code can publish/subscribe to data/messages by type and name without being concerned with how the data/messages move (see the sketch below). MVA takes what Stream does and uses it to extend the Actor Framework, while also building in extension points for Views, View Management, Models, and a ViewModel (so MVVM as an extension of the AF, with a built-in message/data bus to help decouple Views from Models). Over the past two and a half years we have only used MVA for project code, so it is under active development/maintenance, while Stream is not and has not been used in project code. I am working on ramping back up with the blogging - I've been too busy with business lately, unfortunately - and MVA will be featured more prominently in the new blog entries.
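     (Not Stream/MVA code - just a hypothetical Python sketch of the mediator idea: subscribers register by payload type and name, publishers stay ignorant of who is listening and how delivery happens.)

```python
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Toy mediator: routes published payloads to handlers registered
    by (payload type, name); the transport is hidden from both sides."""

    def __init__(self) -> None:
        self._subs: dict[tuple[type, str], list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, payload_type: type, name: str,
                  handler: Callable[[Any], None]) -> None:
        self._subs[(payload_type, name)].append(handler)

    def publish(self, name: str, payload: Any) -> None:
        for handler in self._subs[(type(payload), name)]:
            handler(payload)

bus = MessageBus()
bus.subscribe(float, "temperature", lambda v: print(f"view sees {v:.1f}"))
bus.publish("temperature", 23.5)   # the publisher never references the view
```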
  7. Even by that definition, learning the API interface may be the cost in a larger Venn diagram, but then the API *is* the functionality, and the code to support that interface is still the cost (code size, memory size, load times, execution times, compile times, etc.). The over-arching benefit of an API is simplification of complexity, and complexity always has a cost. If you are lucky, the underlying complexity cost grows linearly as the API grows. If you are unlucky it is exponential. At some point it becomes unusable, either because the underlying code is too complex to warrant the cost (e.g. LabVIEW build times for classes) or because the underlying code is unmanageable/unmaintainable, often with side effects (e.g. the "God" class). So I still maintain that the API is a benefit (that being reducing interface complexity and also reducing the learning required to achieve a goal) and the underlying code is the cost of that benefit ... even from the consumer's point of view. The ancillary benefits of an API are reuse and parallelism, which can alleviate the consumer's project cost but are not guaranteed for any and all APIs and are dependent on the underlying code, usually by adding complexity (thread safety, as an example).
  8. How does MVA relate to your STREAM framework that you've blogged about?
  9. Most of our company (if they're using any standard at all for LabVIEW) uses templates quite similar to DQMH, started over a decade ago and refined/upgraded through the years. Of course, unlike the DQMH we don't have the level of documentation, scripting, and unit testing that the DQMH comes with... I did write a few examples though! 😏
  10. So what I was going for was that the API has a cost associated with learning and using it as a consumer of the component, while the functionality encapsulated within the component is the benefit you get when you pay the cost to consume the API. As a consumer of the component you don't pay a cost for the functionality hidden behind the API. I suppose that if you find yourself paying a cost for the functionality hidden behind the component, you would probably stop using the component because it isn't adding any value.
  11. As Michael mentioned above we (Composed Systems) primarily use the MVA Framework (which is an extension of the Actor Framework). One note is that MVA is our separation of concerns focused framework that we use for decoupling of UI and Business Logic and the messaging between them, but we also use other frameworks that are not messaging frameworks for other aspects of development. After a few years of building up our tool chain we have MVA for messaging, a sequencing framework (Test Executive), and an event logging framework. We use each of these frameworks in each of our projects and extend each for application specific needs. I had lunch with a customer last week that we have been working with for coming up on two years, and he was saying that from his point of view he doesn't program in LabVIEW anymore, he just uses our tools (frameworks). I think this is great, I'd be curious if other people agree or are horrified by it.
  12. It's not so much "shiny web applications" (I just don't have the artistic flair for such things). It's more to do with having cross-platform, internationalised interfaces - which LabVIEW really sucks at. I don't know about you, but even with trivial applications I seem to spend 70% of my time with UI property nodes just getting the UI to behave properly. I can completely bypass all that by separating the UI from the code, and DB/websockets does that nicely (a rough sketch of the idea below), with the added bonus of UTF-8 support in the UI.
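     As an illustration of that separation (a Python back-end here standing in for LabVIEW, using the third-party websockets package; the one-argument handler signature assumes websockets 10+):

```python
# pip install websockets
import asyncio
import json
import random

import websockets

async def feed(websocket):
    # Push a JSON "measurement" once a second; any browser client
    # (new WebSocket("ws://localhost:8765") in JavaScript) renders it,
    # so fonts, layout, and UTF-8 are the browser's problem, not the back-end's.
    while True:
        await websocket.send(json.dumps({"temperature": 20 + random.random()}))
        await asyncio.sleep(1.0)

async def main():
    async with websockets.serve(feed, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```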
  13. I don't actually need the large memory support at this point, but I feel it is getting close to calling time on 32-bit applications. I have been dabbling with LV2019 64-bit and so far it does what I need, although admittedly I have not needed to interface with any hardware yet apart from a GigE camera.
  14. I've been looking at the GCentral site and visited the Package Index page. While I find it a good initiative, I see here the same problem that makes me loathe browsing the NI site for products. I'm mostly interested in the list of packages, yet half of the screen is used up by the GCentral logo and lots and lots of whitespace. I may be a dinosaur in terms of modern computer technology and not understand the finesse of modern web user interface design, but a site like that simply does not make me want to use it! Maybe this design will be beneficial to me 10 years from now, when my eyesight has deteriorated so much that I won't see small print anymore - but wait, the text in the actual list is still pretty small, so that won't help at all. It's also not because of the much-acclaimed fluent design: the size of the actual screen stays the same no matter how I resize the browser window. This kind of web interface makes me wonder where we are all heading. Design above functionality seems to be the driving force everywhere.
  15. While I can understand Jim's concerns, I also think that the current state of OpenG is pretty much an eternal stasis, otherwise known as death. Considering that, any activity to revive the community effort - under the umbrella of OpenG, GCentral, or any other name you want - is definitely welcome. And while I'm willing to work on such activities, organizing them has never been my strong point. I don't like politics, which is an integral part of organizing something like this. There are other problems with initiatives like this: people usually need a job that pays the bills. They also have a life besides computers. And they frequently move on or lose motivation to work on such an initiative. One reason is that there is so much work to do, and while quite a few people want to use it, very few want to contribute to it. Those who do want to contribute often prefer to do it in their own way rather than help in an existing project. It's all unfortunate but very human.
  16. Our own LCOD-based state machine and an unnamed but packaged queue. That covers 99% of our needs; what others call actors we tend to call services, and they are only employed when background stuff is needed. Each service will be based on the template above.
  17. For what I and our company do, it seems to be more than adequate, but then we don't focus on shiny web applications sporting the latest craze that changes every other year. We build test and manufacturing systems where the UI is just a means to control a complex system, not an end in itself. In fact, shiny, flashy user interfaces rather distract the operator from what he is meant to do, so ours are usually very sober and simple. For this, the LabVIEW widgets are mostly more than enough, and the way of creating a graphical user interface that simply works is still mostly unmatched in any other programming environment that I know.
  18. The problem is that NI seems to be getting out of a lot of hardware in recent years. Most Vision hardware has been discontinued or at least abandoned (no new products released, and technical support on the still-sold products is definitely sub-par compared to what it used to be). NI Motion is completely discontinued (which is a smaller loss, as it had its problems and NI was never fully committed to competing against companies like MKS/Newport and similar in that area). NI DAQ doesn't have the focus it used to have. NI has clearly set its targets on other areas and has, in some respects, already moved on. That may be good news for their stockholders, but not such great news for their existing user base.
  19. Do you really need/want to support 32-bit? I mean... if it's for vision, you probably use a large amount of memory, no? I wish I could offer my collaboration, but calling external code is not really my cup of tea, so I'd probably be mostly useless anyway. If you are interested in getting in touch with TensorFlow users to have their feedback, get in touch with me via PM.
  20. Last week
  21. Not out of the box. 32-bit LabVIEW interfaces to 32-bit Python. So you would need some 32-bit Python to 64-bit TensorFlow remoting interface (a sketch of the idea below). If 64-bit LabVIEW is a requirement, the lowest possible version is 2009 on Windows and around 2014 on the Mac and Linux platforms.
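     A minimal sketch of such a remoting interface, using only the Python standard library (the predict function is a hypothetical stand-in for whatever TensorFlow call you need; in reality the server would run as a separate 64-bit Python process):

```python
from threading import Thread
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def predict(values):
    # import tensorflow as tf   # the 64-bit-only import would live here
    return [v * 2.0 for v in values]   # stand-in for a real model call

# 64-bit side: expose the library over local RPC.
server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True, logRequests=False)
server.register_function(predict, "predict")
Thread(target=server.serve_forever, daemon=True).start()

# 32-bit side (same script here only for demonstration): a plain RPC call,
# which 32-bit Python - and hence 32-bit LabVIEW - can make without caring
# about the server's bitness.
print(ServerProxy("http://localhost:8000").predict([1.0, 2.0]))
```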
  22. My advice is to look into CXP. I never got round to trying the BitFlow hardware + driver, which they claim is fully compatible with LabVIEW, but CXP seems to be the future of industrial high-res / high-speed vision. So if you're looking to build a vision system for the next 10 years, I think you should test out BitFlow's CXP option. I'm a bit disappointed with NI's wait-and-see approach to CXP; last I asked, the answer was "we don't think the ROI is big enough". Which is a fair point, but it leaves NI Vision with very limited options. CXP opens a lot of options in terms of cameras. Good luck to you, and - again - if you do test a BitFlow CXP frame grabber in LabVIEW I'd love to hear how it works.
  23. @Rolf Kalbermatter my brain was clearly running badly when I typed that we needed LV2018; what I actually meant to say was that 64-bit LV was required. 🤐 I say this as it is my understanding that the TensorFlow DLL is strictly 64-bit. Do you suppose it would be possible to use 32-bit LV if going via the Python route?
  24. I played around with the continuous and finite examples using a 9205 (mine is in a 9178 chassis, but that shouldn't matter), and analog start triggers configured on the same AI as the task's first channel seem to work for RSE continuous measurements, but not differential measurements. They also seem to work for Differential finite measurement, but trigger off the channel configured, not the differential pair. Continuous differential measurements exhibit the same behavior in NI's examples as in your code. A continuous measurement doesn't make much sense for your application, as you are only measuring once after the trigger. Once you do get triggering working, you may be able to use DAQmx connect terminals to route the AnalogComparisonEvent (you can find this if you enable advanced terminals in the I/O Name Filtering) to another card, which could then be used to drive circuits without any software timing involvement. If you change to the 9178 or 9179 chassis, one of these terminals could be the BNC connections on the chassis itself.
  25. If you want to take the Python route then of course. As far as the Call Library Node is concerned there is virtually no difference since at least LabVIEW 2009, and even before that the only real difference from 8.0 onwards to 2009 is the automatic support for 32-bit and 64-bit DLL interfacing, at least where pointers are passed directly as parameters. Once you deal with pointers inside structures you have to either create a wrapper DLL anyhow or deal with conditional code compilation on the LabVIEW diagram for the different bitnesses.
  26. I have a requirement that I thought would be SIMPLE, but I can't get it to work. I have a 9205 card in a little 9174 cDAQ USB chassis. My *intended* behavior is to wait (block) at the DAQmx Trigger/Start Analog Edge on, say, channel ai1 until I get a falling edge through, say, -0.050V. So I have a little vi (containing 2 parallel loops) that I want to sit and wait for the trigger to be satisfied. I'm doing "routine" voltage measurements in another AI loop on a different channel. I want this vi to run separately from my "routine" voltage measurements because I want the app to respond "instantly" to input voltage exceeding a limit, to prevent expensive damage to load cells. I was afraid that if I used either Finite or Continuous sampling to "catch" an excessive voltage, I might miss it while I'm doing something else. Yes, yes, a cRIO real-time setup would be better for this, but this is a very cost-sensitive task... I just want to "Arm & Forget" this process until it gets triggered, whereupon it fires an event at me.

      SO... I'm also reading the same voltage on channel ai0 for regular-ole voltage measurements, and just jumpering the two channels together. I did this because I read somewhere that you can't use the same channel for multiple DAQ tasks - I *thought* I would need to set up the tasks differently {but now that I think about it, the setups can be the same...}. I've set up the DAQmx task the same as the shipping examples and lots of posts I've seen. I'm supplying a nice clean DC voltage to the 9205 using a high-quality HP variable power supply, and using NI MAX I've verified that my 9174 chassis and 9205 are working properly.

      THE PROBLEM - When I run it, the vi just sails right through to the end, with no error and an empty data array out. No matter WHAT crazy voltage I give the "DAQmx Trigger.vi" (set up for Start Analog Edge), it never waits for the trigger to be satisfied, just breezes on through as if it weren't there. If I set the Sample Clock for "Finite Samples", the DAQmx Read fails with a timeout - which makes sense, since the trigger wasn't satisfied. What could I possibly be doing wrong with such a simple task?

      So my fundamental misunderstanding still vexes me - does the DAQmx Trigger vi not block and wait for the trigger condition to be satisfied, like the instructions state: "Configures the task to start acquiring or generating samples when an analog signal crosses the level you specify"? I stripped my requirement down to the bare essentials - see the 1st snippet; the 2nd is my actual vi. Any ideas, anybody?
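     If it helps untangle the blocking question, here is how the same configuration looks in NI's nidaqmx Python package (the device/channel names are assumptions): configuring the analog-edge start trigger arms the task but returns immediately; it is the read call that blocks until the trigger fires and the post-trigger samples arrive, or the timeout expires. The same split applies to the DAQmx Trigger and DAQmx Read VIs.

```python
# pip install nidaqmx   (requires the NI-DAQmx driver)
import nidaqmx
from nidaqmx.constants import AcquisitionType, Slope, TerminalConfiguration

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan(
        "cDAQ1Mod1/ai1",                       # assumed device/module name
        terminal_config=TerminalConfiguration.RSE,
        min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(
        1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=100)
    task.triggers.start_trigger.cfg_anlg_edge_start_trig(
        "cDAQ1Mod1/ai1", trigger_slope=Slope.FALLING, trigger_level=-0.050)
    task.start()   # arms the task; returns immediately, no blocking here
    # THIS is the call that waits for the falling edge through -0.050 V:
    data = task.read(number_of_samples_per_channel=100, timeout=60.0)
```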
  27. Nah. Data to a DB or websockets, then browser interfaces (JavaScript). This is what I do with LabVIEW mostly because, let's face it, the LabVIEW UI is no Gigi Hadid either. Once you go that route, it doesn't matter what language you use on the back-end (or which machine it's running on), and if you look at full-time T&M jobs in the UK, they are pretty much all Python with Jenkins experience running on Linux. The UK LabVIEW market has been reduced mainly to turn-key automation, and then usually only where they already have a historic LabVIEW investment. But we diverge...
  28. Thanks JKSH, I echo your sentiment. Versioning is always going to be a problem, but TensorFlow recently hit 2.0, so that was what I was planning on supporting as the minimum version. Going down the Python route is also interesting but a bit fraught. Probably due to my inexperience with Python and its toolchain, I spent the better part of a month just trying to follow myriad tutorials online to get Python and TensorFlow working properly on my PC, with very little to show for it. I do think it would be reasonable to say that a toolkit such as this requires LV 2018 or greater.
