Everything posted by ShaunR

  1. @rolfk Indeed. I've managed to avoid most of the shenanigans and the various edge cases, since most of NTLMv2 is taking what the server gives you and signing it. In that respect I only need to convert the LabVIEW strings that the user enters via the FP (computer name, user name and password), rather than having to deal with what the OS gives me via calls. I haven't tried yet, but it looks like btowc will do what I need, unless you can see a bear trap in there somewhere. As for Mac support: well, no-one has bought a Mac licence, so I only care for completeness and will drop it if it looks like too much hassle. The toolkit used to support Mac, but I quietly removed it because of the inability to apply the NI licence. So it still works, but I don't offer it. Linux, however, looks straightforward and is a must for embedded platforms. I think I might just need to update my ASCIIToUTF16 conversion primitive with a conditional disable to use btowc. Can you confirm that is a good way forward?
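For what it's worth, a minimal C sketch of the btowc approach discussed above, assuming single-byte input in the default locale whose code points fall in the BMP (true for ASCII); the function name and error handling are illustrative, not from the toolkit:

```c
#include <stdlib.h>
#include <wchar.h>

/* Convert a single-byte string to UTF-16LE bytes using btowc().
 * Returns a malloc'd buffer of 2*len bytes, or NULL if a byte has
 * no wide-character mapping. Caller frees. */
unsigned char *ascii_to_utf16le(const char *in, size_t len, size_t *out_len)
{
    unsigned char *out = malloc(len * 2);
    if (!out) return NULL;
    for (size_t i = 0; i < len; i++) {
        wint_t wc = btowc((unsigned char)in[i]); /* byte -> wide char */
        if (wc == WEOF || wc > 0xFFFF) { free(out); return NULL; }
        out[2*i]     = (unsigned char)(wc & 0xFF); /* little-endian */
        out[2*i + 1] = (unsigned char)(wc >> 8);
    }
    *out_len = len * 2;
    return out;
}
```

Since NTLMv2 hashes are computed over UTF-16LE without a BOM, emitting the low byte first matches what the protocol expects.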
  2. You are looking for the tangent to an inflection point (where the second-order derivative d²x/dt² = 0).
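On sampled data, that condition can be found with a discrete second difference; a minimal sketch (the function name and the zero/sign-change test are illustrative):

```c
#include <stddef.h>

/* Find the first index where the discrete second difference
 * y[i-1] - 2*y[i] + y[i+1] is zero or changes sign (an
 * approximation of d2y/dt2 = 0). Returns -1 if none found. */
ptrdiff_t find_inflection(const double *y, size_t n)
{
    double prev = 0.0;
    int have_prev = 0;
    for (size_t i = 1; i + 1 < n; i++) {
        double d2 = y[i-1] - 2.0*y[i] + y[i+1]; /* ~ d2y/dt2 */
        if (d2 == 0.0 || (have_prev && prev * d2 < 0.0))
            return (ptrdiff_t)i;
        prev = d2;
        have_prev = 1;
    }
    return -1;
}
```

For a cubic sampled at integer points the second difference is exact, so the index returned is the true inflection point.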
  3. The only rules in software are those imposed by the compiler. Everything else is just anti foot-shooting aphorisms and masochism.
  4. There is nothing to stop you sending the DVR reference as a message. DVRs are storage and messages are pseudo-events. There are no rules (except self imposed ones) that say the data has to be an integral part of a message. Sending a DVR reference (or queue name etc) is like the postman leaving a note telling you to pick the parcel up at the depot because it won't fit through the letterbox.
  5. Even so, isn't that what iconv does? I'm just trying to get a feel for the equivalencies and trying to avoid bear traps, because Linux isn't my native habitat.
  6. Is there a good reason to use a third party library like that rather than btowc which is in glibc?
  7. I'm adding support for NTLMv2 authentication and hashes to the Encryption Compendium for LabVIEW. I have implemented the NTLM SSP protocol in native LabVIEW (apart from one thing) and it's all working great under Windows. Now I am looking to make sure it still works on the other platforms. The "one thing" is the ASCII to UTF-16 conversion, since the protocol uses Unicode. With the older protocols (NTLMv1 etc.) that is not an issue, since the negotiation can tell the server to use ASCII strings instead of Unicode. However, NTLMv2 requires Unicode strings to create the hash per the specifications, so there is no negotiating it away. (Note: there is no need to display anything, so the bytes just need converting, for calculation, from a LabVIEW string to a U8 byte array representing the Unicode equivalent. No changes to LabVIEW indicators or ini files are required.) Under Windows, the conversion from ASCII to Unicode is via calls to the OS (kernel32.dll). So am I looking at iconv, mbsrtowcs or something else to achieve the same on Linux and Mac?
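As a point of comparison for the iconv route mentioned in the question, here is a minimal sketch using iconv(3) to produce the UTF-16LE bytes; "ASCII" and "UTF-16LE" are standard encoding names on glibc, and the function name is illustrative:

```c
#include <iconv.h>
#include <stdlib.h>

/* Convert an ASCII string to UTF-16LE bytes via iconv(3).
 * Returns a malloc'd buffer (2 bytes per ASCII char), NULL on error. */
unsigned char *to_utf16le(const char *in, size_t in_len, size_t *out_len)
{
    iconv_t cd = iconv_open("UTF-16LE", "ASCII");
    if (cd == (iconv_t)-1) return NULL;

    size_t cap = in_len * 2;            /* ASCII -> 2 bytes per char */
    unsigned char *buf = malloc(cap);
    if (!buf) { iconv_close(cd); return NULL; }

    char *src = (char *)in, *dst = (char *)buf;
    size_t src_left = in_len, dst_left = cap;
    size_t rc = iconv(cd, &src, &src_left, &dst, &dst_left);
    iconv_close(cd);
    if (rc == (size_t)-1) { free(buf); return NULL; }

    *out_len = cap - dst_left;
    return buf;
}
```

On Linux iconv lives in glibc itself; on the Mac the same API comes from libiconv, so the code is portable across both targets discussed in the thread.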
  8. Such a drama queen. I think most have weighed in and, as per usual, the applied programmers do still use them and the pure programmers don't. If you can wrap it in a polymorphic VI, you'll not only simplify the connector, you'll also get adapt-to-type for the different usages. You don't have to split out each method into a new VI of the polymorph, but you may have to split the typedef into a couple that make sense for the connectors. For example, if you have a Hash AE that has SHA and HMAC, then make two VIs for the polymorph: one with a typedef of SHA1, SHA256 and SHA512, and another with MD4, MD5, SHA etc.
  9. Oh, I thought you had already obtained the text, since you stated it looks like a series of question marks (so it just needed converting). LabVIEW ships with some automation examples. The one below (from the examples) interacts with Excel, but the principle is the same. I couldn't find any examples for MS Word without the Report Toolkit, because most interaction with MS products generally goes the other way - writing reports. I don't have M$ products installed to knock up a quick example, unfortunately.
  10. They are not "lost". There is just no mapping in the current code page to render them. While LabVIEW doesn't officially support Unicode, there are things you can do unofficially to display and manipulate Unicode strings.
  11. Well, there is no "Total bytes downloaded" emitted by the downloaders (or total size, for that matter), but if there was, then throttling or congestion would make you think the download had finished - so no, that won't do. A queue isn't as straightforward as it appears on the surface either, since the MB/s is usually driven by incoming data (receive n bytes in delta N ms and extrapolate bytes/s) rather than seeing how many bytes arrive in 1 second and then emitting an event. (That's why we need a rolling average.) Faster downloads will skew your average, since they will be putting more entries in your queue. I'm not sure why Dispatcher is thrown in here. That isn't a messaging framework or a memory accessor or really anything close to what we are discussing. Dispatcher is some turbocharged TCP/IP primitives for a specific purpose (pub/sub). It has more in common with Network Streams than a messaging framework. It is meant to be a component in your system, not the system itself. I suppose when an actor is just code that does something and messaging is just information flow, then even a simple API becomes a "Messaging Framework". I don't subscribe to the philosophy of tortured vocabulary, though. If you are looking for my "messaging framework" then you need to look at the VIM Demo, where it is one VI @ ~700KB. The merits of each messaging approach are irrelevant in this discussion, though. We should instead concentrate on the OP's question.
  12. That won't give you the snapshot. Take an extreme example: you are launching download "actors". Each downloader has an MB/s figure that it emits via your notifiers, and once the download is completed the downloader closes itself. You want to know the rolling 10-second average of throughput on your network card, so you launch 1000 downloads. Now you have the global/concurrent snapshot problem. At any one time you need to know how many are still open so you can for-loop through them and obtain the average. As you for-loop through them, on each iteration, they will be updating their notifiers before you get to include them in your calculation, and some will probably close, leaving their last value in there before you've finished looping - therefore including them when they shouldn't be. This is solvable with some sort of protected global storage (a DB, an AE, a DVR, a Go Pro) but not by pure messaging alone. But it is not learning "messaging". It is learning someone's particular flavour of framework: yours, AQ's, Michael's, Daklu's. Everyone and his dog has had a go. I was rather amused when watching the NI Systems Group's video about their DCAF thing. Where was the Actor Framework? If I could ban one word from the dictionary it would be "Framework". Every single framework I have seen in LabVIEW so far has been like this. And you are still expected to build the bike
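The "protected global storage" idea above can be sketched in C with a lock around the per-channel rates, so a reader snapshots every channel in one critical section and gets a self-consistent average; the names and the fixed channel count are illustrative:

```c
#include <pthread.h>

#define MAX_CH 1000

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static double g_rate[MAX_CH]; /* MB/s per downloader */
static int    g_open[MAX_CH]; /* 1 while downloader is alive */

/* Called by each downloader as data arrives. */
void rate_update(int ch, double mbps)
{
    pthread_mutex_lock(&g_lock);
    g_rate[ch] = mbps;
    g_open[ch] = 1;
    pthread_mutex_unlock(&g_lock);
}

/* Called when a downloader closes itself. */
void rate_close(int ch)
{
    pthread_mutex_lock(&g_lock);
    g_open[ch] = 0;
    pthread_mutex_unlock(&g_lock);
}

/* Snapshot: while the lock is held, no channel can update or close,
 * so all values belong to the same instant. */
double rate_snapshot_avg(void)
{
    double sum = 0.0;
    int n = 0;
    pthread_mutex_lock(&g_lock);
    for (int i = 0; i < MAX_CH; i++)
        if (g_open[i]) { sum += g_rate[i]; n++; }
    pthread_mutex_unlock(&g_lock);
    return n ? sum / n : 0.0;
}
```

This is essentially what an AE or a DVR gives you in LabVIEW: serialised access to shared state, which pure message passing cannot provide on its own.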
  13. An action engine to do the averaging over N channels is about 2 minutes to produce one VI (~2k) that can be explained to a novice in 30 seconds, regardless of the architecture you use. A class with a DVR is about 5 minutes and 3 or 4 VIs (about 30K), and you have to promise there will be only one (but that's probably OK for Neil), and could be explained to a novice in 30 secs IF he was OK with classes to begin with. What's your overhead, and how long until a novice would know it "very well"?
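For readers unfamiliar with the pattern, an action engine transliterates to C roughly as one function with static internal state and an enum selecting the action; the channel count and names here are illustrative:

```c
#include <stddef.h>

typedef enum { AE_INIT, AE_WRITE, AE_READ_AVG } ae_action;

/* One entry point, static state inside: the C analogue of a
 * LabVIEW action engine / functional global. */
double avg_engine(ae_action action, size_t ch, double value)
{
    static double vals[8];
    static size_t count;

    switch (action) {
    case AE_INIT: /* the "First Run" initialisation */
        for (size_t i = 0; i < 8; i++) vals[i] = 0.0;
        count = 0;
        return 0.0;
    case AE_WRITE:
        if (ch < 8) {
            vals[ch] = value;
            if (ch + 1 > count) count = ch + 1;
        }
        return value;
    case AE_READ_AVG: {
        double sum = 0.0;
        for (size_t i = 0; i < count; i++) sum += vals[i];
        return count ? sum / (double)count : 0.0;
    }
    }
    return 0.0;
}
```

The state lives in one place and every caller goes through the same entry point, which is why it can be explained so quickly.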
  14. Then I would suggest staying away from any of the class-based messaging systems that have proliferated. You have to create a class (and all the overrides, as well as inherit from their particular fetish for structure) for every damned message - which is comical. This is why I use simple strings for messaging.
  15. Because of aggregation. The goal is to average a number of values all being acquired asynchronously. There needs to be a way to snapshot the multiple values at an instant in time. If you use messaging, then by the time you have messaged each subsystem and got a response, the values are no longer at the same instant in time. If you use events, then the event structure only operates sequentially, so you have a similar problem. The AE makes this trivial. Other solutions require all sorts of hoops.
  16. If you want to go all experimental...... there are "Tags" too, which are global, IIRC. I suppose they could be leveraged for this but I've never really looked at them. Another thing that I used to do before databases was to use an INI file to abuse the OS cache. Most people don't realise that reading and writing to an INI file directly is almost as fast as accessing memory once it is cached, so there is no need to keep a lookup table in memory. (You can do the same with memory-mapped files to share between executables too.) This also has the side effect of giving persistence (great for resuming downloads that the user task-kills )
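The memory-mapped-file trick mentioned above looks roughly like this in C: a small struct mapped from a file behaves like ordinary memory but persists across runs and can be shared between executables. POSIX only; the struct layout and filename are illustrative:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    long bytes_done;   /* progress to resume from after a kill */
    long bytes_total;
} dl_state;

/* Map (creating if needed) the persistent state file.
 * Returns a pointer that reads/writes like memory, or NULL on error. */
dl_state *dl_state_open(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return NULL;
    if (ftruncate(fd, sizeof(dl_state)) != 0) { close(fd); return NULL; }
    void *p = mmap(NULL, sizeof(dl_state), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after close */
    return p == MAP_FAILED ? NULL : (dl_state *)p;
}
```

Writes go through the OS page cache, which is why the speed is close to plain memory access while still surviving a task-kill.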
  17. It's specmanship by a designer that never uses the end result. If you read the docs, it talks about clusters and the behaviour is the common denominator. It makes it a little bit useful for clusters while pretty much useless for anything else.
  18. That just takes us back to: ....and you seem to now agree. Let's just keep it simple and talk about memory accessors. I don't think Neil wants to refactor all the code into several weeks' worth of architectural redesign just to replace a very small component - rather a drop-in replacement for a standard (but possibly outdated) module type (pin-for-pin compatible, if possible). The problem is that, for simplicity, efficacy and ease of debugging, there aren't many alternatives if you take DVRs off the table.
  19. Everything's an airplane with enough thrust, right? Generalising to nebulous "actors" (which is just a hand-wave and means "insert your meaning here") and "objects" (which is another hand-wave meaning "describe something here") isn't the frame of the discussion. If that's the route, then I will generalise further to subsystems, services and processes, since I don't have "actors" and "objects" - but that shouldn't matter, right? Just for reference, we were at: and..... That's where we got sidetracked into the LabVIEW implementation specifics of globals, wires and so on (where I lost you with my By Val/Ref), and then you got me back with objects and actors, causing me to pass out through lack of oxygen
  20. I think now I know where you are trying to get to: indirection, of which "by reference" is an example. But the generalisation is inappropriate. AEs (and DVRs, for that matter) are just memory accessors with some safeguards and maybe some logic. To conflate them with processes, subsystems and services, where state rules supreme, is a fallacy. DVRs and AEs are stateless accessors, nothing more. (That's not to say people don't try to stuff complex state machines in AEs though )
  21. Well, 21.3xxxx is the maximum over-range of a 20mA current loop, IIRC. I suspect you told MAX it is a current loop rather than just applying a scaling factor to your voltage.
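For context, the scaling the post alludes to looks like this: a 4-20 mA loop is usually read as a voltage across a sense resistor and then mapped linearly onto the engineering-unit span. The 250-ohm resistor and 0-100 span below are illustrative values, not from the thread:

```c
/* Convert a voltage read across a sense resistor on a 4-20 mA loop
 * into engineering units: 4 mA maps to span_lo, 20 mA to span_hi.
 * Values above 20 mA (the over-range discussed above) simply
 * extrapolate past span_hi. */
double loop_to_units(double volts, double r_ohms,
                     double span_lo, double span_hi)
{
    double ma = volts / r_ohms * 1000.0;              /* V -> mA  */
    return span_lo + (ma - 4.0) / 16.0 * (span_hi - span_lo);
}
```

If the DAQ software is told the channel is a current loop, it applies this mapping (and its over-range limits) for you; applying it again to an already scaled value produces numbers like the one reported.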
  22. Well, you wouldn't get that with an AE if you have the usual "First Run" check or have used the feedback node instead of shift registers internally. With a DVR, though, no (the ref goes invalid when idle). With a DB, yes. However, this is where I usually bring up Suspend When Called
  23. I really don't know where you are trying to get to now. This is starting to look like an "Anything is an airplane given enough thrust." kind of argument.