Everything posted by ShaunR

  1. I have an aversion to PPLs since they are non-reversible and some people have had issues with them (that's why I prefer zipped LLBs). I tend to use directory structures quite a lot for dynamic loading, so zip files are an easy way to mirror the structure as well as compressing and including significant numbers of VIs/LLBs. At the end of the day, it's not really important. The important bit is that you have a plug-in (or in my case, modular) architecture.
  2. This is exactly what Dispatcher achieves (all of the above). But your client (or subscriber in Dispatcher) still needs the software bit to show your dialogue, which is what your first point and the following are about. Anyhoo. Zip it up (even an entire directory of VIs if you want; zipping maintains the directory structure). Send it across, unzip it and reload (LLBs are good for this). It's a standard plugin from that point on. Of course, if your current software is monolithic, then you will need to re-factor it to be a plugin architecture. We are lucky in LabVIEW that LV doesn't lock source files, so we can read/write/modify them whilst running.
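The "zip it up, send it across, unzip and reload" flow above can be sketched in Python, with `zipfile` standing in for LabVIEW's zip VIs. The plugin directory layout and file names here are invented for illustration; the point is that the archive preserves the directory tree, which is what dynamic loading relies on.

```python
# Sketch of the zip-based plugin distribution described above.
# Paths and the "plugins/motion" layout are placeholders.
import os
import tempfile
import zipfile

def pack_plugin(plugin_dir: str, zip_path: str) -> None:
    """Archive a plugin directory, keeping its internal structure."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(plugin_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the plugin root so the
                # directory tree is reproduced exactly on extraction.
                zf.write(full, os.path.relpath(full, plugin_dir))

def unpack_plugin(zip_path: str, dest_dir: str) -> None:
    """Extract the archive; the mirrored tree can then be scanned/loaded."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)

# Demo round-trip with a throwaway plugin tree.
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "plugins", "motion"))
with open(os.path.join(src, "plugins", "motion", "init.vi"), "w") as f:
    f.write("placeholder")
archive = os.path.join(tempfile.mkdtemp(), "plugin.zip")
pack_plugin(src, archive)
dest = tempfile.mkdtemp()
unpack_plugin(archive, dest)
```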
  3. So now you are talking about something completely different. These things are infrastructure topologies (more like Dispatcher) and don't have a lot to do with showing prompts remotely et al. It almost sounds (without knowing exactly what you have at the moment) like you are trying to fit a square peg in a round hole. Distributed systems have to be designed as distributed systems, and if your current software isn't (distributed) then just hacking at it will not be very fruitful IMHO.
  4. Option 3. The server has everything LabVIEW and the client is a browser with JavaScript prompts and interfaces. You wouldn't even need a LabVIEW run-time on the clients.
  5. Yup. I've seen this not only with dynamic formatting but also in normal usage. It doesn't seem to happen with the silver indicator (but does with the silver frame-less). Have you tried deferring the update?
  6. SSH needed

    Well. Some good news and some not so good news. But the "not so good news" is only a little bit not so good. First, the good news. I couldn't find MJE's interactive DOS thingy, but I found Rolf's pipes library (I don't think it ever got out of alpha, but it works). You can use this to run Plink.exe interactively instead of sending a command file (just remove the -m switch). I've just used it on my VPS and it was fine apart from some dodgy control characters that would need to be filtered, but I started an interactive session and was listing directories, sending commands and reading output. Now for the not so good news: Rolf wrapped the Windows pipes in a DLL that is 32 bit. So flash a bit of leg and maybe post some funny cat images and he might compile a 64 bit version for you. If Rolf does give you a 64 bit DLL, then I'll post a usage example to help you on your way.
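The interactive-Plink idea (pipes attached to stdin/stdout, no -m command file) can be sketched in Python, where `subprocess.Popen` plays the role of Rolf's pipes library. The host and user names are placeholders, and the launch is guarded so the sketch only runs Plink if it is actually installed.

```python
# Rough stand-in for driving Plink interactively through pipes.
# Dropping the -m switch keeps the session interactive instead of
# running a command file. "example.com"/"admin" are placeholders.
import shutil
import subprocess

def plink_interactive_cmd(host: str, user: str) -> list:
    # No "-m commands.txt" here: stdin/stdout stay attached to our pipes.
    return ["plink.exe", "-ssh", f"{user}@{host}"]

cmd = plink_interactive_cmd("example.com", "admin")

# Only attempt to launch if plink is actually on PATH.
if shutil.which("plink.exe"):
    proc = subprocess.Popen(
        cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT, text=True,
    )
    proc.stdin.write("ls -l\n")   # send a command down the pipe
    proc.stdin.flush()
    # ...read proc.stdout, filtering any control characters...
```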
  7. This is not correct. SSH does encrypt the passwords etc., but it is still a much better idea to use public key authentication and turn off password access, as a key is effectively impossible to brute force.
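The server-side half of that advice is a couple of lines in the OpenSSH daemon's config (something for the Linux bods to apply in `/etc/ssh/sshd_config` and then reload sshd). A minimal fragment might look like:

```
# Disable password logins; accept public keys only.
PasswordAuthentication no
PubkeyAuthentication yes
# While you're at it, don't let root log in directly.
PermitRootLogin no
```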
  8. Nothing wrong with a guy in his basement offering software. Difficult for defence companies, however, that have preferred suppliers and require dedicated support channels. The way I usually get round this sort of thing is to offer a short term contract; that way it guarantees the contractor a wage for their work (some people ask for stuff then say naah, don't need it now) and it is an acceptable business interaction from the company's point of view. Yes. But it is even easier than that. Just set everything up as a profile in PuTTY (being a point-and-clicky person I find a GUI is much better) then, when you need it, just run putty.exe -load myprofile (with System Exec). Well. Telnet is a protocol but it is also used to refer to the client. Windows 7 doesn't come with a telnet client installed by default any more (the client is called "telnet" and has to be installed with Add/Remove Programs) but the term is used interchangeably. I don't find it very useful to get into semantic arguments with IT people, who tend to be very anal, have no sense of humour and are completely unreasonable when it comes to their network. Just because you're not running the Windows telnet client and are using LabVIEW, it still means you are running a telnet client, if they want to be difficult. I would speak to the person that issued the diktat and ask if it's acceptable to use the telnet protocol over SSH, as LabVIEW has no native support for the SSH protocol and no 3rd parties have an acceptable solution. If they are only referring to removing the telnet client, then this is fine and would work. Be careful though. If they also say you need to remove the telnet server from the nix box, you will be stuffed. So far, I think the best of the evils (if PuTTY doesn't do it) is the Python approach, since jzoller knows of a Python SSH library. If you have plenty of C coders that need to justify their existence, you could get them to port OpenSSH to Windows (and compile it for 64 bit LabVIEW).
LabVIEW sucks at anything that has encryption/compression in it.
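The "putty.exe -load myprofile" trick above is a one-liner from code too; here it is sketched with Python's `subprocess` playing the part of LabVIEW's System Exec. "myprofile" is whatever saved-session name you created in the PuTTY GUI, and the launch is guarded on PuTTY being installed.

```python
# Launching a saved PuTTY profile, as described in the post.
# "myprofile" is a placeholder for your saved session name.
import shutil
import subprocess

def putty_load_cmd(profile: str) -> list:
    return ["putty.exe", "-load", profile]

cmd = putty_load_cmd("myprofile")

if shutil.which("putty.exe"):
    # Fire and forget, like System Exec with "wait until done" off.
    subprocess.Popen(cmd)
```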
  9. Had to share some humour; had me in stitches all day. DYAC. My favourite is the CAPSLOCK.
  10. Yes. This isn't easy in LabVIEW as the only interface you have is the one-shot Shell Command. MJE did put together an interactive CMD session that could be used for this purpose (not sure where it is though). However, you do not necessarily need to keep the link open to be able to achieve a pseudo-interactive session, but there is probably a simpler way, and it's what PuTTY was really designed for. By far the easiest solution for you would be to tunnel your telnet through SSH (you can tunnel anything through SSH). Create a local "proxy" that your telnet client connects to on your local machine (127.0.0.1) and it will be completely transparent to your telnet client, but secure. You wouldn't even have to write any code. As a slight tangent, I also looked at that software package you mentioned earlier a while back. It looked great but relies on a DLL and is just a thin wrapper around that (not cross platform, so no good to me). You could get them to compile the DLL for 64 bit so that you could use it (or you could use LV 32 bit). The long and the short of it is that there is no native SSH library support on Windows (in any language that I know of; maybe Rolf has something) and the only people that really care about it are Linux bods, since it is only really of any use to connect to Linux boxes. Most Windows users avoid Linux like the plague and get by, when they need it, by tunneling with PuTTY. Windows users use VNC or Remote Desktop rather than SSH to talk to remote computers. I count myself in this category too, and it annoys the hell out of Linux bods when I say to them that all flavours of desktop Linux are themed Windows 95 with an encrypted DOS prompt. If you want to really annoy them: when they rattle off a command line (because every Linux user has an eidetic memory and therefore anyone that doesn't is an amoeba to be treated with scorn), just say to them "you lost me when you said 'type in'".
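One way to set up that local "proxy" is SSH local port forwarding: the telnet client connects to a port on 127.0.0.1 and the traffic rides inside the SSH session to the remote box's telnet port. The sketch below builds a Plink command using its -L forwarding flag; host, user and port numbers are placeholders.

```python
# Tunnelling telnet through SSH with Plink's -L local forward.
# "nixbox.example.com"/"operator" and the ports are placeholders.
import shutil
import subprocess

def tunnel_cmd(ssh_host: str, user: str,
               local_port: int = 2323, remote_port: int = 23) -> list:
    # -N: no remote command, tunnel only.
    # -L: forward 127.0.0.1:local_port to the remote telnet port.
    return ["plink.exe", "-ssh", "-N",
            "-L", f"{local_port}:127.0.0.1:{remote_port}",
            f"{user}@{ssh_host}"]

cmd = tunnel_cmd("nixbox.example.com", "operator")

if shutil.which("plink.exe"):
    subprocess.Popen(cmd)
# The telnet client then connects to 127.0.0.1:2323, completely
# unaware that its traffic is encrypted in transit.
```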
  11. What's the issue with PuTTY? I can connect to my VPS and execute commands with LabVIEW without any problems. (Maybe they are hesitant because you don't see anything in the DOS prompt, as it is redirected to the Shell Execute.) An important point, however: the examples tend to send the username and password in plain text. Do not do this! Instead you need to create a private and public key pair (not very easy for point-and-clicky people; best to get the Linux bods to do it for you and give you the private key, and make sure they give you one that doesn't require a password) and tell the SSH server on the remote machine (again, Linux bods) to only accept that key (or a couple of them if you have multiple users). Then PuTTY will do a secure authenticated login using the keys. This is the command line you will need to execute (note the ppk key file). The -P parameter is the port number, which should be non-default for good measure, but you can leave it out while you are getting it to work on the default port. Note: all my files including PuTTY are in a c:\temp\putty directory.
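The original command line did not survive the copy/paste above, so what follows is a hedged reconstruction of a typical key-based Plink invocation of the kind described, built here in Python as the list you would hand to System Exec. The host, user, key path and port are all placeholders.

```python
# Reconstruction (not the original post's command) of a key-based
# Plink call: -i points at the .ppk private key, -P at a non-default
# port. All names and paths below are placeholders.
import shutil
import subprocess

def plink_key_cmd(host: str, user: str, ppk: str, port: int = 2222) -> list:
    return ["c:\\temp\\putty\\plink.exe",
            "-ssh",
            "-i", ppk,          # private key in PuTTY's .ppk format
            "-P", str(port),    # non-default port "for good measure"
            f"{user}@{host}",
            "uptime"]           # remote command to run

cmd = plink_key_cmd("example.com", "labuser", "c:\\temp\\putty\\mykey.ppk")

if shutil.which(cmd[0]):
    out = subprocess.run(cmd, capture_output=True, text=True)
```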
  12. "Set From JSON String.vi", "Name.vi" and "First Char.vi" are not set to re-entrant. Just downloaded the latest version and "Set From JSON String.vi" now seems to be re-entrant, which cures the blocking. The others are still not re-entrant though.
  13. DLLs cannot be run standalone. You always need a host application. Why do you need a parameter interface between "new and legacy" code? What I was suggesting is you just pass in a filename to process and a filename for output (cmd line params) and hey presto, you get the DB that your real application can use. You can put it on your website for people to download if they need it (perhaps bundle it with the main app to begin with) and once their files have been converted, they will no longer require it at all. It never needs to clutter your application code-base; rather, it is a separate project as a bridge to deprecation of the old file format.
  14. A common way of solving this with the minimum effort is to create a conversion tool. It only needs to be created once, then you can forget about it and eventually you will no longer need it. Some even offer the feature as a menu option in the main application which just invokes the converter. If it's easy to spit out your custom format as a delimited text file or a number of files, you can easily import them with the API File tools (2 steps rather than one, but may be a lot easier).
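The one-shot converter idea above (filename in, delimited file out, then forget about it) can be sketched as a tiny standalone tool. The "legacy" key=value format below is invented purely for illustration; the shape of the tool, a pure bridge that the main application never depends on, is the point.

```python
# Minimal sketch of a standalone legacy-format converter.
# The key=value "legacy" format is a made-up example.
import csv
import os
import tempfile

def convert(in_path: str, out_path: str) -> int:
    """Convert key=value lines to CSV; returns the number of records."""
    rows = []
    with open(in_path) as f:
        for line in f:
            line = line.strip()
            if line and "=" in line:
                key, value = line.split("=", 1)
                rows.append((key, value))
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return len(rows)

# Demo: fabricate a legacy file and convert it.
src = os.path.join(tempfile.mkdtemp(), "legacy.dat")
with open(src, "w") as f:
    f.write("voltage=3.3\ncurrent=0.5\n")
dst = src.replace(".dat", ".csv")
n = convert(src, dst)
```

In practice the two paths would come from the command line (or a menu option in the main application that just invokes the converter), exactly as the post suggests.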
  15. Perhaps I didn't make it clear. I was not suggesting that other languages don't need the run-time. Just that they only need the one run-time as opposed to, say, a LabVIEW 2011 exe with a 2009 DLL, which needs two, I believe.
  16. Why not just wait until you come back when the others will have been addressed too?
  17. Well. A DLL is a self-contained list of compiled executable functions (they are not equivalent to programs). If you call a function using the CLFN, I don't think it has anything to do with any libraries you have placed on the block diagram (scope-wise). As long as the function exists in the DLL and there is only one DLL with that name, that is all that is required (parameter lists of course need to be considered). Unless you are dynamically loading an external lvlib from the DLL (which is a bit silly), I don't really understand the question. DLLs are meant to make programs modular just as lvlibs are. It tends to be one or the other, with lvlibs being native to LabVIEW. If you have compiled a lvlib into a DLL, then your exe will use whatever version of the lvlib you compiled the DLL with (your program only knows of the function name and parameters to pass). Replace the V1 DLL with the V2 DLL and your program will not know anything is different unless the parameter lists have changed for the function calls. That's the whole point of them: so you can update/modify parts of the code without affecting/recompiling everything else. That said, there are a couple of caveats that are peculiar to LabVIEW DLLs specifically. Rolf has outlined a bit of it in that LabVIEW DLLs have a huge dependency on the run-time and it's not easy to know what those dependencies are. So you can find yourself a while down the road backed into a corner, installing every version of the LabVIEW run-time known to man to keep the various bits of your "modular" code that you have developed over several LV versions working, and wondering why, this time around, it runs like a slug. You also lose your cross-platform capabilities! (Cannot create .so or dylibs/frameworks with LabVIEW.) My advice is don't use LabVIEW DLLs unless it's for use in another programming language and your options to provide an interface for them are limited.
Other languages' executables don't have the same dependencies as LabVIEW executables, so they are less likely to run into version problems between the DLL and the program itself.
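The "your program only knows of the function name and parameters to pass" contract can be shown with a tiny ctypes example: the caller binds to a shared library by name and signature only, so swapping the library for a newer build is invisible as long as the signatures hold. The C runtime's `abs()` is used because it exists on both Windows (msvcrt) and Unix (libc).

```python
# A caller's entire knowledge of a shared library: the function name
# plus argument/return types. Nothing about the library's internals
# (or version) leaks across this boundary.
import ctypes
import ctypes.util
import sys

if sys.platform == "win32":
    crt = ctypes.CDLL("msvcrt")
else:
    crt = ctypes.CDLL(ctypes.util.find_library("c"))

crt.abs.argtypes = [ctypes.c_int]
crt.abs.restype = ctypes.c_int

result = crt.abs(-42)
```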
  18. It seems to be blocking with arrays. Here's an example to demonstrate. The text file is made up of giant arrays so it makes it much more obvious (takes about 0.4 mins to execute). I'll take a closer look on Sunday. A bit strapped for time at the moment. Sweet. I'll take a gander later.
  19. Indeed. There is zero error checking in the prototyping beyond what prevented me from achieving the feasibility testing (that's one of the reasons why I call it prototyping rather than alpha or beta). Now I have piqued interest, we can start productionising and locking down the robustness. We also need to do the standard checks like ensuring that allocation actually succeeds etc. I'll add some of your changes to the next release. There is no point in adding a first-call flag to the pointer block. That would require shift registers in the user's code. Although you have added it, you haven't actually used it, have you? I think I will pretty much be echoing what others are saying when I say DSCheckPtr isn't even worth being a function. The only thing it seems to check is that the pointer is not null (pass in "1" and it will pass). Not surprising really; it's the same problem in C. Any pointer checking is probably not going to yield a robust method. The method I described earlier works really well. It doesn't rely on the readers or writers trying to detect pointers. It relies on not being able to deallocate until everything has finished using them. This raises a slightly different issue though. It works great for asynchronous deallocation as long as the readers and writers are able to propagate their state. In a while loop this is fine, as an extra iteration is possible so that they can read the registration bit and un-set their active bit. Not so good for the fixed for-loops though, as the extra iteration cannot happen if you wait until all have completed their full number of iterations (works OK before then though).
  20. +1. There was no such thing as a run-time error until that came in
  21. Ooooh. I missed this bit. I'm not sure if I am now answering my modified readers list suggestion with the manager's bit AND the readers' bit or not. So I'll plough ahead with the assumption that this is in response to that scenario (apologies if that's not the case). No. The Read 1 and DeInit are not operating on the same memory location. Therefore there is no requirement for CAS and, as only one writer can write to any bit, there is no race condition. Each writes to its own bit associated with each reader (the regman writes to several locations in the list but it is still the only writer to those locations; the readers each handle their own "active" bit so they each are still the only writer for their bit in the list). The writer only reads both bits for all readers to determine: a) Is this reader to be included in the lowest check (only the registration manager's bit is important for this)? b) Are there any readers at all (all the regman's bits are false AND all the readers' bits are false)? If not, exit and set "Finished". In this scenario, it is the writer that signals everything is OK to kill the pointers (it is monitoring all the bits), not deinit. Deinit unregisters all the readers, then waits for the "finished" bit to go high, then it deallocates. The "active" bit in each reader is a proxy for the regman's bit, in that the regman's bit is read and then the value written to the "active" bit for that reader on exit (no other reads/writes happen after). When the "finished" bit goes high, all readers and the writer have already exited and will not be reading/writing to any pointers. I'm hoping to have a knock-up of the registration over the weekend to test the idea and see what I run into (depends how drunk I get Saturday night; hangovers don't get worse as you get older, they just get longer).
  22. The "ball-of-mud" is the one pattern that rules them all. Just don't use it on any of my projects.
  23. As it stands, yes. The pointer cluster is "on-the-wire" and when we deinit we just leave the cluster alone. But it doesn't have to be (it just makes it easier to debug). If I were to start on a "Classic LabVIEW" API, I would probably also shove the pointer cluster into memory and you wouldn't need a "reference" wire at all, hell, not even a global variable to contain them (no "not allocated" issues then). With the former, I might use the "CheckPointer" call if it wasn't too expensive and it does what I think it does, as belt and braces with a check for a null pointer on the cluster (if the cluster doesn't exist then neither do the others, and vice versa). But if you are looking at classes, I thought you would want to manage all that in the class... somehow. If I put everything into memory and handle all the scenarios, there isn't much point in a class at all, apart from making people feel warm and fuzzy about OOP. I think the issue you are probably running into is that you are finding the best means to achieve what you need in a class is a DVR. But you can't use them because of the locking overhead. If you find you are looking to a DVR, then the structure needs to be via the MM functions, as they are basically just a method to provide the same function, but without the DVR's locking overhead. Anything else can "probably" be in the class. I would be "happy for now". I don't think the unallocated pointers are an insurmountable issue, and at worst we just need some state flags. I would come back to it on another iteration to see what needs to change and what the performance impact is of a million flags all over the place. I don't think we do (need a test 'n set). The premise of the pattern is mutual exclusion through memory barriers only. As long as you only have one writer to any location then no test and set is necessary.
As we have solved the issue of accessing individual elements in arrays without affecting others or locking the rest of the array, all we need to ensure is that write responsibility is well defined (only one writer to a single location or block). The only time a test 'n set would be required is if we couldn't guarantee atomic reads and writes of the individual bits (PPC?). As an aside: anecdotally, it seems writing all contents to a cluster is an atomic operation in LabVIEW and incurs a marginal overhead as opposed to accessing the memory locations independently. The overhead is minuscule in comparison to the extra library calls required to achieve the latter. If this can be confirmed, then it is the fastest way to add atomicity to entire blocks of locations when needed. This is one reason why I use a cluster for the elements in the index array.
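The single-writer-per-location rule discussed above can be modelled in a few lines: each reader thread writes only its own "active" flag, and the writer/deinit side only ever reads the flags, so no compare-and-swap is needed. This is a toy Python model, not the LabVIEW MM implementation; CPython's GIL stands in for the atomic per-bit writes the post assumes.

```python
# Toy model of the one-writer-per-location pattern: readers each own
# exactly one flag; the "writer" side only reads, so no CAS is needed.
import threading

NUM_READERS = 4
active = [True] * NUM_READERS   # one flag per reader, one writer each

def reader(idx: int) -> None:
    # ...reads of the shared buffer would happen here...
    active[idx] = False         # the ONLY writer to active[idx]

threads = [threading.Thread(target=reader, args=(i,))
           for i in range(NUM_READERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The deinit/writer side only ever reads the flags before deallocating:
all_done = not any(active)
```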