Posts posted by ShaunR

  1. But it is time to move on and try new and (hopefully) better things. 

     

    I'm more of a "If it's not broken; don't fix it" sort of person :) (Like waiting until LV 2013 SP1 is released :P

     

    I think the main difference in your approach is that it will be a push rather than the more common pull distribution. If this is the case, then security may have to be a greater consideration, depending on your environment and exposure.

  2. At this point I am thinking of using a PPL to store the prompt and its support files.  That way, I should be able to just send the PPL across.

     

    On a side note, I am playing around with reading a VI file into LabVIEW and then writing it back to disk in a new location.  (I will need to do this because I need to send the file over the network, regardless if it is a VI, PPL or ZIP.)

    For some reason, LabVIEW does not want to open my new copy of the VI, even though it is an exact byte by byte duplicate of the original (verified by comparing the file data in LV and with WinDiff).  So that is not good.

    <edit: nevermind. forgot to turn off convert EOL. works fine now>

     

    I have an aversion to PPLs since they are non-reversible and some people have had issues with them (that's why I prefer zipped LLBs). I tend to use directory structures quite a lot for dynamic loading, so zip files are an easy way to mirror the structure as well as to compress and include significant numbers of VIs/LLBs. At the end of the day, it's not really important. The important bit is that you have a plug-in (or, in my case, modular) architecture.

  3. By splitting them, the client application can view sessions from multiple servers at the same time.  And multiple clients can view sessions from the same server.  I can have one server for everything or I can have N servers as demand increases.  And I can have N clients using that same server pool.  I can also have clients view sessions based on criteria besides what server they are executing on.

    This is exactly what Dispatcher achieves (all of the above). But your client (or subscriber, in Dispatcher terms) still needs the software to show your dialogue, which is what your first point and the following are about.

    Anyhoo.

     

    With the current system, you can only see the sessions on the server you are viewing since the UI is hosted by the same application that executes the plugins.

    The best analogy I can think of is a web server and browser.  The current system would be equivalent to having the web server and browser be a single EXE.  To view the web page, you would have to log onto the machine running this exe and while you were using it, no one else could.  So, to view a web page, you would need to give each user their own server/browser application.  Also imagine that while the page is being viewed by one exe, it cannot be viewed by another at the same time.  The new architecture separates these functions but the server still needs to serve up some UI information at run-time that the client cannot have prior knowledge of.  Ideally I would prefer to have the flexibility of having the UI information be a fully functional VI.  That way I am not limited in the future to what my prompts can look like or do.  But, if that is impossible I might have to make a generic prompt and simply push configuration info from the server to the client.

    I have tried to keep the problem description generic because the point of the thread was to find a way to dynamically push a VI to another machine and run it.  That seems like something that would have more applications than just my current design.

     

    So, not concerning yourself with my motivation, does anyone know a good way to send a VI (actually a hierarchy of VIs) from one LabVIEW EXE to another and execute it at run-time?

     

    Zip it up (even an entire directory of VIs if you want; zipping maintains the directory structure). Send it across, unzip it and reload (LLBs are good for this). It's a standard plugin from that point on. Of course, if your current software is monolithic, then you will need to refactor it to be a plugin architecture. We are lucky in LabVIEW that LV doesn't lock source files, so we can read/write/modify them whilst running.
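
    As a rough sketch of the transport side only (shown in Python rather than G, with placeholder paths), the pack/ship/unpack step is nothing more than:

        import zipfile
        from pathlib import Path

        def pack_plugin(plugin_dir: str, zip_path: str) -> None:
            """Zip plugin_dir, keeping the relative directory structure."""
            root = Path(plugin_dir)
            with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
                for f in root.rglob("*"):
                    if f.is_file():
                        zf.write(f, f.relative_to(root))

        def unpack_plugin(zip_path: str, dest_dir: str) -> None:
            """Unzip into whatever directory the running application scans for plugins."""
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(dest_dir)

        # pack_plugin(r"C:\plugins\MyPrompt", r"C:\outbox\MyPrompt.zip")
        # ...send MyPrompt.zip as raw bytes (no EOL conversion!) over your transport...
        # unpack_plugin(r"C:\inbox\MyPrompt.zip", r"C:\app\plugins\MyPrompt")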

  4. Unfortunately that is not an option for me.  The client needs a lot of custom code.  It needs to manage multiple servers at multiple sites, shared views and access control for multiple clients viewing the same server session, database recall of historical output, etc...  The client is actually more complex than the server and has just as much parallelism if not more than the server.  So, LabVIEW is a natural fit.  I would need a whole team of Javascript devs to attempt to reproduce it.

     

    But thanks for the idea anyways.

     

    So now you are talking about something completely different. These things are infrastructure topologies (more like Dispatcher) and don't have a lot to do with showing prompts remotely et al. It almost sounds (without knowing exactly what you have at the moment) like you are trying to fit a square peg in a round hole. Distributed systems have to be designed as distributed systems, and if your current software isn't (distributed) then just hacking at it will not be very fruitful IMHO.

  5. Yup. I've seen this not only with dynamic formatting but also in normal usage. It doesn't seem to happen with the silver indicator (but does with the silver frameless one). Have you tried deferring the update?

  6. Well. Some good news and some not so good news. But the "not so good news" is only a little bit not so good :P

     

    First the good news.

    I couldn't find MJE's interactive DOS thingy, but I found Rolf's pipes library (I don't think it ever got out of alpha, but it works).

     

    You can use this to run Plink.exe interactively instead of sending a command file (just remove the -m switch). I've just used it on my VPS and it was fine apart from some dodgy control characters that would need to be filtered, but I started an interactive session and was listing directories, sending commands and reading output.

     

    Now for the not so good news:

    Rolf wrapped the Windows pipes in a DLL that is 32-bit. So flash a bit of leg and maybe post some funny cat images and he might compile a 64-bit version for you ;)

     

    If Rolf does give you a 64-bit DLL, then I'll post a usage example to help you on your way.
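
    In the meantime, just to show the general shape of an interactive pipe session (this is a minimal Python sketch, not Rolf's library; the plink path, host and key file are placeholders):

        import subprocess

        # Drive plink interactively through stdin/stdout pipes.
        proc = subprocess.Popen(
            [r"C:\temp\putty\plink.exe", "-ssh", "user@myserver.example",
             "-i", r"C:\temp\putty\putty_key_noPassword.ppk"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT, text=True)

        proc.stdin.write("ls -l\n")    # send a command down the pipe
        proc.stdin.flush()
        print(proc.stdout.readline())  # read one line of the response
                                       # (a real reader needs a thread or timeout,
                                       #  since readline() blocks if no newline arrives)
        proc.stdin.write("exit\n")
        proc.stdin.flush()
        proc.wait()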

  7. The examples tend to send the username and password in plain text. 

     

    This is not correct. SSH does encrypt the passwords, etc., but it is still a much better idea to use public key authentication and turn off password access, as keys are effectively impossible to brute-force.

  8. I asked the guy at Labwerx about making a 64-bit version and there was a bit of whinging about how much time it would take, how much testing there would be, etc. Maybe I should volunteer to be a beta tester. But you never really know if there's a real company behind products like that or it's just some guy in his basement doing this in his spare time.

    Nothing wrong with a guy in his basement offering software ;) It is difficult for defence companies, however, that have preferred suppliers and require dedicated support channels. The way I usually get round this sort of thing is to offer a short-term contract; that way it guarantees the contractor a wage for their work (some people ask for stuff then say "naah, don't need it now") and it is an acceptable business interaction from the company's point of view.

     

    Unfortunately I can't just not use telnet -- telnet has to be removed from the system. But that may not be a problem... (see part B)

    A) current problem

     

    If I understand you (and the putty doc) correctly, if I want to talk to 192.168.3.24:5678 on a remote machine, I can run something like the following:

      c:\temp\putty\plink.exe -ssh -L 127.0.0.1:1234:192.168.3.24:5678 -i c:\temp\putty\putty_key_noPassword.ppk

    then just set up my telnet session to talk to 127.0.0.1:1234, and putty takes care of getting the messages to/from the linux box via ssh. Yes?

    Last but not least, I assume I'm issuing the above command with "System Execute.vi"?

     

    Yes. But it is even easier than that. Just set everything up as a profile in PuTTY (being a point-and-clicky person, I find a GUI is much better) then, when you need it, just run putty.exe -load myprofile (with System Exec).
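
    For reference, this is roughly what the tunnel-plus-local-telnet combination looks like if you drive it from a script instead of System Exec. Python sketch only; host, key path and ports are placeholders, and the -N switch (forward ports without starting a remote shell) is my addition, not part of the command quoted above:

        import socket
        import subprocess
        import time

        # Start the plink tunnel: local 127.0.0.1:1234 -> remote 192.168.3.24:5678.
        tunnel = subprocess.Popen(
            [r"C:\temp\putty\plink.exe", "-ssh", "user@myserver.example", "-N",
             "-L", "127.0.0.1:1234:192.168.3.24:5678",
             "-i", r"C:\temp\putty\putty_key_noPassword.ppk"])
        time.sleep(2)                       # crude wait for the tunnel to come up

        # Talk "telnet-style" to the local end of the tunnel.
        s = socket.create_connection(("127.0.0.1", 1234))
        s.sendall(b"some command\r\n")      # whatever your telnet session would send
        print(s.recv(4096))
        s.close()
        tunnel.terminate()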

     

    B) just semantics?

    I just wandered thru the LV telnet commands and want to know if I'm missing something...

    It looks like all they are doing is setting up and monitoring a TCP connection to port 23. So in regular usage the connection is actually to some remote box that has a telnet server listening to port 23.

    But now instead of a telnet server, I will be talking to putty. So, in essence, I'm not running telnet on my machine, I'm just using the telnet mechanism to open a plain ole tcp connection on my own machine to putty and format writes/reads between my code and the remote machine.

    Am I off base here, or if I do all of the above (and it works!) am I actually *not* using telnet?

    Well. Telnet is a protocol, but it is also used to refer to the client. Windows 7 doesn't come with a telnet client installed by default any more (it's called "telnet" and has to be installed with Add/Remove Programs), but the term is used interchangeably. I don't find it very useful to get into semantic arguments with IT people, who tend to be very anal, have no sense of humour and are completely unreasonable when it comes to their network. Just because you're not running the Windows telnet client and are using LabVIEW, it still means you are running a telnet client, if they want to be difficult. I would speak to the person that issued the diktat and ask if it's acceptable to use the telnet protocol over SSH, as LabVIEW has no native support for the SSH protocol and no 3rd parties have an acceptable solution.

     

    If they are only referring to removing the telnet client, then this is fine and would work. Be careful though. If they also say you need to remove the telnet server from the nix box, you will be stuffed.

     

    So far, I think the best of the evils (if PuTTY doesn't do it) is the Python approach, since jzoller knows of a Python SSH library. If you have plenty of C coders that need to justify their existence, you could get them to port OpenSSH to Windows (and compile it for 64-bit LabVIEW). LabVIEW sucks at anything that has encryption/compression in it.
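
    For what it's worth, one widely used Python SSH library is paramiko (which may or may not be the one jzoller has in mind). A minimal sketch, assuming the key has been exported to OpenSSH format, since paramiko doesn't read .ppk files directly, and with placeholder host/user names:

        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("myserver.example", username="user",
                       key_filename="id_rsa_exported_from_ppk")  # OpenSSH-format private key
        stdin, stdout, stderr = client.exec_command("ls -l")
        print(stdout.read().decode())
        client.close()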

  9. I looked up Plink, and what documentation I found says several times "Plink is probably not what you want if you want to run an interactive session in a console window."  I don't want to run an interactive session in a console window, but I do want to run a sorta interactive session from LabVIEW. 

    It looks like what you've sent me is just how to run a one-time script.  Is that correct?

     

    Yes.

     

    I need to maintain a connection thru several command/response cycles where the next command may be dependent on the response received (all automated -- there is no user intervention other than selecting the box to talk to). I do this with telnet now -- it's just finding some way to do the session connect/maintain part with ssh versus telnet that I'm stuck on.

     

     

    This isn't easy in LabVIEW, as the only interface you have is the one-shot shell command. MJE did put together an interactive CMD session that could be used for this purpose (not sure where it is, though). However, you do not necessarily need to keep the link open to be able to achieve a pseudo-interactive session, but there is probably a simpler way, and it's what PuTTY was really designed for.

    By far the easiest solution for you would be to tunnel your telnet through SSH (you can tunnel anything through SSH :) ). Create a local "proxy" that your telnet client connects to on your local machine (127.0.0.1) and it will be completely transparent to your telnet client, but secure. You wouldn't even have to write any code ;)

     

    As a slight tangent: I also looked at that software package you mentioned earlier a while back. It looked great, but it relies on a DLL and is just a thin wrapper around that (not cross-platform, so no good to me). You could get them to compile the DLL for 64-bit so that you could use it (or you could use LV 32-bit).

     

    The long and the short of it is that there is no native SSH library support on Windows (in any language that I know of; maybe Rolf has something) and the only people that really care about it are Linux bods, since it is only really of any use for connecting to Linux boxes. Most Windows users avoid Linux like the plague and get by, when they need it, by tunnelling with PuTTY. Windows users use VNC or Remote Desktop rather than SSH to talk to remote computers.

     

     

    (Since you're being helpful, I'm ignoring the slam on us "point and clicky people" :P )

    I count myself in this category too, and it annoys the hell out of Linux bods when I say to them that all flavours of desktop Linux are themed Windows 95 with an encrypted DOS prompt :D

    If you want to really annoy them, when they rattle off a command line (because every Linux user has an eidetic memory and therefore anyone that doesn't is an amoeba to be treated with scorn), just say to them "you lost me when you said 'type in'" ;)

  10. What's the issue with PuTTY? I can connect to my VPS and execute commands with LabVIEW without any problems. (Maybe they are hesitant because you don't see anything in the DOS prompt, as it is redirected to the Shell Execute.)

     

    An important point, however......

     

    The examples tend to send the username and password in plain text. Do not do this! Instead you need to create a private and public key pair (not very easy for point-and-clicky people; best to get the Linux bods to do it for you and give you the private key, and make sure they give you one that doesn't require a password) and tell the SSH server on the remote machine (again, Linux bods) to only accept that key (or a couple of them if you have multiple users). Then PuTTY will do a secure, authenticated login using the keys.

     

    This is the command line you will need to execute (note the .ppk key file). The -P parameter is the port number, which should be non-default for good measure, but you can leave it out while you are getting it to work on the default port.

     

    C:\temp\putty\plink.exe myserver.com -P 32998 -ssh -i C:\Temp\putty\putty_key_noPassword.ppk -m C:\Temp\putty\scriptfile.txt

    Note: all my files including putty are in a c:\temp\putty directory
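
    The same one-shot call driven from code rather than by hand (in LabVIEW this would go through System Exec.vi) looks roughly like this Python sketch, reusing the paths above:

        import subprocess

        result = subprocess.run(
            [r"C:\temp\putty\plink.exe", "myserver.com", "-P", "32998", "-ssh",
             "-i", r"C:\Temp\putty\putty_key_noPassword.ppk",
             "-m", r"C:\Temp\putty\scriptfile.txt"],
            capture_output=True, text=True, timeout=60)
        print(result.stdout)      # whatever the commands in scriptfile.txt printed
        print(result.returncode)  # non-zero usually means the connection or script failed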

  11. Hi Manual,

     

    thanks for the report; the last commit by James should fix this.

    Shaun, I haven't seen this; using a quite large JSON set and parsing it twice in parallel, both methods took the same time.

     

    Regards,

     

    Ton

    "Set From JSON String.vi", "Name.vi" and "First Char.vi" are not set to re-entrant.

    Just downloaded the latest version, and "Set From JSON String.vi" now seems to be re-entrant, which cures the blocking. The others are still not re-entrant, though.

  12. Indeed, that's exactly what we are doing by breaking out this code to a stand-alone component, be it a DLL or EXE. It will be self-contained with its own versions of whatever it needs.

     

    The DLL is nice in that it allows relatively easy passing of parameters between the new and legacy code. If we go an EXE route, we need to create some sort of wrapper to pass things out since we don't have access to stdout/errout or the exit code in LabVIEW (passing things in is easy via the command line).

     

    DLLs cannot be run standalone. You always need a host application.

     

    Why do you need a parameter interface between "new and legacy" code? What I was suggesting is that you just pass in a filename to process and a filename for output (command-line params), and hey presto, you get the DB that your real application can use. You can put it on your website for people to download if they need it (perhaps bundle it with the main app to begin with), and once their files have been converted, they will no longer require it at all. It never needs to clutter your application code base; rather, it is a separate project acting as a bridge to deprecation of the old file format.

  13. definitely don't want to have to distribute multiple LabVIEW, MSVC, and SQLite redists, so I suppose this component would have to be recompiled in whatever flavor of run-times I'm using at the time. However the recompile would still be from legacy versions of our source code, and could be tested with legacy unit tests. Our core development would still be free from having to support the old API we were using in this legacy code.

    A common way of solving this with the minimum effort is to create a conversion tool. It only needs to be created once; then you can forget about it and eventually you will no longer need it. Some even offer the feature as a menu option in the main application, which just invokes the converter. If it's easy to spit out your custom format as a delimited text file (or a number of files), you can easily import them with the API File tools (two steps rather than one, but it may be a lot easier).
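
    As an illustration only (the names and conversion logic are made up, not the actual tool), a stand-alone converter of that shape is little more than:

        import argparse

        def convert(src_path: str, dst_path: str) -> None:
            """Read the legacy file and write the new format (placeholder logic)."""
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                dst.write(src.read())  # real code would translate the format here

        if __name__ == "__main__":
            parser = argparse.ArgumentParser(description="Legacy file converter")
            parser.add_argument("input", help="legacy file to convert")
            parser.add_argument("output", help="where to write the converted file")
            args = parser.parse_args()
            convert(args.input, args.output)

    The main application (or the user) just invokes it with the two filenames and then forgets it exists.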

  14. Shaun, in theory you are right. In practice, a LabVIEW DLL is a C wrapper for each function that invokes the corresponding pre-compiled VI inside the DLL. As such, there needs to be some runtime support to load and execute these VIs. This usually happens inside the corresponding LabVIEW runtime, which is launched from the wrapper. Some kind of Münchhausen trick, really. However, at least in earlier versions of LabVIEW, if the platform and LabVIEW version of the compiled DLL were the same as the calling process, then the wrapper invoked the VIs inside the DLL directly in the context of the calling LabVIEW process.

     

    Perhaps I didn't make it clear. I was not suggesting that other languages don't need the run-time, just that they only need the one run-time as opposed to, say, a LabVIEW 2011 EXE with a 2009 DLL, which needs two, I believe.

  15. I've never built a DLL using LabVIEW before, but am starting to think this may be the way to go for a component I'm working on. However, I have one concern where my google-fu is failing me and thought I'd lob this one over the fence, since at least one of the lava gurus here likely knows the answer.

     

    Consider this situation:

     

    My DLL is built and includes SharedLibrary.lvlib (version 1).

    My EXE is built and includes SharedLibrary.lvlib (version 2).

     

    That is, both the DLL and EXE reference the same library, albeit incompatible versions. Each fully includes all the dependencies it needs to run on its own. With respect to each library version, the namespaces are identical -- that is, we don't have SharedLibraryVersion1.lvlib and SharedLibraryVersion2.lvlib, but two different versions of SharedLibrary.lvlib.

     

    Now let's say my EXE needs to dynamically load my DLL: Do I have a problem? Am I going to run into any weird namespace collision issues? I would hope everything should be locked properly behind their respective boundaries, but...

     

    If this is a problem it's no big deal, I could always change the DLL into another executable, but I'd rather not as it makes a bit of a mess of passing data in and out.

    Well.

     

    A DLL is a self-contained list of compiled executable functions (they are not equivalent to programs). If you call a function using the CLFN, I don't think it has anything to do with any libraries you have placed on the block diagram (scope-wise). As long as the function exists in the DLL and there is only one DLL with that name, that is all that is required (parameter lists, of course, need to be considered). Unless you are dynamically loading an external lvlib from the DLL (which is a bit silly), I don't really understand the question.

     

    DLLs are meant to make programs modular, just as lvlibs are. It tends to be one or the other, with lvlibs being native to LabVIEW. If you have compiled an lvlib into a DLL, then your EXE will use whatever version of the lvlib you compiled the DLL with (your program only knows the function name and the parameters to pass). Replace the V1 DLL with the V2 DLL and your program will not know much is different unless the parameter lists have changed for the function calls. That's the whole point of them: so you can update/modify parts of the code without affecting/recompiling everything else.

     

    That said, there are a couple of caveats that are peculiar to LabVIEW DLLs specifically. Rolf has outlined a bit of it, in that LabVIEW DLLs have a huge dependency on the run-time and it's not easy to know what those dependencies are. So you can find yourself a while down the road backed into a corner, installing every version of the LabVIEW run-time known to man to keep the various bits of your "modular" code that you have developed over several LV versions working, and wondering why, this time around, it runs like a slug.

     

    You also lose your cross-platform capabilities! (You cannot create .so files or dylibs/frameworks with LabVIEW.)

     

    My advice is: don't use LabVIEW DLLs unless it's for use in another programming language and your options to provide an interface for them are limited. Other languages' executables don't have the same dependencies as LabVIEW executables, so they are less likely to run into version problems between the DLL and the program itself.
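
    For that "DLL for use in another language" case, this is roughly what the consumer side looks like; the DLL path, exported function name and signature below are invented for the example, and the matching LabVIEW run-time still has to be installed on the target machine:

        import ctypes

        # Use ctypes.WinDLL instead if the DLL was built with the stdcall convention.
        lib = ctypes.CDLL(r"C:\path\to\SharedLibrary.dll")       # placeholder path
        lib.ComputeChecksum.argtypes = [ctypes.c_char_p, ctypes.c_int32]
        lib.ComputeChecksum.restype = ctypes.c_uint32            # hypothetical export

        result = lib.ComputeChecksum(b"some data", 9)
        print(result)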

  16. Hi Manual,

     

    thanks for the report; the last commit by James should fix this.

    Shaun, I haven't seen this; using a quite large JSON set and parsing it twice in parallel, both methods took the same time.

     

    Regards,

     

    Ton

     

    It seems to be blocking with arrays.

     

    Here's an example to demonstrate. The text file is made up of giant arrays, so it makes it much more obvious (takes about 0.4 minutes to execute).

     

    I'll take a closer look on Sunday. A bit strapped for time at the moment.

    Hi again

     

    Regarding the post above I took the liberty of adding the functionality to the project.

    I have attached a zip file containing only the VIs I have added or changed (folder structure has been kept) - feel free to add it to the project if you want.

     

    Changes:

     + Added VIs for updating an existing JSON Collection Object value - by Array of names

     + Added VIs for removing a JSON Collection Object - by Array of names

     * Changed polymorphic Set VI to include new features; the 'Set Object' function has been made into a submenu.

     

    Best Regards

    Stinus

     

    Sweet.

    I'll take a gander later.

  17. Version 3 is still using the First Call and the feedback nodes. This is causing your program to hang when you call Init a second time. Just throw a For Loop around your test program and wire a 2 to the N terminal. Run the VI. It won't finish. The First Call value and the Feedback Node value have to be part of the pointer block. End of story.

     

    Also, you crash if a pointer block that has never been initialized is passed in... need to check for NULL on every call. Test this by just hitting Run on your Read or your Write VI. LV goes away.  You've got guards on DeInit ... need same guards on read/write.

     

    I fixed both the hang and the crash on never initialized.

     

    Found another crash scenario -- DeInit called twice on the same pointer block. I can't do anything about that or the crash caused by DeInit being called while Read or Write are still running unless/until we figure out how to generate a CAS instruction or some other equivalent atomic instruction.

    [Attachment: Circular_BufferV4_AQ.zip]

    Indeed. There is zero error checking in the prototyping beyond what prevented me from achieving the feasibility testing (that's one of the reasons why I call it prototyping rather than alpha or beta). Now that I have piqued interest, we can start productionising and locking down the robustness. We also need to do the standard checks, like ensuring that allocation actually succeeds, etc. I'll add some of your changes to the next release.

    There is no point in adding a first-call flag to the pointer block. That would require shift registers in the user's code. Although you have added it, you haven't actually used it, have you?

     

    Wow, you two got way ahead of my ability to follow this thread, took me a while to catch up. Needless to say my playing about is way behind what you have been thinking about.

     

     

    This is a very interesting problem. Well, for me it is. While a DSCheckPtr call would help in that specific case, it wouldn't likely be robust if LabVIEW is in the habit of regularly recycling memory, as might be done in a real application. The check is near useless if you don't check the pointer under some kind of lock -- there's a race condition, because who is to say the pointer wasn't released after you check but before you operate on the pointer? It's easy to see if you do the check before entering the loop and have to do a significant wait, but even if you check it on every iteration there's still the possibility of the pointer being released between calls.

     

    What if every set of pointers had an additional sentinel pointer allocated? The value in this sentinel would tell us if the rest of the pointers were still usable. When uninitialize is done, all the pointers are released except the sentinel, which is instead operated on to guard against the rest of the structure being used. However, this causes a memory leak: we need some way to release this sentinel pointer. Is there a way to register a callback with LabVIEW such that when whatever VI hierarchy started this whole mess goes idle, we can invoke some code to release the sentinel? I imagine registering sentinel pointers somewhere, and when the callback is invoked, releasing them.

     

    The issue of the pointer being released while a read/write is stuck in its polling loop also needs to be addressed. If someone splits a wire and manages to block a read/write call while uninit is called, bad things will happen. We may have to build a lock into read/write that is shared with uninit. Don't panic, I don't mean a traditional LabVIEW lock -- I think we can do this with another pointer. Here's my logic. Say we have our private data as something like this (ignoring the buffer proper, since it's not part of the discussion):

    <snip>

    Of course for any of that to work, we need atomic operations on the move/swap calls. Rolf's earlier statements worry me that we don't have that. Is there some low level function/instruction we have in LabVIEW that can be used to implement something like this? I've never delved so greedily in to the depths of LabVIEW before...

    I think I will pretty much be echoing what others are saying when I say DSCheckPtr isn't even worth being a function. The only thing it seems to check is that the pointer is non-null (pass in "1" and it will pass). Not surprising really; it's the same problem in C.

    Any pointer checking is probably not going to yield a robust method. The method I described earlier works really well. It doesn't rely on the readers or writers trying to detect pointers; it relies on not being able to deallocate until everything has finished using them. This raises a slightly different issue, though.

    It works great for asynchronous deallocation as long as the readers and writers are able to propagate their state. In a while loop this is fine, as an extra iteration is possible so that they can read the registration bit and unset their active bit. It is not so good for the fixed For Loops, though, as the extra iteration cannot happen if you wait until all have completed their full number of iterations (it works OK before then, though).

  18. Of course, LV itself is already pretty strict, with the exception of that one pesky feature which will freely change wire types automatically and actually decide *at run-time* to run a completely different function from the one that's actually on the diagram. Maybe that feature set should be removed from the language. :throwpc:
    +1. There was no such thing as a run-time error until that came in ;)
  19. Ooooh. I missed this bit.

    I'm not sure whether this is answering my modified readers-list suggestion (with the manager's bit AND the readers' bit) or not, so I'll plough ahead with the assumption that it is in response to that scenario (apologies if that's not the case).

     

    Let's see...

     

     

    We have Read 0, Read 1, Writer, and Deinit all executing at the same time.

     

    Here's one possible execution order...

     

    Deinit sets the "NowShuttingDown" bit to high. Then Deinit does an atomic compare-and-swap for the WriteIsActive bit (in other words, "if WriteIsActive bit is false, set it to true, return whether or not the set succeeded". It succeeds. Writer now tries to do an atomic compare-and-swap for the WriteIsActive, discovers the bit is already high and so returns an error -- it decides between "WriterAlreadyInUse" and "ReferenceIsStale" by checking the "NowShuttingDown" bit. Then Read1 does an atomic compare-and-swap on Read1IsActive. DeInit then tries to do the same and discovers the bit is already high, so it goes into a polling loop, waiting for the bit to go low. Read1 finishes its work and lowers the bit. Possibly this repeats several times because LV might be having a bad day and we might get around to several calls to Read1 before the timeslice comes down right to let the DeInit proceed (possible starvation case; unlikely since Writer has already been stopped, but it highlights that had this been the Writer that got ahead of DeInit, we might keep writing indefinitely waiting for the slices to work out). But let's say eventually DeInit proceeds and sets Read1IsActive high. The next Read1 that comes through errors out. Having now blocked all the readers and the writer, DeInit deallocates the buffer blocks and index blocks, but not the block of Boolean values. Any other attempts to read/write will check the Booleans, find that the ops are already in use and then check the NowShuttingDown bit to return the right error. (Note that they can't check NowShuttingDown at the outset because they do not set that bit, which means there'd be an open race condition and a reader might crash because DeInit would throw away its buffer while it is reading.)

     


    The situation above is pretty standard ops control, provided you know that the Boolean set will remain valid. If you're OK with leaving that allocation in play for as long as the application runs (without reusing it the next time Init gets called -- once it goes stale it has to stay stale, or you risk contaminating pointers that should have errored out with the next iteration's buffers) then I think this will work.

     

    No. Read 1 and DeInit are not operating on the same memory location. Therefore there is no requirement for CAS and, as only one writer can write to any bit, there is no race condition. Each writes to its own bit associated with each reader (the regman writes to several locations in the list, but it is still the only writer to those locations; the readers each handle their own "active" bit, so they each are still the only writer for their bit in the list). The writer only reads both bits for all readers to determine:

     

    a) Is this reader to be included in the lowest check? (Only the registration manager's bit is important for this.)

    b) Are there any readers at all? (All the regman's bits are false AND all the readers' bits are false) -> exit and set "Finished".

     

    In this scenario, it is the writer that, on exit, signals that everything is OK to kill the pointers (it is monitoring all the bits), not DeInit. DeInit unregisters all the readers, then waits for the "finished" bit to go high, then it deallocates. The "active" bit in each reader is a proxy for the regman's bit, in that the regman's bit is read and its value is then written to the "active" bit for that reader on exit (no other reads/writes happen after). When the "finished" bit goes high, all readers and the writer have already exited and will not be reading/writing to any pointers.
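
    To make the "one writer per bit" bookkeeping concrete, here is a rough single-threaded Python sketch (plain booleans stand in for the bit list; it does not model real shared-memory atomics, and all the names are invented for the illustration):

        class ReaderSlot:
            def __init__(self):
                self.registered = False  # written ONLY by the registration manager
                self.active = False      # written ONLY by the reader that owns the slot

        class BufferControl:
            def __init__(self, num_readers):
                self.slots = [ReaderSlot() for _ in range(num_readers)]
                self.finished = False    # written ONLY by the writer

            # Registration manager side.
            def register(self, i):
                self.slots[i].registered = True

            def unregister(self, i):     # called by DeInit for every reader
                self.slots[i].registered = False

            # Reader side: on each pass it copies the regman bit into its active bit,
            # so once unregistered it marks itself inactive and stops touching pointers.
            def reader_pass(self, i):
                slot = self.slots[i]
                slot.active = slot.registered
                return slot.active       # False means "do not read any more"

            # Writer side: it only READS the readers' bits; when nobody is registered
            # or active it raises "finished", which is DeInit's cue to deallocate.
            def writer_pass(self):
                if not any(s.registered or s.active for s in self.slots):
                    self.finished = True
                return not self.finished

        # The ordering DeInit relies on:
        ctl = BufferControl(2)
        ctl.register(0); ctl.register(1)
        ctl.reader_pass(0); ctl.reader_pass(1)  # readers mark themselves active
        ctl.unregister(0); ctl.unregister(1)    # DeInit pulls the registrations
        ctl.reader_pass(0); ctl.reader_pass(1)  # readers notice and go inactive
        ctl.writer_pass()                       # writer sees no readers, sets "finished"
        print(ctl.finished)                     # True -> only now deallocate the pointers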

     

    I'm hoping to have a knock-up of the registration over the weekend to test the idea and see what I run into (depends how drunk I get on Saturday night; hangovers don't get worse as you get older, they just get longer :) ).
