Posts posted by ShaunR
-
Option 3.
The server has everything LabVIEW and the client is a browser with JavaScript prompts and interfaces. You wouldn't even need a LabVIEW run-time on the clients.
-
Yup. I've seen this not only with dynamic formatting but also in normal usage. It doesn't seem to happen with the silver indicator (but does with the silver frame-less). Have you tried to defer the update?
-
Well. Some good news and some not so good news. But the "not so good news" is only a little bit not so good
First the good news.
I couldn't find MJE's interactive DOS thingy, but I found Rolf's pipes library (I don't think it ever got out of alpha, but it works).
You can use this to run Plink.exe interactively instead of sending a command file (just remove the -m switch). I've just used it on my VPS and it was fine apart from some dodgy control characters that would need to be filtered, but I started an interactive session and was listing directories, sending commands and reading output.
Now for the not so good news:
Rolf wrapped the Windows pipes in a DLL that is 32 bit. So flash a bit of leg and maybe post some funny cat images and he might compile a 64 bit version for you.
If Rolf does give you a 64 bit DLL, then I'll post a usage example to help you on your way.
-
The examples tend to send the username and password in plain text.
This is not correct. SSH does encrypt the passwords etc., but it is still a much better idea to use public key authentication and turn off password access, as keys are effectively impossible to brute force.
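On the server side (the linux bods' job again), turning off password logins usually comes down to a couple of sshd_config lines, something like the following (exact directives depend on the OpenSSH version), followed by restarting the SSH daemon:
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes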
-
I asked the guy at Labwerx about making a 64-bit version and there was a bit of whinging about how much time it would take, how much testing there would be, etc. Maybe I should volunteer to be a beta tester. But you never really know if there's a real company behind products like that or it's just some guy in his basement doing this in his spare time.
Nothing wrong with a guy in his basement offering software.
It's difficult for defence companies, however, which have preferred suppliers and require dedicated support channels. The way I usually get round this sort of thing is to offer a short-term contract; that way it guarantees the contractor a wage for their work (some people ask for stuff then say nah, don't need it now) and it is an acceptable business interaction from the company's point of view.
Unfortunately I can't just not use telnet -- telnet has to be removed from the system. But that may not be a problem... (see part B)
A) current problem
If I understand you (and the putty doc) correctly, if I want to talk to 192.168.3.24:5678 on a remote machine, I can run something like the following:
c:\temp\putty\plink.exe -ssh -L 127.0.0.1:1234:192.168.3.24:5678 -i c:\temp\putty\putty_key_noPassword.ppk
then just set up my telnet session to talk to 127.0.0.1:1234, and putty takes care of getting the messages to/from the linux box via ssh. Yes?
Last but not least, I assume I'm issuing the above command with "System Execute.vi"?
Yes. But it is even easier than that. Just set everything up as a profile in PuTTY (being a point-and-clicky person I find a GUI is much better) then, when you need it, just run putty.exe -load myprofile (with system exec).
B) just semantics?
I just wandered thru the LV telnet commands and want to know if I'm missing something...
It looks like all they are doing is setting up and monitoring a TCP connection to port 23. So in regular usage the connection is actually to some remote box that has a telnet server listening to port 23.
But now instead of a telnet server, I will be talking to putty. So, in essence, I'm not running telnet on my machine, I'm just using the telnet mechanism to open a plain ole tcp connection on my own machine to putty and format writes/reads between my code and the remote machine.
Am I off base here, or if I do all of the above (and it works!) am I actually *not* using telnet?
Well. Telnet is a protocol but it is also used to refer to the client. Windows 7 doesn't come with a telnet client installed by default any more (it is called "telnet" and has to be installed with add/remove programs) but the term is used interchangeably.
I don't find it very useful to get into semantic arguments with IT people, who tend to be very anal, have no sense of humour and are completely unreasonable when it comes to their network. Just because you're not running the Windows telnet client and are using LabVIEW, it still means you are running a telnet client, if they want to be difficult. I would speak to the person that issued the diktat and ask if it's acceptable to use the telnet protocol over SSH, as LabVIEW has no native support for the SSH protocol and no 3rd parties have an acceptable solution.
If they are only referring to removing the telnet client, then this is fine and would work. Be careful though. If they also say you need to remove the telnet server from the nix box, you will be stuffed.
So far, I think the best of the evils (if PuTTY doesn't do it) is the python approach, since jzoller knows of a python SSH library. If you have plenty of C coders that need to justify their existence you could get them to port OpenSSH to windows (and compile it for 64-bit LabVIEW). LabVIEW sucks at anything that has encryption/compression in it.
-
I looked up Plink, and what documentation I found says several times "Plink is probably not what you want if you want to run an interactive session in a console window." I don't want to run an interactive session in a console window, but I do want to run a sorta interactive session from LabVIEW.
It looks like what you've sent me is just how to run a one-time script. Is that correct?
Yes.
I need to maintain a connection thru several command/response cycles where the next command may be dependent on the response received (all automated -- there is no user intervention other than selecting the box to talk to). I do this with telnet now -- it's just finding some way to do the session connect/maintain part with ssh versus telnet that I'm stuck on.
This isn't easy in LabVIEW as the only interface you have is the one-shot Shell Command. MJE did put together an interactive CMD session that could be used for this purpose (not sure where it is though). However, you do not necessarily need to keep the link open to be able to achieve a pseudo interactive session, but there is probably a simpler way and it's what putty was really designed for.
By far the easiest solution for you would be to tunnel your telnet through SSH (you can tunnel anything through SSH). Create a "local proxy" that your telnet client connects to on your local machine (127.0.0.1) and it will be completely transparent to your telnet client, but secure. You wouldn't even have to write any code.
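As a purely illustrative example (the host, ports and key path below are made up, not from the original posts), the "local proxy" is just a plink forwarding rule, something like:
c:\temp\putty\plink.exe -ssh -N -L 127.0.0.1:2323:192.168.3.24:23 -i c:\temp\putty\putty_key_noPassword.ppk user@myserver.com
Your existing telnet VIs then open their TCP connection to 127.0.0.1:2323 and plink carries everything to port 23 on the remote box over the encrypted session.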
As a slight tangent. I also looked at that software package you mentioned earlier a while back. It looked great but relies on a DLL and is just a thin wrapper around that (not cross platform, so no good to me). You could get them to compile the DLL for 64 bit so that you could use it (or you could use LV 32 bit).
The long and the short of it is that there is no native SSH library support on windows (in any language that I know of - maybe Rolf has something) and the only people that really care about it are linux bods, since it is only really of any use to connect to linux boxes. Most windows users avoid linux like the plague and get by, when they need it, by tunneling with PuTTY. Windows users use VNC or Remote Desktop rather than SSH to talk to remote computers.
(Since you're being helpful, I'm ignoring the slam on us "point and clicky people"
)
I count myself in this category too and it annoys the hell out of linux bods when I say to them that all flavours of desktop Linux are themed Windows 95 with an encrypted DOS prompt.
If you want to really annoy them, when they rattle off a command line (because every Linux user has an eidetic memory and therefore anyone that doesn't is an amoeba to be treated with scorn), just say to them "you lost me when you said 'type in'".
-
Flip (mirror) the image.
-
What's the issue with PuTTY? I can connect to my VPS and execute commands with LabVIEW without any problems. (Maybe they are hesitant because you don't see anything in the DOS prompt, as it is redirected to the Shell Execute.)
An important point, however......
The examples tend to send the username and password in plain text. Do not do this! Instead you need to create a private and public key pair (not very easy for point and clicky people; best to get the linux bods to do it for you and give you the private key - make sure they give you one that doesn't require a password) and tell the SSH server on the remote machine (again, linux bods) to only accept that key (or a couple of them if you have multiple users). Then putty will do a secure authenticated login using the keys.
This is the command line you will need to execute (note the ppk key file). The -P parameter is the port number which should be non-default for good measure, but you can leave it out while you are getting it to work on the default port.
C:\temp\putty\plink.exe myserver.com -P 32998 -ssh -i C:\Temp\putty\putty_key_noPassword.ppk -m C:\Temp\putty\scriptfile.txt
Note: all my files including putty are in a c:\temp\putty directory
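For what it's worth, scriptfile.txt is nothing special - just the shell commands you want run on the remote box, one per line. A made-up example:
uname -a
df -h
tail -n 20 /var/log/syslog
Plink runs them over the encrypted session and whatever they print comes back on standard output.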
-
"Set From JSON String.vi", "Name.vi" and "First Char.vi" are not set to re-entrant.Hi Manual,thanks for the report, the last commit by James should be fixing this.
Shaun, I haven't seen this, by using a quite large JSON set, and parsing it twice in parallel, both methods can the same time.
Regards,
Ton
Just downloaded the latest version and "Set From JSON String.vi" now seems to be reentrant which cures the blocking. The others are still not re-entrant though.
-
Indeed, that's exactly what we are doing by breaking out this code to a stand-alone component, be it a DLL or EXE. It will be self-contained with its own versions of whatever it needs.
The DLL is nice in that it allows relatively easy passing of parameters between the new and legacy code. If we go an EXE route, we need to create some sort of wrapper to pass things out since we don't have access to stdout/errout or the exit code in LabVIEW (passing things in is easy via the command line).
DLLs cannot be run standalone. You always need a host application.
Why do you need a parameter interface between "new and legacy" code? What I was suggesting is you just pass in a filename to process and a filename for output (cmd line parms) and hey presto, you get the db that your real application can use. You can put it on your website for people to download if they need it (perhaps bundle it with the main app to begin with) and once their files have been converted, they will no longer require it at all. It never needs to clutter your application code-base; rather, it is a separate project as a bridge to deprecation of the old file format.
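Purely as a sketch of that command-line contract (everything here is invented - the converter itself could just as well be built in LabVIEW), the whole interface is two arguments in and an exit code out:

#include <stdio.h>

/* Placeholder "conversion": just copies the legacy file to the output path.
   A real converter would parse the old format and write the new db here. */
static int convert_legacy_file(const char *in_path, const char *out_path)
{
    FILE *in = fopen(in_path, "rb");
    if (!in) return -1;
    FILE *out = fopen(out_path, "wb");
    if (!out) { fclose(in); return -1; }
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: converter.exe <legacy-file> <output-db>\n");
        return 1;
    }
    /* The exit code is the only status the calling application needs to check. */
    return convert_legacy_file(argv[1], argv[2]) == 0 ? 0 : 2;
}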
-
definitely don't want to have to distribute multiple LabVIEW, MSVC, and SQLite redists, so I suppose this component would have to be recompiled in whatever flavor of run-times I'm using at the time. However the recompile would still be from legacy versions of our source code, and could be tested with legacy unit tests. Our core development would still be free from having to support the old API we were using in this legacy code.
A common way of solving this with the minimum effort is to create a conversion tool. It only needs to be created once, then you can forget about it and eventually you will no longer need it. Some even offer the feature as a menu option in the main application which just invokes the converter. If it's easy to spit out your custom format as a delimited text file or a number of files, you can easily import them with the API File tools (2 steps rather than one, but may be a lot easier).
-
Shaun, in theory you are right. In practice, a LabVIEW DLL is a C wrapper for each function that invokes the corresponding pre-compiled VI inside the DLL. As such there needs to be some runtime support to load and execute these VIs. This usually happens inside the corresponding LabVIEW runtime, which is launched from the wrapper. Some kind of Münchhausen trick really. However, at least in earlier versions of LabVIEW, if the platform and LabVIEW version of the compiled DLL were the same as the calling process, then the wrapper invoked the VIs inside the DLL directly in the context of the calling LabVIEW process.
Perhaps I didn't make it clear. I was not suggesting that other languages don't need the run-time, just that they only need the one run-time as opposed to, say, a LabVIEW 2011 exe with a 2009 DLL, which needs two, I believe.
-
I’m about to go on holiday for two weeks, so I’m going to post a CR version with just the bug fixes made by James McNally, leaving other issues for later.
Why not just wait until you come back when the others will have been addressed too?
-
I've never built a DLL using LabVIEW before, but am starting to think this may be the way to go for a component I'm working on. However I have one concern where my google-fu is failing me and thought I'd lob this one over the fence since at least one of the LAVA gurus here likely knows the answer.
Consider this situation:
My DLL is built and includes SharedLibrary.lvlib (version 1).
My EXE is built and includes SharedLibrary.lvlib (version 2).
That is, both the DLL and EXE reference the same library, albeit incompatible versions. Each fully includes all the dependencies they need to run on their own. With respect to each library version, the namespaces are identical -- that is, we don't have SharedLibraryVersion1.lvlib and SharedLibraryVersion2.lvlib, but two different versions of SharedLibrary.lvlib.
Now let's say my EXE needs to dynamically load my DLL: Do I have a problem? Am I going to run into any weird namespace collision issues? I would hope everything should be locked properly behind their respective boundaries, but...
If this is a problem it's no big deal, I could always change the DLL into another executable, but I'd rather not as it makes a bit of a mess of passing data in and out.
Well.
A DLL is a self contained list of compiled executable functions (they are not equivalent to programs). If you call a function using the CLFN I don't think it has anything to do with any libraries you have placed on the block diagram (scope-wise). As long as the function exists in the DLL and there is only one DLL with that name, that is all that is required (parameter lists of course need to be considered). Unless you are dynamically loading an external lvlib from the DLL (which is a bit silly), I don't really understand the question.
DLLs are meant to make programs modular just as lvlibs are. It tends to be one or the other, with lvlibs being native to LabVIEW. If you have compiled an lvlib into a DLL, then your exe will use whatever version of the lvlib you compiled the DLL with (your program only knows the function name and parameters to pass). Replace the V1 DLL with the V2 DLL and your program will not know much is different unless the parameter lists have changed for the function calls. That's the whole point of them - so you can update/modify parts of the code without affecting/recompiling everything else.
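As a rough illustration of that point (the DLL and function names below are invented), this is essentially what the CLFN does for you - resolve an exported symbol by name and call it, knowing nothing about which library version the DLL was built from:

#include <windows.h>
#include <stdio.h>

/* The calling exe only knows the exported name and the parameter list. */
typedef int (__cdecl *ProcessDataFn)(const double *in, int len, double *out);

int main(void)
{
    HMODULE dll = LoadLibraryA("SharedLibrary.dll");   /* only the file name matters */
    if (!dll) { fprintf(stderr, "DLL not found\n"); return 1; }

    ProcessDataFn fn = (ProcessDataFn)GetProcAddress(dll, "ProcessData");
    if (!fn) { fprintf(stderr, "export not found\n"); FreeLibrary(dll); return 1; }

    double in[4] = {1, 2, 3, 4}, out[4];
    int rc = fn(in, 4, out);   /* works with V1 or V2 of the library as long as the prototype is unchanged */
    printf("rc = %d\n", rc);

    FreeLibrary(dll);
    return 0;
}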
That said... there are a couple of caveats that are peculiar to LabVIEW DLLs specifically. Rolf has outlined a bit of it in that LabVIEW DLLs have a huge dependency on the run-time and it's not easy to know what those dependencies are. So you can find yourself a while down the road backed into a corner, installing every version of the LabVIEW run-time known to man to keep the various bits of your "modular" code that you have developed over several LV versions working, and wondering why, this time around, it runs like a slug.
You also lose your cross-platform capabilities! (You cannot create .so or dylibs/frameworks with LabVIEW.)
My advice is don't use LabVIEW DLLs unless it's for use in another programming language and your options to provide an interface for them are limited. Other languages' executables don't have the same dependencies as LabVIEW executables, so they are less likely to run into version problems between the DLL and the program itself.
-
Hi Manual,
thanks for the report, the last commit by James should be fixing this.
Shaun, I haven't seen this; using a quite large JSON set and parsing it twice in parallel, both methods take the same time.
Regards,
Ton
It seems to be blocking with arrays.
Here's an example to demonstrate. The text file is made up of giant arrays so it makes it much more obvious (takes about .4 mins to execute)
I'll take a closer look on Sunday. A bit strapped for time at the moment.
Hi again
Regarding the post above, I took the liberty of adding the functionality to the project.
I have attached a zip file containing only the VI's I have added or changed (folder structure has been kept) - feel free to add it to the project if you want.
Changes:
+ Added VI's for updating an existing JSON Collection Object value - by Array of names
+ Added VI's for removing a JSON Collection Object - by Array of names
* Changed polymorphic Set VI to include new features .. the 'Set Object' function has been made into a submenu..
Best Regards
Stinus
Sweet.
I'll take a gander later.
-
Version 3 is still using the First Call and the feedback nodes. This is causing your program to hang when you call Init a second time. Just throw a For Loop around your test program and wire a 2 to the N terminal. Run the VI. It won't finish. The First Call value and the Feedback Node value have to be part of the pointer block. End of story.
Also, you crash if a pointer block that has never been initialized is passed in... need to check for NULL on every call. Test this by just hitting Run on your Read or your Write VI. LV goes away. You've got guards on DeInit ... need same guards on read/write.
I fixed both the hang and the crash on never initialized.
Found another crash scenario -- DeInit called twice on the same pointer block. I can't do anything about that or the crash caused by DeInit being called while Read or Write are still running unless/until we figure out how to generate a CAS instruction or some other equivalent atomic instruction.
Indeed. There is zero error checking in the prototyping beyond what prevented me from achieving the feasibility testing (that's one of the reasons why I call it prototyping rather than alpha or beta). Now I have piqued interest, we can start productionising and locking down the robustness. We also need to do the standard checks like ensuring that allocation actually succeeds etc. I'll add some of your changes to the next release.
There is no point in adding a first call flag to the pointer block. That would require shift registers in the user's code. Although you have added it, you haven't actually used it, have you?
I think I will pretty much be echoing what others are saying when I say DSCheckPtr isn't even worth being a function. The only thing it seems to check is that it is not null (pass in "1" and it will pass). Not surprising really, it's the same problem in C.
Wow, you two got way ahead of my ability to follow this thread, took me a while to catch up. Needless to say my playing about is way behind what you have been thinking about.
This is a very interesting problem. Well, for me it is. While a DSCheckPtr call would help in that specific case, it wouldn't likely be robust if LabVIEW is in the habit of regularly recycling memory as might be done in a real application. The check is near useless if you don't check the pointer under some kind of lock -- there's a race condition because who is to say the pointer wasn't released after you check but before you operate on the pointer? It's easy to see if you do the check before entering the loop and have to do a significant wait, but even if you check it on every iteration there's still the possibility of the pointer being released between calls.
What if every set of pointers had an additional sentinel pointer allocated? The value in this sentinel would tell us if the rest of the pointers were still usable. When uninitialize is done, all the pointers are released except the sentinel, which is instead operated on to guard against the rest of the structure being used. However this causes a memory leak: we need some way to release this sentinel pointer. Is there a way to register a callback with LabVIEW such that when whatever VI hierarchy started this whole mess goes idle, we can invoke some code to release the sentinel? I imagine registering sentinel pointers somewhere, and releasing them when the callback is invoked.
The issue of the pointer being released while a read/write is stuck in its polling loop also needs to be addressed. If someone splits a wire and manages to block a read/write call while uninit is called bad things will happen. We may have to build a lock into read/write that is shared with uninit. Don't panic, I don't mean a traditional LabVIEW lock-- I think we can do this with another pointer. Here's my logic. Say we have our private data as something like this (ignoring the buffer proper since it's not part of the discussion):
<snip>
Of course for any of that to work, we need atomic operations on the move/swap calls. Rolf's earlier statements worry me that we don't have that. Is there some low level function/instruction we have in LabVIEW that can be used to implement something like this? I've never delved so greedily in to the depths of LabVIEW before...
Any pointer checking is probably not going to yield a robust method. The method I described earlier works really well. It doesn't rely on the readers or writers trying to detect pointers. It relies on not being able to deallocate until everything has finished using them. This raises a slightly different issue though.
It works great for asynchronous deallocation as long as the readers and writers are able to propagate their state. In a while loop this is fine as an extra iteration is possible so that they can read the registration bit and un-set their active bit. Not so good for the fixed for-loops though as the extra iteration cannot happen if you wait until all have completed their full number of iterations (works ok before then though).
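For reference, the kind of atomic test-and-set being discussed is a one-liner in C on Windows. This is only a sketch (the wrapper names are made up, and nothing equivalent is exposed on the LabVIEW diagram):

#include <windows.h>
#include <stdbool.h>

/* Atomically: if *flag is 0, set it to 1 and report that we acquired it;
   otherwise leave it alone and report failure. */
static bool try_set_active(volatile LONG *flag)
{
    return InterlockedCompareExchange(flag, 1, 0) == 0;
}

static void clear_active(volatile LONG *flag)
{
    InterlockedExchange(flag, 0);   /* release, with a full memory barrier */
}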
-
+1. There was no such thing as a run-time error until that came in
Of course, LV itself is already pretty strict, with the exception of that one pesky feature which will freely change wire types automatically and actually decide *at run-time* to run a completely different function from the one that's actually on the diagram. Maybe that feature set should be removed from the language.
-
Ooooh. I missed this bit.
I'm not sure if I am now answering my modified readers list suggestion with the manager's bit AND the readers' bit or not. So I'll plough ahead with the assumption that this is in response to that scenario (apologies if that's not the case).
Let's see...We have Read 0, Read 1, Writer, and Deinit all executing at the same time.
Here's one possible execution order...
Deinit sets the "NowShuttingDown" bit to high. Then Deinit does an atomic compare-and-swap for the WriteIsActive bit (in other words, "if WriteIsActive bit is false, set it to true, return whether or not the set succeeded". It succeeds. Writer now tries to do an atomic compare-and-swap for the WriteIsActive, discovers the bit is already high and so returns an error -- it decides between "WriterAlreadyInUse" and "ReferenceIsStale" by checking the "NowShuttingDown" bit. Then Read1 does an atomic compare-and-swap on Read1IsActive. DeInit then tries to do the same and discovers the bit is already high, so it goes into a polling loop, waiting for the bit to go low. Read1 finishes its work and lowers the bit. Possibly this repeats several times because LV might be having a bad day and we might get around to several calls to Read1 before the timeslice comes down right to let the DeInit proceed (possible starvation case; unlikely since Writer has already been stopped, but it highlights that had this been the Writer that got ahead of DeInit, we might keep writing indefinitely waiting for the slices to work out). But let's say eventually DeInit proceeds and sets Read1IsActive high. The next Read1 that comes through errors out. Having now blocked all the readers and the writer, DeInit deallocates the buffer blocks and index blocks, but not the block of Boolean values. Any other attempts to read/write will check the Booleans, find that the ops are already in use and then check the NowShuttingDown bit to return the right error. (Note that they can't check NowShuttingDown at the outset because they do not set that bit, which means there'd be an open race condition and a reader might crash because DeInit would throw away its buffer while it is reading.)
The situation above is pretty standard ops control provided you know that the Boolean set will remain valid. If you're ok with leaving that allocation in play for as long as the application runs (without reusing it the next time Init gets called -- once it goes stale it has to stay stale or you risk contaminating pointers that should have errored out with the next iteration's buffers) then I think this will work.
No. Read 1 and DeInit are not operating on the same memory location. Therefore there is no requirement for CAS and, as only one writer can write to any bit, there is no race condition. Each writes to its own bit associated with each reader (the regman writes to several locations in the list but it is still the only writer to those locations; the readers each handle their own "active" bit so they each are still the only writer for their bit in the list). The writer only reads both bits for all readers to determine:
a) Is this reader to be included in the lowest check (only the registration manager's bit is important for this)?
b) Are there any readers at all (all the regman's bits are false AND all the readers' bits are false)? -> exit and set "Finished".
In this scenario, it is the writer that, on exit, signals everything is OK to kill the pointers (it is monitoring all the bits), not deinit. Deinit unregisters all the readers, then waits for the "finished" bit to go high, then it deallocates. The "active" bit in each reader is a proxy for the regman's bit, in that the regman's bit is read and then the value written to the "active" bit for that reader on exit (no other reads/writes happen after). When the "finished" bit goes high, all readers and the writer have already exited and will not be reading/writing to any pointers.
I'm hoping to have a knock-up of the registration over the weekend to test the idea and see what I run into (depends how drunk I get Saturday night - hangovers don't get worse as you get older, they just get longer
).
-
"one implementation pattern to rule them all," then no, to the best of my knowledge nobody has found one.
The "ball-of-mud" is the one pattern that rules them all
Just don't use it on any of my projects
-
You Init. Then you Uninit. Then you take that same already-uninitialized pointer block and wire it into a Read or a Write. How does the Read or Write know that the pointer block has been deallocated and it should return an error?
As it stands. Yes. The pointer cluster is "on-the-wire" and when we deinit we just leave the cluster alone. But it doesn't have to be (just makes it easier to debug). If I were to start on a "Classic LabVIEW" API, I would probably also shove the pointer cluster into memory and you wouldn't need a "reference" wire at all, hell, even a global variable to contain them (no "not allocated" issues then). With the former, I might use the "CheckPointer" call if it wasn't too expensive and it does what I think it does as belt and braces with a check for null pointer on the cluster (if the cluster doesn't exist then neither do the others and vice versa). But if you are looking at classes I thought you would want to manage all that in the class.....somehow. If I put everything into memory and handle all the scenarios, there isn't much point in a class at all apart from making people feel warm and fuzzy about OOP.
I think the issue you are probably running into is that you are finding the best means to achieve what you need in a class is a DVR. But you can't use them because of the locking overhead. If you find you are looking to a DVR, then the structure needs to be via the MM functions, as they are basically just a method to provide the same function but without the DVR's locking overhead. Anything else can "probably" be in the class.
Nope... there's no registered Booleans to check if the pointers have been deallocated. So to implement this solution, we would have to say that there's an Init but once allocated, the pointer at least to the Booleans needs to stay allocated until the program finishes running. Otherwise the first operation that tries to check the Booleans will crash. Are you ok with a "once allocated always allocated" approach?
I would be "happy for now". I don't think the unallocated pointers are an insurmountable issue and at worst we just need some state flags. I would come back to it on another iteration to see what needs to change and what the performance impact is of a million flags all over the place.
There's still the problem of setting those Booleans. We'll need a test-and-set atomic instruction for "reader is active" -- I don't know of any way to implement that with the current APIs that LabVIEW exposes.
I don't think we do (need a test 'n set). The premise of the pattern is mutual exclusion through memory barriers only. As long as you only have one writer to any location then no test and set is necessary. As we have solved the issue of accessing individual elements in arrays without affecting others or locking the rest of the array, all we need to ensure is that write responsibility is well defined (only one writer to a single location or block). The only time a test 'n set would be required is if we couldn't guarantee atomic reads and writes of the individual bits (PPC?).
As an aside: anecdotally, it seems writing all contents to a cluster is an atomic operation in LabVIEW and incurs a marginal overhead as opposed to accessing the memory locations independently. The overhead is minuscule in comparison to the extra library calls required to achieve the latter. If this can be confirmed, then it is the fastest way to add atomicity to entire blocks of locations when needed. This is one reason why I use a cluster for the elements in the index array.
-
Looks good to me. Any other changes before I make a new VIPM package?
One thing that I haven't gotten round to yet: if you're operating on a large JSON stream, you cannot process any other JSON streams as it seems to be blocking. I think it just needs some of the subVIs setting to re-entrant but, like I said, I haven't gotten round to looking as yet.
-
A more complete rendition of what's in my head...
You currently have a cluster of pointers. We can move those into a class. We can then make the class look like a reference type (because that's exactly what it is). That's good. And that's all we need to do IF we can solve the "references are not valid any more" problem. The ONLY way I know to do that is with some sort of scheme to check a list to see if the pointers are still valid coupled with a way to prevent a recently deallocated number from coming back into use. That's what LabVIEW's refnum scheme provides. Without such a scheme, a deallocated memory pointer comes right back into play -- often immediately because the allocation blocks are the same size, so those are given preference by memory allocation systems. Thus any scheme like this in my view has to add a refnum layer between it and the actual pointer blocks. The list of currently valid pointers is guarded with a mutex -- every read and write operation at its start locks the list, increments the op count if it is still on the list, releases the mutex, does its work, then acquires the mutex again to lower the op count. The delete acquires the mutex, sets a flag that says "no one can raise the op count again", then releases the mutex and waits for the opcount to hit zero then throws away the pointer block.
That's my argument why we need a refnum layer. There might be a way to implement this without that layer, but I do not know what that would be.
As you know. Any mutexes and we will be back to square one.
I'm still not quite getting it. Why do we need an "op count"? All pointers are considered valid until we unregister all readers and writers. Unregistering a single reader amongst multiple readers doesn't mean we need to free any pointers, as there are only three (the buffer, the reader index array and the Cursor). The only time we deallocate anything is when there are no longer any readers or writers, at which point we deallocate all pointers.
Now, we already have a list of readers (the booleans set to true in the index array) and we know how many (reader count). The adding and removing of readers, i.e. the manipulation of the booleans and the count, is "locked" by the registration manager (non-reentrant VI boundary). So I see the "issue" as: how do we know that any asynchronous readers on the block diagram have read whatever flags and exited, and therefore the writer can now exit and pointers can be deallocated? (The writer only exits when there are no readers - race condition on startup? Will have to play...)
The writer won't read any reader booleans since by this time the registration manager has set the reader count to zero (this order will change with my proposal below, since it will exit before it is zero). So it goes into a NOP state and exits. It can set a boolean that says "I have no readers and have exited". The registration manager already knows there are no readers, so it just needs to know the writer has exited before deallocating pointers.
The readers only need to watch their flag to see if the registration manager has changed it, then go into a NOP state and exit. At this point I can see there might be a scenario whereby the reader has read its flag (which is still OK) and, by the time it gets to reading buffers and writing indexes, the writer has said "I have no readers and have exited". Well, let's put another flag in the reader index array that says "I'm active", which is just a copy of the registration manager's boolean but only written once all memory reads have completed, and causes an exit. Now we have a situation where deallocation can only occur if the booleans controlled by the registration manager are all false (or true, depending on which sense is safer) AND the "I am active" booleans controlled by the individual readers are all false (ditto sense) AND the writer has said "I have no readers and have exited". This decomposes into just the writer saying "I have no readers and have exited", as the writer can read both fields whilst it is doing its thing (it has to read the entire block anyway), AND them together, and exit when they are all false (sense again) and set the "I have no readers and have exited".
So in the end-game: the registration manager unsets all the registered booleans, waits for the "I have no readers and have exited" boolean, then deallocates the pointers. Does this seem reasonable?
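A rough sketch of that handshake in C, just to pin down the single-writer-per-location idea (field names are mine, not from the post, and a real implementation would need proper atomics/barriers rather than volatile):

#include <stdbool.h>

#define MAX_READERS 8

typedef struct {
    volatile bool registered[MAX_READERS]; /* written only by the registration manager */
    volatile bool active[MAX_READERS];     /* each written only by its own reader      */
    volatile bool finished;                /* written only by the writer               */
} control_block;

/* Reader i, once per iteration: copy the regman's view into its own bit,
   then only carry on if it is still registered. */
static bool reader_should_run(control_block *cb, int i)
{
    cb->active[i] = cb->registered[i];   /* last write a reader makes before exiting */
    return cb->active[i];
}

/* Writer: keep running while any reader is registered or still active;
   once nothing is left, raise "finished" so deinit can deallocate. */
static bool writer_should_run(control_block *cb)
{
    bool any = false;
    for (int i = 0; i < MAX_READERS; i++)
        any = any || cb->registered[i] || cb->active[i];
    if (!any)
        cb->finished = true;             /* "I have no readers and have exited" */
    return any;
}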
PS: Even if we don't need a refnum layer and we find a way to do this with just the pointers stored in a class' private data, when the wire is cyan and single pixel, many people will still refer to it as a refnum (often myself included) because references are refnums in LabVIEW. The substitution is easy to make. Just sayin'. :-)
Being colour-blind (colour-confused is a better term), I really have no opinion on this.
We offer you the option of crashing because if your code is well written, it is faster to execute than for us to guard against the crash. You're free to choose the high performance route. ;-)
Well. Get rid of the "Check For Errors" page then
(I've never had an error out via this route since about LV 7.1)
running a local VI in a remote application instance
So now you are talking about something completely different. These things are infrastructure topologies (more like Dispatcher) and don't have a lot to do with showing prompts remotely et al. It almost sounds (without knowing exactly what you have at the moment) like you are trying to fit a square peg in a round hole. Distributed systems have to be designed as distributed systems, and if your current software isn't (distributed) then just hacking at it will not be very fruitful IMHO.