Posts posted by ShaunR

  1. > Which method did you use to create the ring buffer?

     

    I haven't gotten as far as actually creating any buffers -- I was just setting up the shell VIs for one writer and two readers.

     

    > Are you saying that merely using the index array primitive destroys the element?

     

    Using Index Array copies the element out of the array. That is *exactly* what made me pause: if at any point you make a copy of the data, you have lost the advantage of this system over the parallel queues. Since you *must* make a copy of the data in order to have two separate processes act on it at the same time, there's no way to get an advantage over the parallel queues mechanism.

     

     If I remember correctly, the Array Subset only keeps track of indexes (it returns a sub-array type). Would this avoid the copy?

    Not sure where "parallel" queues came into it. If we were to try parallel queues (I think you are saying a queue for each read process). Then data still has to be copied (within the labview code) on to each queue? Would you not get a copy at the wire junction at least if not on the queues themselves? This scenario is really slow in LabVIEW. I have to use TCPIP repeaters (one of the things I have my eye on for this implementation) and it is a huge bottleneck since to cater for arbitrary numbers of queues, you need to use a for loop to populate the queues and copies are definately made.

     

     

    Their buffer system appears to take advantage of the fact that you can, in a thread-controlled by-reference environment, act on data in-place. That's impossible in a free-thread by-value environment.

     

    I stopped going any further than that until I clarified my understanding since if these thoughts are correct, there's no possible advantage in LabVIEW with this concept. If I've missed something, I'm happy to re-evaluate.

     I think it will be impossible to see the same sort of performance that they achieve without going to compiled code (there is a .NET and a C++ implementation), and if our only interest is to benchmark it against the LV queue primitives, we aren't really comparing apples with apples (a compiled-code implementation of queues in the LV runtime vs LabVIEW code). However, the principle still stands and I think it may yield benefits for the aforementioned scenarios (the queue-case, for example), so I will certainly persevere.

    Of course. It'd be great if NI introduced a set of primitives in the LV kernel (Apache 2.0 licence I believe ;) )

  2. This intrigued me, so I started to put the code together for this to see how it would work. After just a bit of setting up the VIs, I realized that I can't have two consumers walk the same buffer without making copies of the data for each consumer. That made me fairly certain that this ring buffer idea doesn't buy you anything in LabVIEW. In Java/C#/C++, the items in your queue-implemented-as-ring-buffer might be pointers/references, so having a single queue and having multiple readers traverse down it makes sense -- it saves you duplication. But in LV, each of those readers is going to have to copy data out of the buffer in order to leave the buffer intact for the next reader. That means that you have zero advantage over just having multiple queues, one for each consumer that is going to consume the data. It could save you if you had a queue of data value references, but then you're going to be back to serializing your operations waiting on the DVR to be available.

     

    Am I missing some trick here?

     

     In general, I think it will not realise the performance improvements, for the pointer reasons you have stated (we are ultimately constrained by the internal workings of LV, which we cannot circumvent). I'm sure if we tried to implement a queue in native LabVIEW, it wouldn't be anywhere near as fast as the primitives. That said... a lot of their code seems to be designed around ensuring atomicity. For example, in LabVIEW we can both read and write to a global variable without having to use mutexes (I believe this is why they discuss CAS); LabVIEW handles all that. Maybe there are some aspects of their software (I haven't got around to looking at their source yet) that are redundant due to LabVIEW's machinations... that's a big maybe with a capital "PROBABLY NOT".

     

     I'm not quite sure what you mean about "is going to have to copy data out of the buffer in order to leave the buffer intact for the next reader".

    Are you saying that merely using the index array primitive destroys the element?

     

     I'm currently stuck at the "back-pressure" aspect on the writer, as I can't seem to get the logic right.

     

     Assuming I have the logic right (still not sure), this is one instance where a class beats the pants off classic LabVIEW.

     With a class I can read (2 readers) and write at about 50 µs, but don't quote me on that as I still don't have confidence in my logic (the strange thing is, this slows down to about 1 ms if you remove the error case structures :blink: ). I'm not trying anything complex. Just an array of doubles as the buffer.

     

     DVRs just kill it. Not an option. So it makes classes a bit of a nightmare, since you need to share the buffer off-wire. To hack around this, I went for a global variable to store the buffer (harking back to my old "Data Pool" pattern), with the classes just being accessors (Dependency Barrier?) and storing the position (for the reader).

     

     I should just qualify that time claim in that the class VIs are all re-entrant subroutines (using 2009, so no in-place). Without doing this, you can multiply that figure by about 100.

     

     Which method did you use to create the ring buffer? I'm currently trying the size-mod (power of 2) wrap with the test for a one-element gap. This is slower than checking for overflow and reset, but easier to read whilst I'm chopping things around.
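
     For anyone following along, here's a minimal C sketch of what I mean by the index-mod-size wrap with the one-element gap (taking the size to be a power of two so the modulo could later be swapped for a mask). The names and sizes are illustrative only, and the volatile flags are no substitute for proper memory barriers in a real multi-threaded build.

     /* Single writer / single reader ring buffer: indexes wrap with mod,
        and one slot is left empty so "full" and "empty" are distinguishable. */
     #include <stddef.h>

     #define RB_SIZE 1024u          /* power of two, so % could become & (RB_SIZE - 1) */

     static double rb_data[RB_SIZE];
     static volatile size_t rb_write = 0;   /* next slot to be written */
     static volatile size_t rb_read  = 0;   /* next slot to be read    */

     int rb_put(double value)               /* returns 0 when full (back-pressure) */
     {
         size_t next = (rb_write + 1) % RB_SIZE;
         if (next == rb_read)
             return 0;                      /* only the one-element gap is left */
         rb_data[rb_write] = value;
         rb_write = next;
         return 1;
     }

     int rb_get(double *value)              /* returns 0 when there is nothing new */
     {
         if (rb_read == rb_write)
             return 0;                      /* empty */
         *value = rb_data[rb_read];
         rb_read = (rb_read + 1) % RB_SIZE;
         return 1;
     }

     The overflow-and-reset variant (as I understand it) just lets the indexes free-run and wraps them explicitly; same idea, different wrap test.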

  3. Interesting topic, and I can't say I anticipate being able to formulate a full reply so I'm just going to lob this one over the fence and see where it lands.

     

    I can't say I'm really convinced it's a matter of "Queues vs Ring Buffers". Queues after all provide a whole synchronization layer. Perhaps arguing "Sequential vs Random Access Buffers" might be more apt?

     

     Also, it seems to me the underlying challenge here is how you contend with multiple consumers which might be operating at different rates. Without delving too deep down the rabbit hole, it appears they've done only a handful of things that distinguish this Disruptor from the traditional producer-consumer pattern we all know and use in LabVIEW. Obviously there are huge differences in implementation, but I'm referring to the very high-level abstraction.

    1. Replace a sequential buffer with a random access buffer.
    2. Make the buffer shared among multiple consumers.
    3. Decouple the buffer from the synchronization transport.

    We definitely have the tools to do this in LabVIEW. Throw your data model behind some reference-based construct (DVR, SEQ, DB, file system...), then start to think about your favorite messaging architecture not as a means of pushing data to consumers, but as a way of signalling to a consumer how to retrieve data from the model. That is not "Here's some new data, do what you will with it", but rather "Data is ready, here's how to get it". Seems like a pretty typical model-view relationship to me, I've definitely done exactly this many times in LabVIEW.

     

    Of course I may have completely missed the point of their framework. I probably spent more time over the last day thinking about it than I have actually reading the referenced documents...

     

     Well, the title was really to place the discussion in the context of the queue producer/consumer pattern vs a ring-buffer producer/consumer. Whilst queues are generally just a buffer behind the scenes (some can be linked lists), there is a domain separation here.

     

    Queues are a "Many to One". Their real benefit is having many produces and a single consumer. In the one producer, one consumer, this is ok, but the example isn't really a one-to-one although we would shoehorns it into one in LabVIEW such that we have one consumer then branch the wire. Additionally. Looking at the classic dequeue with case statement which many messaging architectures are based on including mine. This is mitigating concurrency by enforcing serial execution.

     

     The Disruptor or ring buffer approach is a "One to Many". So it has more in common with Events than queues. Events, however, have a lot of signalling and, in LabVIEW, are useless for encapsulation.

     

     I've only scratched the surface of the Disruptor pattern. But it doesn't seem to be a "Data Is Ready" approach, since its premise is to try to remove signalling to enhance performance. The "Writer" free-wheels at the speed of the incoming data, or until it reaches a piece of data that has not been "consumed". By consumed, I do not mean removed, simply that it has been read and is therefore no longer relevant.

     

    A "Reader" requests a piece of data that is next in the sequence and waits until it receives it. Once received, it then processes it and requests the next. So. If it is up to the latest, it will idle or yield until new data is incoming.

     

     The result seems to be self-regulating throughput with back-pressure on the writer, and all readers running flat out as long as there is data to process somewhere in the buffer. It also seems to be inherently cooperative towards resource allocation, since the fast ones will yield (when they catch up to the writer), allowing more resources for the slower ones.
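
     To make that behaviour concrete, here's a minimal C sketch of the idea (it is not the LMAX implementation, just the free-wheeling writer and per-reader cursors described above; the busy-wait loops, sizes and names are placeholders, and real code would need proper memory barriers rather than volatile):

     /* One writer, two readers sharing a ring of SLOTS elements.
        Each reader has its own cursor; nothing is ever removed from the buffer. */
     #include <stdint.h>

     #define SLOTS 1024u                    /* power of two */
     #define MASK  (SLOTS - 1u)

     static double slots[SLOTS];
     static volatile int64_t write_seq = -1;            /* last slot published        */
     static volatile int64_t read_seq[2] = { -1, -1 };  /* last slot read, per reader */

     static int64_t slowest_reader(void)
     {
         return read_seq[0] < read_seq[1] ? read_seq[0] : read_seq[1];
     }

     /* Writer free-wheels until it would overwrite data the slowest reader
        has not yet consumed - that is the back-pressure. */
     void publish(double value)
     {
         int64_t next = write_seq + 1;
         while (next - slowest_reader() > (int64_t)SLOTS)
             ;                              /* spin/yield: buffer still holds unread data */
         slots[next & MASK] = value;
         write_seq = next;
     }

     /* Each reader asks for the next sequence number and waits (idles/yields)
        if it has caught up with the writer. */
     double consume(int reader)
     {
         int64_t next = read_seq[reader] + 1;
         while (next > write_seq)
             ;                              /* nothing new yet */
         double value = slots[next & MASK];
         read_seq[reader] = next;
         return value;
     }

     The fast reader simply races ahead through the buffer; only the slowest one is ever a consideration for the writer.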

     

    Here's a pretty good comparison of the different methods.

     

    There's also a nice latency chart of the Java performance

     

     And finally, some hand-drawn pictures of the basics.

  4. Another thought: some problems the Disruptor pattern solves in Java we don't have in LV, e.g. garbage collection issues. It could, however, be worth trying to implement it in LV because we would be able to build a producer/consumer architecture that doesn't suffer from stalling when producers and consumers run at different speeds.

     

     It doesn't solve the garbage collection issues in Java. They "alleviate" those by using custom objects and rebooting every 24 hrs :D.

     

    The advantage of this technique is that M processes can work on the same data buffer in parallel without waiting for all processes to finish before moving to the next.

     

    As an example.

     Let's say we have a stream of doubles being written to a queue/buffer of length N. We wish to do a mean and a linear fit (Pt by Pt). We will assume that the linear fit is ~2x slower and that the queue/buffer is full and is therefore blocking writes.

     

    With a queue we remove one element, then proceed to do our aforesaid operations (which in LabVIEW we can do in parallel anyway). The queue writer can now add an element. The mean finishes first and then the reader has to wait for the linear fit to finish before it can de-queue the next element. Once the linear fit finishes, we then de-queue the next and start the process again evaluating the mean and linear fit.

     

    From what I gather with this technique the following would happen.

     

     We read the first element and pass it to the mean and the linear fit. The mean finishes and then moves on to the next data point (it doesn't wait for the linear fit). Once the linear fit has finished, the next value in the buffer can be inserted and it too moves on to the next value. At this point the mean is working on element 3 (it is twice as fast). The result is that the mean travels through the buffer ahead of the linear fit (since it is faster) and is not a consideration for reading the next element from the buffer. Additionally, the theory goes, once the faster process has reached the end of the data, there are more processing cycles available to the linear fit, so that *should* decrease its processing time.
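
     In code terms (re-using the publish()/consume() sketch from the previous post, and with update_mean()/update_linear_fit() as hypothetical stand-ins for the Pt-by-Pt analysis), the two consumers would just be two loops walking the same buffer with their own cursors:

     /* Hypothetical processing routines - stand-ins for the Pt-by-Pt mean and linear fit. */
     extern void   update_mean(double x);
     extern void   update_linear_fit(double x);
     extern double consume(int reader);    /* from the earlier ring-buffer sketch */

     void mean_task(void)                  /* fast consumer, reader 0 */
     {
         for (;;)
             update_mean(consume(0));          /* never waits for the linear fit */
     }

     void linear_fit_task(void)            /* slow (~2x) consumer, reader 1 */
     {
         for (;;)
             update_linear_fit(consume(1));    /* only this one ever holds the writer back */
     }

     With a single queue, the dequeue of element N+1 cannot happen until both operations have finished with element N; here the mean just runs ahead.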

     

     Now, they cite that by reading in this fashion they can parallelise the processing. We already have that capability, so I don't think we gain much of a benefit there. But leveraging the processing of a function that would otherwise spend most of its time doing nothing, with data unavailable until the slower process finishes, seems like it is worth experimenting with.

  5. I happened to stumble upon what was, to me, an interesting presentation about high-speed transaction processing (the LMAX presentation).

     

     Their premise was that queues, which are the standard approach, are not the most appropriate for high throughput due to the pipeline nature of queues. To achieve their requirements, they approached it using ring buffers, which enable them to process data in the ring buffer in parallel, thus alleviating, but not eliminating, pipelining (if the readers are faster than the writer, they still have to wait for data).

     

    The "classic" producer consumer in LabVIEW heavily relies on queues and, one of the problems we encounter is when the reader is slower than the writer (we are concerning ourselves with a single write only). Because we can only process the data at the head of the queue, we have a similar throughput problem in that we cannot use LabVIEWs parallelism to alleviate the bottleneck. So I was thinking that the alternative design pattern.that they term the Disruptor might be worth discussing even though we are contained by how LabVIEW manages things in the background (it will probably pan out that LabVIEWs queues will out-perform anything we can write in LabVIEW-parallel or not).

     

    Thoughts? (apart from why can't I upload images  :angry: )

  6. That's what the

     

    Hey guys,

    Bumping this a bit because I'm in a similar scenario. I've developed a distributed app for controlling an instrument. The UI runs on a Windows machine and connects to an LVRT machine via TCP. Up till now, I've driven it manually, pressing all the buttons to test all the functionality. Now I'm trying to actually use it to do experiments and have hit on the idea of writing simple .csv files in plain text which are parsed to generate a list of the commands and as with Paul, I'm struggling a little bit on exactly how to handle the sequencing logic. Specifically, how do I handle the distinction between messages sent as part of a sequence, and the same messages when they're sent by the user just pressing a button?

    I've injected the sequencer at the level of the main message routing loop running in Windows (if Daklu is watching, I've also refactored to use a stepwise routing schema so the Windows routing loop has access to all messages). The sequencer generates an array of commands which is stepped through one by one (in theory).

    So far, I've tried setting a flag "Sequencing?", but that's not really sufficient to deal with the cases where a message may arrive as part of a sequence, but then also arrive when the user pushes the button, resulting in a superfluous call to the sequencer to grab another job. Perhaps I should disable UI interactions which generate the events I'm passing as part of a sequence (i.e. "enable motors")?

    I know the distinction between "Command" and "Request" messaging has been discussed a lot recently, and as I understand it, there doesn't seem to be much in the way of a hard distinction for telling whether a message is a "Command" or a "Request". What I'm wondering is, should I perhaps bump the "Sequencing?" flag, or something similar, down to the level of the individual process? So that the individual processes only send "CommandAcknowledged" when they're sequencing?

    Intuitively, I'm rejecting that idea because what if I want to do things with "CommandAcknowledged" type messages while the instrument is being driven manually?

    What techniques do you guys use to distinguish when a message is sent as part of a command sequence or when it's sent manually? Alternatively, how do you avoid the issue entirely?

     

    Just inspect the SENDER part of the message ;)

  7. Yeah, I'm afraid it might be a Windows and not a LabVIEW problem as well. Afraid because then there's probably nothing I can do about it. The default path in my case likely isn't a network drive, but it could be a USB location. I say "probably" because for my various file dialogs I store the last used path in a feedback node so each dialog starts in the last used location, so I can't say where that dialog was trying to point to initially.

     

    When this happens the first thing I do is rush to another application and see if I can reproduce the issue, but I'm never able to.

     

     Well, it's fairly easy to eliminate if that is a suspected problem. Just put in a check that the path exists and log the path before the dialogue is invoked. Then, if you see the dialogue again, you will be able to see what the path was and whether the app thinks it exists. You can then paste it into a browse window and see if Explorer complains.

  8. What is the default path or current path? Is it pointing to a networked drive? Are all the mapped drives actually available?

     I've seen in Windows (not specifically LabVIEW) that file dialogues can be "flaky" when network locations are invoked on drives that are unavailable, or on NAS locations that are asleep. Basically, any network access failure will bring Windows to its knees and is usually triggered by a file action (browse or open/save). When a file dialogue is invoked, it will try to enumerate drives, and you only need one to not respond.

     

     If you look carefully, you will see that all the indicators are there, they are just not populated (they appear as "discolourations") - a classic symptom of struggling to enumerate the drive list. You may find something in the Windows event log.

  9. I think it's illegal to export a LabVIEW built exe to Iran or North Korea...

     

    That'll teach them.

     

    Ton

     

     Not from Europe (not sure about NK; I cannot find any relevant EU sanctions). Sanctions on Iran basically cover finance, transport and energy, although if you had a "nuclear processing plant.vi" they might group that under "energy". LabVIEW is, apparently, also available in Iran, so I wouldn't be too quick to jump to conclusions about the OP's country of origin. Maybe it just cannot be bought online from wherever he resides.

     

     Congratulations! Lavag.org made it onto the spooks' radar. Three more keywords and they break your door down :D

  10. For the record, there's a truly fascinating design that I've seen implemented.

     

     1. Make the Message Handler class a *by value* object. In other words, it has methods for "Handle Message X" and "Handle Message Y", and all of its data is within itself.

    2. Put that class in a shift register of a message handling loop. Dequeue a message, figure out the type of the message, call the right method (how you figure out the message type -- strings, enums, dynamic dispatching, variants, finger in the wind -- doesn't matter).

    3. Some messages tell the message handler to "change state". Let's say message X is such a message. In the "Handle Message X" method, instead of changing some internal variable, output an entirely new Message Handler object of a completely different Message Handler child class. That now goes around on the shift register. All future messages will dispatch to the new set of methods. Any relevant data from the old object must be copied/swapped into the new object.

     

    This eliminates the case structure in the message handling of "if I am this state, then do this action" entirely.

     

    Because we made the output terminal NOT be a dynamic dispatch output, you can implement this in the Actor Framework and switch out which Actor is in your loop while the actor is running.

     

    This is known in CS circles as the State Pattern.
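
     For anyone who hasn't met it, here's a minimal (non-LabVIEW) C sketch of the State Pattern idea described above: the "object" on the shift register is just a struct of handlers, and handling a message can return a replacement object so that later messages dispatch to different code. The states and messages here are made up purely for illustration.

     #include <stdio.h>

     typedef struct State State;
     struct State {
         const char *name;
         State *(*handle_x)(State *self);   /* a handler may return a different state */
         State *(*handle_y)(State *self);
     };

     static State idle_state, running_state;

     static State *idle_handle_x(State *self) { (void)self; puts("idle: X -> switch to running"); return &running_state; }
     static State *idle_handle_y(State *self) { puts("idle: Y ignored"); return self; }
     static State *run_handle_x(State *self)  { (void)self; puts("running: X -> switch to idle"); return &idle_state; }
     static State *run_handle_y(State *self)  { puts("running: Y handled"); return self; }

     static State idle_state    = { "idle",    idle_handle_x, idle_handle_y };
     static State running_state = { "running", run_handle_x,  run_handle_y  };

     int main(void)
     {
         State *current = &idle_state;           /* the "shift register" */
         current = current->handle_x(current);   /* message X: state changes, no case structure */
         current = current->handle_y(current);   /* message Y: dispatched on the current state  */
         current = current->handle_x(current);
         printf("final state: %s\n", current->name);
         return 0;
     }

     The dispatch replaces the "if I am in this state, do this action" case structure; copying/swapping the relevant data into the new state is the part glossed over here.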

     

     Elimination of the case structure isn't that important. The encapsulation of the state-machine is the important bit, which is what you have achieved, but it is only the equivalent of "Create SubVI" on the case. From your description, you are still "driving" the machine with external, module-specific messages, which requires the application state-machine (execution engine or sequencer) to know which module-specific messages to send and in which order. That's a dependency that I don't like.

     

     So: keep the class and keep the case structure, and hard-code the "Message X" and "Message Y" in a couple of frames (probably multiple hard-coded messages in one frame), and we are back to the API. The execution engine only has to worry about application stuff and not what language the state-machine speaks (I just feel that if the messages are hard-coded, then there isn't any point to them).

     

     At that point, you can rationalise the interfaces to the modules (same messages) and swap entire modules in and out with the same execution engine (same as switching out the actor, I suppose), OR swap the execution engine for another. The end result of breaking message interdependence is that you get swappable modules, swappable execution engines and swappable user interfaces, and whole sections of a project become reusable (which is why I'd love to see the project manager cope with nested projects).

  11. Ahh... so if a message is handled the same way across multiple states you combine several states into a single frame.

     

     No. And I think this is perhaps where we view things fundamentally differently. In my world, messages have nothing to do with states. Well, that's not strictly true, since there may be one or two out of 100 that you could identify as having a particular state. Perhaps it's better to say there is no inherent link between them. Messages are just commands, and the handler is just the API interface to the state-machine - the controller. There is no different messaging strategy "just" for state-machines where messages map to discrete states. States are handled by a separate process (the state-machine). In this world, the rest of the software doesn't care about all the internal states of the module, let alone be responsible for driving them. Many of the messages I send to a state-machine are things like INIT, START, STOP, PAUSE, CONTINUE etc.

     But if you change your terminology and say...

     "Ahh... so if a message is handled the same way across multiple messages, you combine several messages into a single frame."

     Then yes. That is what I was stating in comparison to events.

  12. Funny.  I could use every one of those arguments as reasons why I prefer message handlers inside states.  (Perhaps with the exception of #2, 'cause I'm not sure what you mean.)
     Really? Hmmm. So, as an example, you would use my #5 argument to suggest that requiring the machine to be in a certain state before a message can be dequeued at all means that messaging and state have been separated and are not dependent?

    #2 is just adding multiple strings to a frame so it behaves similarly to adding multiple control events to a single event structure frame (same code, different messages).

  13. Actually I was thinking about using vi names for target ID in combination with onion routing.  There would be too many long dependency chains for me to have confidence I'm making all the corrections that are necessary.

     

     You only really need to onion route when you traverse a boundary (like forwarding across TCPIP), so message depth is usually limited to 2 (arguably it could be considered chaining at that point). Everything else you send directly. If you were to onion route (i.e. concatenate) to twenty levels, then I can see that might be an issue. I just don't see the point if I can send directly. What's your use case?

  14. Well, in the context of my comments on the other thread: the message handler would always be the first example, and all the state info and logic would be pushed down in the hierarchy into subVIs/classes (in your examples, just "Create SubVI from Selection" and add the while-loop memory for the state to it).

     

     A few of the arguments for #1 that I would put forward (in no particular order):

     1. It is cleaner and easier to read (subjective, I know, but the frame represents messages only, rather than a message and a state).

     2. You can group messages in the frames (like you can with events) independently of state.

     3. You can guarantee that a "dequeue" happens for every message, regardless of state.

     4. Less code replication.

     5. Separation of responsibilities (bugs in the state-machine are separate from bugs in the message handler).

  15. Thanks for the reply. Actually, I need to find out what real value I should pass via the parameter LVRefNum *VISAIN. Let's say I need to tell the DLL to use COM port 1: what should be in my calling variable that the DLL understands: 1 or COM1 or "COM1" or "1" or anything else? I didn't create this DLL and I don't program LabVIEW. I only get a header file from the LabVIEW project with calling parameters, knowing that this is a variable for a COM port.

     

    Thanks.

     

    You need to call an initialising function first and it will return the reference that you can then pass to your other functions.

     

    Using the VISA example again, there is a viOpen function

     

    C Syntax

    ViStatus viOpen(ViSession sesn, ViRsrc rsrcName, ViAccessMode accessMode, ViUInt32 openTimeout, ViPSession vi)

    Visual Basic Syntax

    viOpen&(ByVal sesn&, ByVal rsrcName$, ByVal accessMode&,ByVal openTimeout&, vi&)

     

     

     It takes as an argument a string which is the resource name - rsrcName$ - (e.g. "COM1") and returns the session value, which is the vi&.

     

    It is the vi& that is passed to the other functions.

     

     Your DLL should have a similar function that takes a resource name string and passes back an LVRefNum, which you can then pass into your other function calls.
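
     To illustrate with VISA itself, here's a hedged C sketch of the standard open-then-use-the-session pattern. Your DLL's initialising function will have its own name and prototype, so treat this purely as an analogy; "ASRL1::INSTR" is VISA's resource name for COM1.

     /* Open a VISA session once, then hand the returned reference to every
        subsequent call - the same shape your DLL's init function should have. */
     #include "visa.h"

     int open_com1_example(void)
     {
         ViSession rm = VI_NULL, instr = VI_NULL;

         if (viOpenDefaultRM(&rm) < VI_SUCCESS)          /* resource manager session */
             return -1;

         if (viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
             viClose(rm);
             return -1;                                  /* could not open COM1 */
         }

         /* ... instr is the "vi&" equivalent: pass it to the other functions ... */

         viClose(instr);
         viClose(rm);
         return 0;
     }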

  16. @ShaunR Just to clarify, when you talk about having a stateful function, what mechanism are you proposing to maintain this state? Is the state encapsulated via something like an FGV? 

     

     Could be. It could also be a class or an action engine - it depends on your level of abstraction. The goal is to have one VI per frame that is not dependent on the message order and is atomic. For a trivial example, you could have a JKI-style state-machine set of messages for opening, reading and closing a file and have to pass the reference from frame to frame. I'm suggesting this is a bad idea and that you should have one READ message which invokes a READ VI that opens, reads and closes the file.

  17. Yeah, stateless functions do make using it much easier. I noticed how your TCP VI opens a connection every time a message needs to be sent instead of having an open and closed state. What's the overhead associated with that? How frequently would you have to send messages before you would consider having a stateful TCP VI?

     

    (Of course, the tradeoff of having stateless functions is that you have to pass *everything* as arguments, which in this case tends to pull the code towards onioning and structural dependencies.)

     

     Don't get me wrong: you can have stateful FUNCTIONS. I'm saying that the HANDLER should not be stateful. Sending a message should execute a function every time, rather than conditionally executing if a message was received earlier... there be monsters ;) Nothing stops you having a class in a frame that maintains its state, but it should be encapsulated within the scope of the function, not the handler, or, even better, just don't send a message at all (it depends what state we are talking about - file open, read, close... bury it in the function. "Move slide if position is greater than 3 mm" is a sequencer's job, and it shouldn't send "move").

     

     The TCPIP code I placed was just a quick-and-dirty way of sending commands. From experience it is quite adequate for the local loop at medium rates (10s of ms), or for an internal gigabit network at hundreds of ms, but forget it for the internet or high-speed comms. It is also useful if you are talking to many destinations with short packets infrequently (e.g. 50 cRIOs with "RESTART"). It is up to the developer to define the functions and features; the handler just provides a messaging interface.

     

     I think I said earlier that in the real world I use a comms handler and can send via a number of methods. That module actually maintains connections (whatever the interface), but I have made it so that they die if not used for a couple of minutes; if they are used often they persist, but I don't have to explicitly close them. You can do whatever you like in the handlers, just KISS.

     

    As long as the strings are relatively simple like that I could probably handle it.  There were a few things that left me scratching my head until I traced through the code several times:

     1. UI.vi claims the message format is TARGET->CMD->PAYLOAD, but the Msg Send function actually converts it to SENDER->CMD->PAYLOAD. Kind of a loose interpretation of the word "target," huh? ;)

     2. For the longest time I assumed the Msg Send name terminal was for the target's name instead of the sender's name. (Using the VI name to identify named queues for the message targets would never work for me; I rename stuff all the time during active development and that would wreak havoc on my code.)

     3. Mostly due to number 2, I couldn't figure out how you were getting messages from the TCP listening loop to the TCP display loop.

     1. Actually, the generic semantics are TARGET->SENDER->CMD->PAYLOAD. So no, it's not loose. Target and sender are purely for routing. It could have been destination IP address and source IP address, but for this implementation I chose VI names and relieved the burden on the user of typing in the sender on every message. Additionally, you don't have to use queues. It's just convenient that queues can be isolated by names, so the target is consumed for that purpose. Queues also break the need for wires running all over the place, or shared storage (which is required for events). You could have used TCPIP primitives in the Send instead of queues and it would have worked much the same way directly over a network (which is pretty much how Dispatcher in the CR works). In this respect it is a messaging strategy realised with queues and handlers. But the messaging itself would be identical with TCPIP listeners and opens. This is what makes it easy to traverse boundaries both in hardware and in software languages, since it could be a web server written in Python at the other end (not easy to do with LabVIEW-coded messages). There's a rough sketch of such a flat message string at the end of this post.

     

     2. Well, you can't beat discipline into programmers. Only the army can do that :) But you have perhaps missed the "reuse" aspect. Once you have a TCPIP handler, for example, then why do you need to rename it? It's like renaming Lapdog. Besides, renaming is just a search-and-replace string on all VIs. It's not as if you have to go and rename a shedload of VIs and re-link all over the place, is it?

     

    3. I don't understand the question. There is no TCPIP display loop. There is a TCPIP handler which happens to be showing its panel. And there is an IMG handler. The listener just forwards the message using Send.
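
     As a purely hypothetical sketch of why a flat delimited string crosses queue/TCPIP/language boundaries so easily (the "->" delimiter and the field values here are my own illustration, not the actual wire format used by the code above):

     /* Compose and parse a flat TARGET->SENDER->CMD->PAYLOAD message string. */
     #include <stdio.h>
     #include <string.h>

     int main(void)
     {
         char msg[256];
         /* compose: any language that can concatenate strings can build this */
         snprintf(msg, sizeof msg, "%s->%s->%s->%s", "IMG", "UI", "DRAW", "picture.png");

         /* parse: split on the delimiter, field by field */
         char *fields[4] = { 0 };
         char *cursor = msg;
         for (int i = 0; i < 4 && cursor; i++) {
             fields[i] = cursor;
             char *next = strstr(cursor, "->");
             if (next) { *next = '\0'; cursor = next + 2; } else cursor = NULL;
         }
         printf("target=%s sender=%s cmd=%s payload=%s\n",
                fields[0], fields[1], fields[2], fields[3]);
         return 0;
     }

     Any language that can split a string can route or consume it - which is the point about the Python web server at the other end.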

  18. I'm no expert in VB but looking at the VISA programming manual, references are usually passed as

     

     ByVal Value& (notice the ampersand)

     

    e.g. 

     

    C Syntax:
    ViStatus viGpibSendIFC(ViSession vi)


    Visual Basic Syntax:
    viGpibSendIFC&(ByVal vi&)

     

    The C pointer (asterisk) makes me tentative, however, since it could be a ByRef. But it is unusual.

     Hopefully Rolf will be along to answer definitively (he's the guru on this kind of stuff; I tend to work the other way around - LabVIEW->C).

  19. I was poking through some old code on Friday that had a case structure of 100+ cases and it became really annoying to be working in a case, flip to another one to check what it was doing, then going back to the original case.  I would love to have a return to previous case function (How about ctrl+shift mousewheel to flip between the two most recent cases?)

     

     I looked in the VI Scripting nodes, but it looks like CaseSel only holds the current visible case and doesn't track previous ones. I gave about 90 seconds of brainpower to thinking about writing some sort of tracker that runs in parallel, but I'm not at all experienced with VI Scripting and it made my brain hurt to try to track each and every case in every open VI. If anyone knows a way to do this easily, let me know. If not, maybe I'll petition the LabVIEW gods to add a previousVisibleCase property to CaseSel objects. With something like that, I think it would be pretty simple to write an RCF plugin or a ctrl+space shortcut.

     

    Mike

     Not exactly what you are asking for, but it may be suitable even if not perfect (Case Select - it uses the JKI plugin framework). Alternatively, it would be a very good starting point for your own framework plugin.
