
Best Architecture for Time-Based Events?



I have recently been reviewing design patterns. During this, I came up with a use case that requires high-precision (< 1 msec) event triggering.

A sample use case would be to create an alarm application that would show a user on-screen messages or log messages to file at a specific relative time. Instead of using absolute time as a time base, I thought that relative time should be used, as in a stopwatch. Alarm times could be entered by a user programmatically. Furthermore, the user should be able to pause and resume the stopwatch.

Is there any type of architecture beyond a while loop running every 1 msec that would be appropriate? Is there a way to code this without polling?

Just pondering, please share your thoughts!


Hello,

I like using change detection events

[attached screenshot]

You can incorporate this into an event loop. I usually use a producer/consumer architecture with the events being the producer and a queued state machine as the consumer.

You could set up a counter timer channel to send out a pulse every millisecond and wire that into a change detection digital input.
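If it helps to see the shape outside of LabVIEW, here is a loose C/Win32 sketch of the producer/consumer idea (my own names and a trivial state machine, purely for illustration; in LabVIEW the producer would be the event structure and the queue a regular LabVIEW queue):

```c
/* Illustrative producer/consumer sketch -- not LabVIEW code.
   One thread produces timing "events" and pushes commands into a FIFO;
   a consumer thread pops them and acts as a minimal queued state machine. */
#include <windows.h>
#include <stdio.h>

#define QUEUE_LEN 16

typedef enum { CMD_TICK, CMD_STOP } command_t;

static command_t        g_queue[QUEUE_LEN];
static int              g_head, g_tail;
static CRITICAL_SECTION g_lock;
static HANDLE           g_items;   /* semaphore counting queued commands */

static void enqueue(command_t c)
{
    EnterCriticalSection(&g_lock);
    g_queue[g_tail] = c;
    g_tail = (g_tail + 1) % QUEUE_LEN;
    LeaveCriticalSection(&g_lock);
    ReleaseSemaphore(g_items, 1, NULL);
}

static command_t dequeue(void)
{
    command_t c;
    WaitForSingleObject(g_items, INFINITE);   /* block until something is queued */
    EnterCriticalSection(&g_lock);
    c = g_queue[g_head];
    g_head = (g_head + 1) % QUEUE_LEN;
    LeaveCriticalSection(&g_lock);
    return c;
}

/* Producer: stands in for the change-detection / timing event source. */
DWORD WINAPI producer(LPVOID arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++) {
        Sleep(1);                 /* pretend a 1 ms hardware pulse arrived */
        enqueue(CMD_TICK);
    }
    enqueue(CMD_STOP);
    return 0;
}

/* Consumer: a trivial queued state machine. */
DWORD WINAPI consumer(LPVOID arg)
{
    (void)arg;
    for (;;) {
        switch (dequeue()) {
        case CMD_TICK: printf("tick\n"); break;
        case CMD_STOP: return 0;
        }
    }
}

int main(void)
{
    InitializeCriticalSection(&g_lock);
    g_items = CreateSemaphore(NULL, 0, QUEUE_LEN, NULL);

    HANDLE threads[2];
    threads[0] = CreateThread(NULL, 0, consumer, NULL, 0, NULL);
    threads[1] = CreateThread(NULL, 0, producer, NULL, 0, NULL);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);

    CloseHandle(threads[0]);
    CloseHandle(threads[1]);
    CloseHandle(g_items);
    DeleteCriticalSection(&g_lock);
    return 0;
}
```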

Dan


QUOTE (ASTDan)

I like using change detection events

You can incorporate this into an event loop. I usually use a producer/consumer architecture with the events being the producer and a queued state machine as the consumer.

You could set up a counter timer channel to send out a pulse every millisecond and wire that into a change detection digital input.

Thanks for the reply, Dan. That would be a great solution for hardware triggering, but the use case I was considering would not involve DAQ hardware. Any ideas for this?


QUOTE (brianafischer @ Jan 5 2009, 09:23 AM)

I have recently been reviewing design patterns. During this, I came up with a use case that requires high-precision (< 1 msec) event triggering.

A sample use case would be to create an alarm application that would show a user on-screen messages or log messages to file at a specific relative time. Instead of using absolute time as a time base, I thought that relative time should be used, as in a stopwatch. Alarm times could be entered by a user programmatically. Furthermore, the user should be able to pause and resume the stopwatch.

Is there any type of architecture beyond a while loop running every 1 msec that would be appropriate? Is there a way to code this without polling?

Just pondering, please share your thoughts!

You seem to have two questions. One is about <1 ms timing, which is hard but not impossible. How are you going to measure that time? That should dictate at least part of your design. I don't think either of your use cases bears any relation to sub-millisecond timing.

The other question is whether a stopwatch-like app should use relative time or absolute time. I would vote for absolute time because your timing code may have bugs (which you will eventually fix, of course) whereas the OS time is generally correct already.


To get <1 ms resolution you'll probably have to end up polling. Alternatively, if you're running on RT, you could use a Timed Loop with a 1 MHz clock. This loop would constitute your timer, and its period would be set to the relative alarm time in microseconds. Every time it executes, it could signal via an RT FIFO that the event occurred. This avoids polling and gives great timing.

The problem comes when you want to let the user adjust the timing and pause the timer. To do this, you have to get a message to the Timed Loop. That's easy enough with another FIFO, but the problem is that the loop only executes on its period. So if you send a pause command, the loop has to wait for the next event before processing the command. That doesn't seem to fit your requirements. This becomes especially noticeable when the events get far apart. For instance, if your alarm rate is every 5 seconds, then you'd have to wait up to 5 seconds to be able to pause or adjust the alarm.
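To make that trade-off concrete, here is a rough, non-RT C sketch (my own names; a plain Sleep stands in for the Timed Loop's hardware-timed period) of why a pause request can sit unread for up to a full period:

```c
/* Illustrative sketch only: a fixed-period timer loop that posts "alarm"
   events and checks for commands just once per period, so a pause request
   sent early in a period is not processed until the period ends. */
#include <windows.h>
#include <stdio.h>

static volatile LONG g_pause_requested = 0;   /* stand-in for the command FIFO */
static volatile LONG g_alarm_count     = 0;   /* stand-in for the event FIFO   */

DWORD WINAPI timer_loop(LPVOID arg)
{
    DWORD period_ms = *(DWORD *)arg;
    for (;;) {
        Sleep(period_ms);                      /* one full alarm period            */
        InterlockedIncrement(&g_alarm_count);  /* "signal that the event occurred" */

        /* Commands are only inspected here, once per period. */
        if (InterlockedCompareExchange(&g_pause_requested, 0, 1) == 1) {
            printf("pause finally processed -- up to %lu ms late\n", period_ms);
            return 0;
        }
    }
}

int main(void)
{
    DWORD period_ms = 5000;   /* alarm every 5 s, as in the example above */
    HANDLE t = CreateThread(NULL, 0, timer_loop, &period_ms, 0, NULL);

    Sleep(100);                                    /* user hits pause almost immediately... */
    InterlockedExchange(&g_pause_requested, 1);    /* ...but the loop won't see it for ~5 s */
    WaitForSingleObject(t, INFINITE);

    printf("alarms fired before pause: %ld\n", g_alarm_count);
    CloseHandle(t);
    return 0;
}
```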


The problem with absolute time is that at least in Windows the timer has a ~16 ms resolution.

In Windows, you can get sub-ms resolution by calling the performance counter API functions and there should be some example VIs in the forums. You should be able to do something similar in other OSes.
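For reference, a minimal C sketch of those calls (QueryPerformanceFrequency/QueryPerformanceCounter are the actual Win32 functions; in LabVIEW you would reach them through a Call Library Function Node or one of the forum example VIs):

```c
/* Sub-millisecond elapsed-time measurement via the Windows performance counter. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, now;

    QueryPerformanceFrequency(&freq);   /* counter ticks per second */
    QueryPerformanceCounter(&start);

    Sleep(5);                           /* stand-in for the work being timed */

    QueryPerformanceCounter(&now);
    double elapsed_us = (double)(now.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
    printf("elapsed: %.1f us\n", elapsed_us);
    return 0;
}
```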

I didn't understand your desired architecture exactly, but it sounds to me like the most practical solution would be using a timeout mechanism. You can do this with an event structure's timeout event or with any of the synchronization primitives. This makes LabVIEW wait the desired time and allows you to break the wait if needed.
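A rough C analogue of that "wait you can break out of" idea (the event and thread names here are made up for illustration; the pattern is similar in spirit to the event structure's timeout case):

```c
/* Wait up to the alarm interval on an event object; signaling the event
   (e.g. from a UI handler) interrupts the wait early. */
#include <windows.h>
#include <stdio.h>

static HANDLE g_wake;   /* signaled to cancel/adjust the pending wait */

DWORD WINAPI ui_thread(LPVOID arg)
{
    (void)arg;
    Sleep(1500);          /* pretend the user hits "pause" after 1.5 s */
    SetEvent(g_wake);
    return 0;
}

int main(void)
{
    g_wake = CreateEvent(NULL, FALSE, FALSE, NULL);   /* auto-reset, initially unsignaled */
    HANDLE t = CreateThread(NULL, 0, ui_thread, NULL, 0, NULL);

    DWORD alarm_ms = 5000;
    DWORD result = WaitForSingleObject(g_wake, alarm_ms);

    if (result == WAIT_TIMEOUT)
        printf("alarm fired after %lu ms\n", alarm_ms);
    else
        printf("wait interrupted (pause/adjust request)\n");

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    CloseHandle(g_wake);
    return 0;
}
```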


QUOTE

You seem to have two questions. One is about <1 ms timing, which is hard but not impossible. How are you going to measure that time? That should dictate at least part of your design. I don't think either of your use cases bears any relation to sub-millisecond timing.

The use case is a simple example that I created, not a real-world implementation. The main design question is how to obtain accurate sub-msec timing in a sequencer where events may be scheduled at varying times, without an extremely resource-intensive polling loop.

QUOTE

The other question is whether a stopwatch-like app should use relative time or absolute time. I would vote for absolute time because your timing code may have bugs (which you will eventually fix, of course) whereas the OS time is generally correct already.

As far as relative vs. absolute time goes, I was attempting to implement a "pause" condition, which means the alarm times could be delayed while the sequencer is active.


QUOTE (Yair @ Jan 5 2009, 01:17 PM)

The problem with absolute time is that at least in Windows the timer has a ~16 ms resolution.

In Windows, you can get sub-ms resolution by calling the performance counter API functions and there should be some example VIs in the forums. You should be able to do something similar in other OSes.

I didn't understand your desired architecture exactly, but it sounds to me like the most practical solution would be using a timeout mechanism. You can do this with an event structure's timeout event or with any of the synchronization primitives. This makes LabVIEW wait the desired time and allows you to break the wait if needed.

The Timeout architecture would be perfect and easy to implement, but I don't know of any APIs that give microsecond timeout resolution.


QUOTE (brianafischer @ Jan 5 2009, 01:45 PM)

The use case is a simple example that I created, not a real-world implementation. The main design question is how to obtain accurate sub-msec timing in a sequencer where events may be scheduled at varying times, without an extremely resource-intensive polling loop.

The problem is that in the real world, the Windows/Mac/Linux OSes don't guarantee response time (that is, they are not "deterministic"). If you use some LV timeout operation and ask for 10 ms, you'll *probably* get control back in about 10 ms (unless Windows starts re-indexing your hard drive, or someone opens Excel, or...). Even if you could ask for a 10 µs response, the system's ability to return to your task in that time is a crapshoot.

This problem can be solved with LabVIEW Real-Time, and the special hardware needed to run it, but that toolset is kind of spendy. The next step down is to use the LabVIEW Timed Loop and a DAQ card with a counter/timer, but you are still at the mercy of the OS.

The lack of determinism is why it's useful to use the OS system clock for your timing. If your timeout is continually late (it will never be early), then the errors will accumulate. But if you constantly check the time of day or the millisecond tick count, then you can correct for this.
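A sketch of that correction in C (illustrative only; the same idea works with LabVIEW's Tick Count or time-of-day functions): compute each deadline as an absolute offset from the start time, so one late wake-up does not push every later alarm back.

```c
/* Drift-correcting periodic loop: sleep until start + n*period rather than
   sleeping a fixed amount each iteration, so lateness does not accumulate. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    const double period_s = 0.010;      /* nominal 10 ms period */

    for (int n = 1; n <= 100; n++) {
        /* Absolute deadline for tick n, measured from the start time. */
        LONGLONG deadline = start.QuadPart + (LONGLONG)(n * period_s * freq.QuadPart);

        QueryPerformanceCounter(&now);
        double remaining_ms = (double)(deadline - now.QuadPart) * 1000.0 / (double)freq.QuadPart;
        if (remaining_ms > 0)
            Sleep((DWORD)remaining_ms);  /* may return late, but the error is not carried forward */

        /* ... fire the alarm / do the periodic work here ... */
    }
    return 0;
}
```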

On the other hand, polling always gets a bad name, but your polling loop may not be as resource-intensive as you'd fear. Don't forget that the CPU is usually running no-ops most of the time anyway, or else the OS is doing low-level polling operations of its own. If your task is low priority, polling is probably not a big deal, unless you are trying to do power management. Of course, if your task is low priority, you won't get the deterministic response you crave. The other issue is figuring out what to poll to measure time. You might be able to poll the CPU's timestamp counter (which counts clock cycles), but I don't remember how to do that.
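If you do end up polling, a rough C sketch of what that wait could look like (using the performance counter rather than the cycle counter directly; the function name is mine):

```c
/* Busy-wait on the performance counter until a relative target time,
   yielding the rest of the timeslice so a low-priority task doesn't hog
   the CPU. Accuracy still depends on the scheduler letting you back in. */
#include <windows.h>

static void poll_until_us(LARGE_INTEGER start, double target_us, LARGE_INTEGER freq)
{
    LARGE_INTEGER now;
    for (;;) {
        QueryPerformanceCounter(&now);
        double elapsed_us = (double)(now.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
        if (elapsed_us >= target_us)
            return;
        Sleep(0);   /* yield; remove for a tighter (but CPU-hungry) spin */
    }
}

int main(void)
{
    LARGE_INTEGER freq, start;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    poll_until_us(start, 500.0, freq);   /* ~0.5 ms relative "alarm" */
    /* ... trigger the alarm action here ... */
    return 0;
}
```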


The OS timing resolution is certainly a problem for sub-millisecond timing. As far as not polling goes, I have actually done this using shared variable events. I execute a timer method in its own thread (which in turn calls the Elapsed Time express VI), passing it the URLs of the shared variables (timeHasElapsed, timeTarget, stopTimer) for the particular timed process. I register for events on the timeHasElapsed shared variable to trigger responses in the application. I previously talked about this here: Newbie-needs-help-with-timer-control.

