
Unwanted Loop Delay



This may have come up before, but I couldn't find it with the search function.

When a loop executes, is it normal for it to have a delay every once in a while? The example I've attached should show what I'm talking about.

In an extremely simple for loop (only checking the time each iteration to measure iteration length), an iteration should take, at most, a microsecond or two. What I'm running into, however, is that every 10,000 iterations or so (more frequently if I throw a delay in the loop), one iteration will take much, much longer than it should (e.g. 15 ms). Is this normal, and is there anything I can do about it?
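(In case you don't want to open the attachment, the VI does the text-language equivalent of something like the sketch below. Python here is just to illustrate the idea; the iteration count and the 1 ms threshold are arbitrary.)

    import time

    N = 1_000_000        # arbitrary number of iterations
    THRESHOLD = 0.001    # flag any iteration longer than 1 ms

    prev = time.perf_counter()
    for i in range(N):
        now = time.perf_counter()
        dt = now - prev              # how long this iteration took
        if dt > THRESHOLD:
            print(f"iteration {i}: {dt * 1000:.3f} ms")
        prev = now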

To stave off people who are going to say "who cares," let me explain why it is a problem in my main application. If anyone can point out a flaw (probably pretty easy actually :rolleyes: ) in my architecture, then please, do it so that I can fix things. Basically, my program works by polling a variety of conditions (some conditions require hardware checks, others are purely software) and taking an action if the condition has been met. The actions that have to be taken need very precise temporal resolution, and when a 15ms delay pops up, it can throw off the timing of the action.

Additionally, I have tried putting in both 0 ms and >0 ms delays, and neither remedies the problem. If the lag only occurred every 10,000 iterations when using a 1 ms delay, I might be OK, but since the frequency goes up as I introduce a delay, it's a no-go.

Thanks for the help in advance!


The 15 or 16 ms thing is a Windows thing, if I recall correctly.

Anything faster than that can't be measured that way.

A timed loop should get you where you want.

But a timing schedule of microseconds (and measuring it correctly)... Maybe a timed loop clocked against a real clock.

Ton


Yes, while your CPU can perform operations very fast, Windows is a standard OS and as such does not guarantee timing. It can take the CPU from you and cause LV to wait every once in a while for its own purposes. See here and the posts it links to for some more details.

Some options:

  • Use a real time OS with LabVIEW RT (see crelf's blog).
  • Increase the priority of your process and thread both in Windows and in LV.
  • If you have a DAQ card with a hardware timer, you can use its triggering to get more accurate timing.


QUOTE(yen @ Jul 27 2007, 07:54 AM)

I will try to increase my thread priority and see if that helps. Having never messed with priorities in LabVIEW, how does one go about doing this?

Additionally, can you point me in the right direction to see some examples of using triggering to get better timing from a DAQ card?

Also, I've tried playing around with timed loops, but have run into some problems. The first issue is that I cannot wire anything to any of the inputs. When I try to wire something to them I get the error "Input Node: This node is not executable". When I look at the examples with timed loops, they have inputs wired and work correctly. If I simply disconnect and then reconnect one of those inputs, however, I get the same error. The second issue is that even when I use a timed loop (without relying on a hardware timer), I get the same sorts of delays. Ideally, with a one-millisecond period, each iteration would take 1 ms, but I still get the 15 and 16 ms delays, plus combinations of them (31, 46, etc.).

Sorry for the ignorance that I'm sure is coming through in my posts, but thank you all for your help.

QUOTE(tcplomp @ Jul 27 2007, 12:25 AM)

Please see my reply to Yen below. I'm having some problems with timed loops.

QUOTE(Karissap @ Jul 26 2007, 10:17 PM)

I got the same value of 0.015625 seconds for every loop execution when I ran your test. There are a whole bunch of things which can improve VI timing. Something like a Timed Loop sounds like it would be better for your application. Try searching for timed-loop in the Example Finder.

I've played around with Timed Loops, but when I do, they don't work correctly. First, they still have the same delay as when I use a normal loop. Second, I can't wire anything to the inputs without getting the error "Input Node: This node is not executable". I can load examples with wires going to the node's inputs and they work correctly, but if I disconnect one of those wires and then reconnect it, I get an error.


I haven't really played with the thread priorities too much either, but you can set a thread and priority for each VI in File>>VI Settings>>Execution.

I'm not sure this would help, because I think what you're seeing comes from Windows itself taking the CPU from LabVIEW. Maybe increasing the LabVIEW process priority in Windows will help, but even then you're not guaranteed to get your timing.
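Besides Task Manager, the Windows side can also be done programmatically through the Win32 API. A rough sketch (Python/ctypes here just to show the call; it raises the calling process's own priority, so for LabVIEW you'd apply the same call to the LabVIEW process instead):

    import ctypes

    # Priority-class constants from the Win32 headers (winbase.h)
    HIGH_PRIORITY_CLASS = 0x00000080
    REALTIME_PRIORITY_CLASS = 0x00000100  # use with care: can starve the rest of the system

    kernel32 = ctypes.windll.kernel32
    process = kernel32.GetCurrentProcess()   # pseudo-handle for the current process
    if not kernel32.SetPriorityClass(process, HIGH_PRIORITY_CLASS):
        raise ctypes.WinError()

Even at a higher priority class Windows still makes no promises, so this only makes the long preemptions rarer; it doesn't eliminate them.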

As for the DAQ timing thing, I suggest you search Kevin Price's posts, because I think he was the one who referred to it. Basically, the concept was that you set a task to perform at a known rate (let's say every 50 microseconds) and then you get a trigger in LV every 50 microseconds.
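Just to sketch the concept (in the Python NI-DAQmx wrapper rather than LabVIEW, with "Dev1/ai0" and the rates as placeholders): the board's sample clock paces the acquisition, and each read only completes when the hardware has delivered the samples, so the loop timing comes from the hardware rather than from Windows.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    SAMPLE_RATE = 20000.0     # 20 kS/s, i.e. one sample every 50 microseconds
    SAMPLES_PER_READ = 1000   # each read returns 50 ms worth of hardware-timed data

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # placeholder device/channel
        task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                        sample_mode=AcquisitionType.CONTINUOUS)
        task.start()
        while True:
            # Blocks until the hardware has produced SAMPLES_PER_READ samples,
            # so the loop is paced by the board's clock, not by the OS scheduler.
            data = task.read(number_of_samples_per_channel=SAMPLES_PER_READ)
            # ...check conditions / act on the hardware-timed data here...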


Just a note:

I surprised myself when I discovered that I could run multiple timed loops (hardware timed) at 1000 Hz without missing a beat under Windows.

I did not have to resort to setting VI priorities, but I did have to tweak Windows by making sure background processes got priority, shutting down the virus stuff, no network, no file indexing, etc.

Ben


QUOTE(yen @ Jul 27 2007, 08:54 AM)

Yes, while your CPU can perform operations very fast, Windows is a standard OS and as such does not guarantee timing. It can take the CPU from you and cause LV to wait every once in a while for its own purposes.

While Windows cannot guarantee timing, I was wondering if this is the same on Linux systems?


QUOTE(ooth @ Jul 29 2007, 10:12 PM)

While Windows cannot guarantee timing, I was wondering if this is the same on Linux systems?

Yup. Linux might have different scheduling behavior than Windows, but it's not a real-time OS. At any time it can take the CPU away from LabVIEW and do whatever it deems necessary.

[There is development on a Linux RT-type system, but I don't know much about that. Your standard Linux desktop, though, isn't RT.]

Operating systems like Windows treat priority settings for user code as requests, not commands. You request a priority, and, all things being equal, Windows will do its best to abide by that request. But it makes no promises.

I heard somewhere that one of the highest priority processes in the system at all times is the code that keeps track of the mouse and updates the cursor. Microsoft was sick of people calling tech support saying their computer was frozen with Windows 95 and before, when in fact there were just higher priority tasks executing than dealing with the mouse. Now you can move your mouse around all you like when your computer is hanging and feel all warm and fuzzy inside. Really, there's no difference :)


QUOTE(ragglefrock @ Jul 30 2007, 02:36 PM)

I heard somewhere that one of the highest priority processes in the system at all times is the code that keeps track of the mouse and updates the cursor...

Yeah - there was a joke going around a few years ago that if only I could get all of my apps to run in the mouse thread, then nothing would ever crash :D


QUOTE(ragglefrock @ Jul 30 2007, 07:36 AM)

Now you can move your mouse around all you like when your computer is hanging and feel all warm and fuzzy inside.

It actually does work for me. Being able to move the mouse gives you the feeling that the computer is still responding and is not completely stuck. Not that it really helps if it can't do anything else. :angry:


Getting back to the original question...

I think the very first reply from Mikael gives the answer. The explanation *why* is that the Timestamp function used in the original posted code only has a time resolution of 15.6 msec. It's simply quantization error. One call occurs at, say, (X).9998 quanta and the next call occurs at (X+1).0003 quanta. The reported time difference isn't because the execution actually took longer, it's because the measurement got quantized.

The msec timer has a resolution of 1 msec which reduces the quantization error considerably.
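A quick way to convince yourself it's quantization: simulate a timestamp source that only advances in 15.625 ms steps and difference it every iteration (plain Python sketch, numbers arbitrary). Almost every iteration reports 0, and then the one iteration that happens to straddle a tick boundary reports a full 15.625 ms, even though nothing actually took that long.

    import time

    TICK = 0.015625   # 15.6 ms resolution, like the Timestamp function under Windows

    def coarse_clock():
        # A clock that only advances in 15.625 ms steps
        return (time.perf_counter() // TICK) * TICK

    prev = coarse_clock()
    for i in range(1_000_000):
        now = coarse_clock()
        dt = now - prev
        prev = now
        if dt > 0:
            # The reported "iteration time" jumps straight to a multiple of 15.625 ms
            print(f"iteration {i}: {dt * 1000:.3f} ms")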

The rest of the story is that Windows will also occasionally give you a *real* delay, often in the tens of msec, because it decided to do something other than service your LabVIEW app. You can't count on avoiding these delays, but the msec timer or a Timed Loop at least gives you some ability to detect and measure them.

-Kevin P.


QUOTE(Kevin P @ Aug 1 2007, 11:48 AM)

I think the very first reply from Mikael gives the answer. The explanation *why* is that the Timestamp function used in the original posted code only has a time resolution of 15.6 msec. It's simply quantization error. One call occurs at, say, (X).9998 quanta and the next call occurs at (X+1).0003 quanta. The reported time difference isn't because the execution actually took longer, it's because the measurement got quantized.

The LabVIEW Profiler usually shows maximum execution times of 15 ms, even for very fast VIs. I assume these things are related.

