
ShaunR


Posts posted by ShaunR

  1. Windows update ran last night while I was running an experiment, and restarted the computer while LabVIEW was running. If you run overnight experiments with LabVIEW, you may want to consider this. I found some instructions for how to disable automatic restart on XP

    http://www.aviranspl...windows-update/

    Here's the skinny:

    Windows XP Pro users can tell Windows never to restart the computer automatically. To do that, follow these steps.

    • In the Start Menu, go to Run, type “gpedit.msc”, and press Enter
    • A Group Policy Editor window will open. In this window navigate to:
      Computer Configuration -> Administrative Templates -> Windows Components -> Windows Update
    • Double-click "No auto-restart for scheduled Automatic Updates installations"
    • In the settings window, choose Enabled and click OK
    • Close the Group Policy Editor

    Hope this helps someone else! Or if there are other/better suggestions for keeping Windows Update at bay, I'd love to hear them.

    Pat

    You can also pull the network plug out while you're running tests :)

    • Like 1
  2. I agree with everything you said except this. I believe the user event queue exists at the event structure, not the user event refnum or event registration refnum. If there are no registered event structures, there is no queue to fill up.

    Ooooh. I didn't realise this. I knew Windows event queues can be swamped with messages if there is no sink and you can get enough in before the message manager times them out. I assumed (obviously wrongly) that it worked in a similar fashion.

    Since the user event refnums and event registration refnums are strongly typed, you can only put them in an array if they have the same data type. What's the recommended technique for dynamically registering/unregistering for events that have different data types?

    Thinking about this: if you are going to supply an event refnum from the caller (I prefer encapsulated, but what the hell), the XControl can bundle its events onto it. Then when you wire the resultant event cluster through a register to an Event case, you can choose not only the calling VI's events, but the XControl's as well. It would give the XControl tight integration with the caller's events and a fairly close approximation to how other real event-driven languages operate.

  3. If you notice, after the initial flurry, reputations are increasing at a massively slower rate. I put this down to the fact that people (like me) didn't know what that little green number meant and kept clicking on it to see what happened. It took me ages to figure out that it was for the reputation system and not, in fact, to add the post to a multiple reply :P

    I also think it's a bit misleading, since those that help newbies are unlikely to get better reputations, because a newbie generally doesn't realise a) they can vote, or b) what it is for. However, those in the "old boys network" will get tremendous reputations.

  4. I finally got it working. It looks a little 'hacky' and it's most likely not the most efficient way to do it. But it's good enough for me atm.

    exstring.png

    Thanks again Shaun and Ton :)

    Cool. Glad it's working. Looks like you've been through the string palette and now know every function :P.

    (You can replace everything up to the "Array to Cluster" with a "Spreadsheet String To Array")

  5. I have a labmate. He just finished his B.S. and thinks himself a smart person :nono: . He is lazy (I only see him 12 hrs a week). The things I teach him I need to teach him again and again (e.g. that sensors need their grounds connected; how a PCI board connects. I even need to download the same data sheet again and again for him). 10 questions a day. Making a circuit of only 5 transistors takes him a week, plus a few hours of my time to problem-solve for him :throwpc: . OK! Fine (I am just too nice).

    My boss may want to motivate him, so he tells him that his project is very important. The stupid labmate came to talk to me and told me that my project is not the lab direction, and not to work too hard :frusty: . I am just thinking of taking over his small project after I finish up my stuff.

    Nope. If you do that, his project will succeed and he will get the credit. He might even get a promotion since, after all, he may not be technically bright, but he sure can manage YOU! He'll end up as your manager :P Let his project struggle, then when the boss is in despair, offer advice to the boss on how to bring it back on course. Then the boss will probably GIVE you the project. Your labmate will have demonstrated his incompetence, you will have shown your helpfulness, and he will end up working for you :)

  6. Option B has another Pro, in that it's more consistent with standard events, where you create the event explicitly in the caller VI. I don't have any experience with either method, so I can't comment more.

    Interesting. I view it the other way, in that since it is a broadcast mechanism, it doesn't matter where you create it, as it is the responsibility of the receiver to register for it.

    Here is an XControl design question:

    If I want to use user events to generate events in the owning VI where should these events be created? (not generated)

    Option A

    Inside the XControl, with a read-only property through which a VI could get the user event and register for it.

    Pro: you have tight integration between the control and the event. Using the Init and Uninit abilities you can define exactly when to create and destroy the event references

    - you can have multiple listeners

    Con: you can create useless event references which receive many events that are never read.

    Option B

    Inside the owning VI, with a special 'Create Event Reference' VI and a write-only property.

    Pro: only event references that are really used are created

    Con: a special event typedef is needed.

    After writing these things up I lean toward option A.

    I only have used option B, does anyone have some experience with option A?

    Ton

    I would consider A more in line with event implementations, not only in LV but in other languages too (for example, in Delphi, control events are only available once you place that control on the FP).

    In B, what if you create an event that doesn't exist in the XControl? B also requires external initialisers, whereas A is self-contained and the events are available just by placing the control. However, in the case of A, I think you need a backup plan for when you generate events and there are no registered recipients, since you will just fill up the message queue.
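    One way to sidestep the no-recipients concern can be sketched in Python (a hypothetical class with made-up names, not how LabVIEW implements user events): give each registration its own bounded queue, so firing an event with no listeners simply discards it instead of filling a queue nobody drains.

```python
from collections import deque

class UserEvent:
    """Hypothetical broadcast event: each registered listener gets its own
    bounded queue, and firing with zero listeners is a no-op."""

    def __init__(self, maxlen=100):
        self._listeners = []        # one bounded queue per registration
        self._maxlen = maxlen

    def register(self):
        q = deque(maxlen=self._maxlen)  # lossy: oldest events drop first
        self._listeners.append(q)
        return q

    def unregister(self, q):
        self._listeners.remove(q)

    def fire(self, payload):
        # Broadcast: every registered queue receives the payload.
        # With no listeners, the event is simply discarded.
        for q in self._listeners:
            q.append(payload)

ev = UserEvent()
ev.fire("lost")        # no listeners yet: discarded, nothing accumulates
q = ev.register()
ev.fire("seen")        # q now holds this one event
```

    The bounded `maxlen` is the "backup plan": even a registered-but-stalled listener can never grow its queue without limit.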

  7. True, but it's good stuff to know.

    So, just to beat a dead horse:

    If I have the reentrant subVI set to Same as Caller, then each instance will be in the thread of the caller and will therefore not require a context switch when called?

    BTW, just bought the book.

    Only if set to clone! Oh, and if LabVIEW decides to do a context switch because it's run out of threads in that execution system :rolleyes: The downside is all the dataspace allocated for each clone.

    You've bought it? No printer then ...lol.

  8. Thanks,

    Looks like I ought to get a copy of that book.

    I still don't think that addresses the topic of reentrant vi's, at least not that I saw. I think the crux of my question was whether making a VI reentrant somehow overrides the execution system setting.

    We're already told to use reentrant VI's when reusing the same VI in two parallel processes. If the answer to the previous question is "no", then do we need to avoid calling reentrant VI's from different threads? Or am I overthinking?

    Not at all.

    Marking a VI as re-entrant means that a full copy of the "executing" code is instantiated in the calling process. This is true for both types of re-entrant VI ("clone" and "same copy"). Since copies of the code exist in the calling process's address space, they can be run in parallel. The difference between "clone" and "same copy" is the dataspace. A re-entrant VI marked as "clone" has its own dataspace for every instance you lay down in a diagram. If marked as "same copy", then all the instances share a single dataspace. In the case of an LV2 global, if you mark it as "clone", then calling it from one location will not give you the same data as calling it from another, so it will only function as required when marked as "same copy". However, in doing this you may cross execution system (ES) boundaries if the calling VIs are in separate ones (if it is set to "same as caller"), or if you give it its own ES. And crossing an ES WILL cause a context switch.

    By the way, this is all a bit moot if it doesn't resolve your problem :P

  9. So, that leads me to a question: Do reentrant VI's that are not explicitly set to use the caller's thread involve context switches? Put another way, if a reentrant VI is set to use the caller's thread, does it avoid a context switch?

    Gary

    Have a read of this....

    http://books.google....itching&f=false

    In particular 9.2.3 (replace the word "Process" with "Execution System"), and ask me again.

    • Like 1
  10. Thanks Shaun,

    The LV2 is set at subroutine priority and is therefore same as caller. I see how that would lead to context switching, but what does that actually mean? More overhead associated with calls to the LV2, so each caller spends a little bit more time waiting for it to become available?

    I can certainly try replacing the LV2 with a queue - that's a pretty quick edit. I feel like we tried that a while back, but that was before the existence of VI C.

    The LV2 is set up as a lossy buffer, with an indicator telling us when it starts to drop data. I'd have to use a lossy queue, with another single element queue to pass the overflow status.

    It's not so much waiting for it to become available; it's more to do with the CPU having to save state information when switching between the global in one context and another. Have a Google for "context switch"; it's a big subject. But suffice to say, the fewer, the better.

    With what I have described above, you will never lose data. But the downside of that is that if VI B is producing faster than you are consuming, your queues will grow until you run out of memory. If this is a possibility (and undesirable), then all you need to do is "pause" VI B populating one or both of the queues when the queues are full (fixed-length queues) and resume when A or C has caught up. Or (as you rightly say) use a lossy queue. The choice is really whether you require sequential losses or random losses. But the above will enable you to easily change how you manage your processes with minimum effort and run most efficiently.
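    The lossy-buffer-with-overflow-indicator idea can be sketched like this (a hypothetical Python class, not the LV2 global's actual implementation): a fixed-size ring buffer that drops the oldest samples and raises a flag the moment it starts losing data.

```python
from collections import deque

class LossyBuffer:
    """Hypothetical lossy buffer that flags when it starts dropping data,
    mirroring the LV2-global-with-overflow-indicator described above."""

    def __init__(self, size):
        self._q = deque(maxlen=size)
        self.overflowed = False      # the 'started dropping data' indicator

    def put(self, item):
        if len(self._q) == self._q.maxlen:
            self.overflowed = True   # the oldest element is about to be lost
        self._q.append(item)         # deque with maxlen discards the oldest

    def flush(self):
        """Return and clear everything currently buffered."""
        items = list(self._q)
        self._q.clear()
        return items

buf = LossyBuffer(3)
for sample in range(5):   # 5 puts into a 3-slot buffer: the 2 oldest drop
    buf.put(sample)
```

    This gives sequential losses (oldest first); a notifier or single-element queue could carry the `overflowed` flag to the UI.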

    • Like 1
  11. Gurus,

    I'm hoping someone here has some sage advice. Here's the situation:

    Running on a Core2 Duo (i.e. dual core processor)

    • VI A dynamically launches VI's B and C.
    • VI B runs in a continuous loop and manages a DMA from a 3rd-party DSP board. It puts a subset of the data into Named Queue 1 and all of the data into a LV2 global.
    • VI A reads the data from the LV2 global, processes it, and puts the results in Named Queue 2 in Loop1. Loop2 flushes Named Queue 2 and transmits the contents via TCP/IP on physical port 1.
    • VI C runs in a continuous loop, flushes Named Queue 1 and transmits the contents via TCP/IP on physical port 2.

    I am monitoring the intervals between VI C's outputs. Ideally, VI C should be putting out data every 15ms, regardless of what VI A is doing. In order to try to ensure this, I have done the following:
    • All shared VIs are reentrant
    • VIs A, B, and C are all assigned to different Execution Systems
    • VI's B and C are run without opening their front panel.

    I've observed that when I run things with a typical processing load, the time delta between VI C's outputs gets very noisy, with spikes up in the 100s of ms. I can confirm that the issue is not related to the input traffic by disabling the processing stage of VI A. When I do this, I do see VI C's output every 15ms +/- a couple ms. I see the same thing with the processing enabled if I give it a very small processing load (leading to a very low output load).

    To me, that points to two possible causes: 1) VI A's processing is monopolizing the system, preventing VI C's loop from running as often as I would like it to. 2) The fact that VI C and VI A are both using TCP/IP writes, although to different ports, is causing some sort of blocking.

    Slowing down Loop1 in VI A considerably is not an option.

    Any thoughts? Theoretically, What's going on in VI A should not affect timing of dataflow between VI's B and C, but that's what I'm seeing. Does anyone have any tricks they care to share?

    Thanks,

    Gary

    What execution system is the LV2 global assigned to (same as caller???)? There's probably a lot of context switching going on, since you cannot encapsulate the global in a single execution system. You basically have a one-to-many architecture, and I would partition it slightly differently to take advantage of the execution systems.

    • VI A dynamically launches VI's B and C.
    • VI B runs in a continuous loop and manages a DMA from a 3rd-party DSP board. It puts ALL of the data into Named Queue 1 and Named Queue 2.
    • VI C runs in a continuous loop, flushes Named Queue 1, extracts the bits it needs, then transmits the contents via TCP/IP on physical port 2.
    • VI A reads the data from Named Queue 2, extracts the bits it needs, processes it, then flushes Named Queue 2 and transmits the contents via TCP/IP on physical port 1.

    VI B would run in (say) "Data Acquisition" at "High" priority.

    VI C would run in (say) "Other 1" at "Above Normal" priority.

    VI A would run in (say) "Other 2" at "Normal" priority.

    This way you can give your VIs hierarchical priorities to determine their responsiveness under loading. You could also get VI B to extract the bits and only put what is required for A and C on the queues (thereby simplifying A and C and reducing memory requirements at the expense of speed) if it has a light loading (if you want). The way described just makes VI B simple and very fast, and context switching won't be an issue.
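    The partitioning above can be simulated with threads and FIFO queues (illustrative names and toy data; a sketch of the data flow, not LabVIEW code): one fast acquisition loop pushes everything onto two queues, and two independent consumers each extract what they need.

```python
import queue
import threading

q1 = queue.Queue(maxsize=64)   # Named Queue 1 -> VI C
q2 = queue.Queue(maxsize=64)   # Named Queue 2 -> VI A

def vi_b(samples):
    """Acquisition loop (VI B): put ALL data on both queues, nothing else."""
    for s in samples:
        q1.put(s)
        q2.put(s)
    q1.put(None)               # sentinel: acquisition finished
    q2.put(None)

def consumer(q, extract, out):
    """Generic consumer (VI A or VI C): drain its queue and keep
    only the bits it needs."""
    while True:
        item = q.get()
        if item is None:
            break
        out.append(extract(item))

c_out, a_out = [], []
threads = [
    threading.Thread(target=vi_b, args=(range(5),)),
    threading.Thread(target=consumer, args=(q1, lambda s: s, c_out)),
    threading.Thread(target=consumer, args=(q2, lambda s: s * 2, a_out)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    Because each consumer owns its queue, a slow VI A can never stall VI C; the bounded `maxsize` is where you would decide between blocking the producer or going lossy.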

  12. Thanks for all your replies.

    @Ben : good idea with using VI server. I will have a closer look into this.

    @Mark : yes, I am aware that I still could use LVOOP, but I think the whole program would then be over complicated in this case.

    @Shaun: I will have a single device (different variants of this device) connected to a PC on multiple interfaces and protocols

    I think I am going to use my current architecture that I used to develop another application ( see: http://lavag.org/top.../page__p__58451 ).

    In my Bluetooth stuff somewhere on this site, I address multiple transport layers. It'd be worth taking a look to see how it's implemented, but basically it makes the interface (TCPIP, UDP, IrDA, Bluetooth in the example) transparent to the protocol. Might be a way forward for you to simplify your low-level stuff.

  13. What would be the best approach to design & write an application that need to control a device using multiple protocols via multiple interfaces (e.g. SCPI over GPIB and SNMP over ETH) ?

    Before anyone answers OO, I'd like to complicate this question by saying that it needs to work with LV 7.1.1 (aka not OO).

    Jakub

    Are you describing a single device on multiple interfaces and protocols (e.g. a DVM that has GPIB, Ethernet, and RS232 interfaces that you want to test), or is it multiple devices on multiple interfaces, such as motors on RS485, DVMs on GPIB, and digital I/O on Ethernet, that are used in conjunction?

  14. I'm not sure why you say it would more complicated to send the references. Just right click on the cluster to create a reference and send that. It seemed a little cleaner.

    I wasn't sure which way would be faster, create fewer data copies, etc.

    George

    Not much in it really. LabVIEW nowadays creates copies of things if there is a Y in the day and you've just exhaled. The main advantage of data over refs is that data is much easier to debug, since you can probe it directly.

  15. I didn't see this thread. Why did you start a new one? :blink:

    Nope. Static VI references are configured at design time.

    I wasn't refering to static VI refs.

    TCPIP, UDP, Bluetooth, and IRDA constants can't be initialized at run-time. You have to create a new one. (I don't have IMAQ installed, so I can't check it.) Same with queues and notifiers. Functionally, those constants (minus the Static VI Ref) on a block diagram are only useful for their type information.

    Indeed. Constants are, well, constant :P Immutable, unchangeable! If you can change it, it is not a constant; it's a variable.

    There are no prims you can wire it through that magically convert it into a valid refnum. Contrast that with a by-ref object constant. It's useful only for its type information unless you wire it through its Init method, which magically converts it into a fully functioning object.

    Indeed. Constants are very often used for type information, especially with polymorphic VIs.

    I looked at your "confusion" image and it seemed to me to be syntactically identical to:

    So I didn't think it unusual or erroneous, since I use something similar with self-initialising functional globals.

  16. Thanks for the quick reply ShaunR!

    The message string does always have the same number of values but its values aren't padded so the total length of the string can differ from time to time.

    Using "1,HS,%2s,%d,%d,%d,%d,%f" as format string did work when using the "Scan from string", except the last float, which just gave me an integer (for example 1.4 became 1 after the scan).

    When changing the 6th parameter to float, "1,HS,%2s,%d,%d,%f,%d,%f", I get an error saying that the input string wasn't in the expected format.

    The actual input string when the error message shows up is "1,HS,OK,0,21,0,0,1\r".

    Thanks again! :)

    That's because the number of format specifiers has to be exactly the same as the number of fields you are trying to extract. From your last example, the last digit is 1. If you are viewing the value in a digital indicator and "hide trailing zeros" is set, you will only see the decimal places IF the digits are non-zero.

    If that's not the reason and you want that to be 1.00, I would suggest using %.2f as the format specifier.
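    The specifier-count rule can be demonstrated with a small stand-in for "Scan From String" (a hypothetical Python helper, not the LabVIEW function): the number of converters must exactly match the number of comma-separated fields, or parsing fails, just like the mismatched format string did.

```python
def scan_from_string(s, converters):
    """Hypothetical stand-in for Scan From String: one converter per
    comma-separated field, error on any mismatch."""
    fields = s.strip().split(",")
    if len(fields) != len(converters):
        raise ValueError("number of format specifiers != number of fields")
    return [conv(f) for conv, f in zip(converters, fields)]

msg = "1,HS,OK,0,21,0,0,1\r"
# 8 fields, 8 converters: the last field parses as the float 1.0, which a
# "hide trailing zeros" display would show as just 1.
values = scan_from_string(msg, [int, str, str, int, int, int, int, float])
```

    Note the last value really is a float; only the display made it look like an integer.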

  17. Well. There are a few ways.

    If the message string is always in a fixed order and fixed length and messages are not concatenated, you can use the "Scan From String" which will give you the results straight away in the format you require.

    If it's not, then you can use the "Spreadsheet String To Array" to break up the string at the delimiters, then convert to whatever formats you like.

    • Like 1
  18. How will the DO port operation close or open specified relays, since it does not have any option to select relays at all?

    When you use the port write, the number you wire represents the bit pattern of the individual DOs. So "1" would turn on DO 0 and turn off all others, "2" turns on DO 1 (all others off), and "3" turns on DO 0 AND DO 1, etc., since 3 (in binary) is 00000011.
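    The bit decoding can be sketched like this (illustrative Python, assuming an 8-line port): bit i of the written value drives DO i.

```python
def port_value_to_dos(value, n_lines=8):
    """Decode a port-write value into the on/off state of each DO line:
    bit i of the value drives DO i (1 = relay closed, 0 = open)."""
    return [(value >> i) & 1 for i in range(n_lines)]

# 3 is binary 00000011, so DO 0 and DO 1 are on, all others off.
states = port_value_to_dos(3)
```

    Going the other way, OR together `1 << i` for each relay you want closed.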

  19. Do you have the "Database Connectivity Toolkit"?

    Have you done the following?

    Create a System DSN in Windows

    1. Click Start, point to Control Panel, double-click Administrative Tools, and then double-click Data Sources (ODBC).
    2. Click the System DSN tab, and then click Add.
    3. Click the database driver that corresponds with the database type to which you are connecting, and then click Finish.
    4. Type the data source name. Make sure that you choose a name that you can remember. You will need to use this name later.
    5. Click Select.
    6. Click the correct database, and then click OK.
    7. Click OK, and then click OK.

    Original Article.
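    Once the DSN exists, clients only need its name. A minimal sketch of building the connection string (the DSN name "MyLabVIEWdb" is made up; the Database Connectivity Toolkit's open-connection VI can take a DSN-style string like this):

```python
def dsn_connection_string(dsn, uid=None, pwd=None):
    """Build an ODBC connection string for a System DSN created as above.
    The DSN value must match the name you typed in step 4."""
    parts = ["DSN=" + dsn]
    if uid:
        parts.append("UID=" + uid)
    if pwd:
        parts.append("PWD=" + pwd)
    return ";".join(parts)

# "MyLabVIEWdb" and "admin" are illustrative, not from the original post.
conn_str = dsn_connection_string("MyLabVIEWdb", uid="admin")
```

    Keeping the DSN name in one place like this means you only update it once if the data source is renamed.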
