Posts posted by ShaunR

  1. There are several main advantages of FGs over the built-in LabVIEW global, none of which are particularly awe-inspiring enough to call it a pattern, IMHO.

    1. They solve the read/write race condition problem.

    2. They can have error terminals for sequencing.

    3. They can be self initialising.

    4. They are singletons (unless you set the VI to clone).

    A couple of comments about the blog mentioned.

    I think the author missed the point about race conditions and FGs or at least didn't explain why they aren't a magic bullet to variable races.

    He states: “The ‘read, increment, write’, then, is a critical section”.

    And in the original example using global variables, it is intended to be. But the feature that protects the variable when using a FG is that it is placed inside a subVI, not that it is a FG. The exact same protection could be obtained by selecting the globals and the increment, choosing "Create SubVI", and putting that in the loops.

    However, in all the examples it is impossible to determine consistently what the output of Running Total 1 or 2 would be, since that depends on the order in which LabVIEW executes the loops, which may change with different compiles or even different execution speeds of the loops. So in reality, by using a FG we no longer have a race between the read and write operations, but we still cannot reliably predict the indicator values (although we can say that they will always increase). We now have a race condition between the loops themselves.

    The thing to bear in mind about FGs is that they are a programming solution to the global-variable read/write race condition only, and therefore an improvement over the built-in globals.

    Many, however, would argue that global variables are evil whether they are FGs or not. But they are simple to understand, easy to debug and only marginally more complex than the built-in global. You can also put error terminals on them for sequencing, rather than cluttering up your diagram with frames.

    That said, it is quite rare to find a FG as an exact replacement for a normal global. People tend to add more functionality (read, write, initialise, add, remove, etc.), effectively making them "Action Engines".
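    Since LabVIEW diagrams can't be shown in text, here is a rough Python analogue of the point above: an unprotected read-increment-write races, while wrapping the whole update in one locked call (the textual equivalent of putting it inside a non-reentrant subVI) does not. All names here are illustrative, not from any real library.

```python
import threading

# Unprotected global: each thread does read -> increment -> write,
# so updates from two threads can interleave and be lost.
counter = 0

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write (another thread may have written in between)

# FG-style protection: the whole read-modify-write happens inside one
# locked call, analogous to a non-reentrant subVI holding the value.
class FunctionalGlobal:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1
            return self._value

    def read(self):
        with self._lock:
            return self._value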

  2. ShaunR,

    You've hit the nail on the head. It is one of those "twiddle these until you get that" problems, only 800 times over! I'll check out the Taguchi Analysis. Sounds like it may hold some promise. Thanks.

    It's worth stating that for something like a voltage (which theoretically has infinite levels), I usually use the max, mid and min values from the spec as an initial starting point. Later (once you have run a few through) you will almost certainly find a much smaller range that makes the optimisation converge much more quickly, enabling you to reduce the levels to two optimum start points (these will be dictated by the component tolerances). It then just becomes a money-vs-time trade-off: 1 PC doing 800 in 5 days versus 800 PCs in 10 mins. If you have to allow for settling times, then you can get really cheeky and do multiple devices on a single machine in parallel ;)

  3. I'm a great fan of Taguchi Analysis for this type of optimisation. It is an empirical method in which you design experiments (i.e. set variables) for interconnected, multi-variable systems and iteratively derive the optimum variable settings that satisfy the criteria. It is ideal when full factorial optimisation is impractical due to the number of parameters and their dependence on each other.

    An example used in anger is here.

    I have had great success with this in the past for things like PID auto-tuning, RF amplifier setup and waveguide tuning (the sorts of places where the engineer will say "twiddle these until you get that!"). Take a look and see if it will fit with your scenario.
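    The core of the method can be sketched in a few lines of Python. This is a minimal, assumed example: the L4 orthogonal array is standard, but the `response` function and the factor levels are invented stand-ins for a real measurement (e.g. an amplifier setup error you want to minimise).

```python
# Taguchi-style screening sketch. The L4 orthogonal array covers the
# main effects of 3 two-level factors in 4 trials instead of the
# 2**3 = 8 trials a full factorial would need.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Two candidate settings per factor (e.g. min/max from the spec).
levels = [(1.0, 2.0), (10.0, 20.0), (0.1, 0.5)]

def response(a, b, c):
    # Stand-in for the real measurement ("smaller is better" here).
    return abs(a * b * c - 4.0)

def main_effects(array, levels, measure):
    """Average response at each level of each factor."""
    results = [measure(*(levels[f][row[f]] for f in range(3))) for row in array]
    effects = []
    for f in range(3):
        # Each level appears in exactly 2 of the 4 L4 rows.
        avg = [
            sum(r for row, r in zip(array, results) if row[f] == lvl) / 2
            for lvl in (0, 1)
        ]
        effects.append(avg)
    return effects

# Pick, per factor, the level with the lowest average response.
effects = main_effects(L4, levels, response)
best = [min((0, 1), key=lambda lvl: eff[lvl]) for eff in effects]
```

    In a real run you would re-centre the levels around `best` and iterate, which is the "converge much more quickly" step described in the previous post.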

  4. I'm curious, why are you supporting older versions of the protocol? What is auto negotiation in Websockets?

    Because (for example, currently) Chrome uses Hybi 10, Firefox uses Hybi 9, IE 10 will use Hybi 10, IE 9 (with the HTML5 Labs plugin) supports Hybi 6, and Safari (different flavours) supports Hixie 75, 76 or Hybi 00.

    The specs are still fluid and browsers are having difficulty keeping up. If it's to be robust, then really I have to support all of them and besides, some of my favourite stock tickers are still on the old versions :P

    Auto-negotiation:

    In the latest specs there is a negotiation phase whereby the client requests a connection and the server replies with the supported protocols. That's part of it (for the server). The other part is the brute-force client connection, whereby you iteratively try each protocol until one works. This is one of the reasons why I needed to move to a class, since with multiple connections all (potentially) talking different protocols, the reads and writes need to switch on each call and maintain their state (i.e. closed, connecting, etc.). Rather than writing a connection manager, it made sense to bundle that info with the class data. Besides, this is TCP/IP over HTTP so performance is secondary :lol:
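    The brute-force client side described above can be sketched like this (Python rather than LabVIEW, and `try_handshake` is a hypothetical stand-in for the real per-version opening handshake):

```python
# Brute-force auto-negotiation sketch: try each known protocol
# version, newest first, until one handshake succeeds.
SUPPORTED = ["hybi-17", "hybi-10", "hybi-08", "hybi-06", "hybi-00"]

class WebSocketClient:
    """Bundles per-connection state with the connection, which is the
    reason given above for moving to a class."""

    def __init__(self, try_handshake):
        self.state = "closed"      # closed -> connecting -> open
        self.version = None
        self._try_handshake = try_handshake  # stand-in for the real handshake

    def connect(self):
        self.state = "connecting"
        for version in SUPPORTED:
            if self._try_handshake(version):
                self.version = version
                self.state = "open"
                return version
        self.state = "closed"
        raise ConnectionError("no supported protocol version")
```

    For example, against a server that only speaks Hybi 6, `connect()` falls through the newer versions and settles on `"hybi-06"`, leaving the connection state as `"open"`.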

  5. No way, say it ain't so. In other news, the sky is still blue. Well, on Earth anyway, most of the time...

    I'm still pumped about this topic. Granted I got to spend zero time on implementing something like this this year (still very disappointed about that), but 2012 will be different. Yeah, that's it, different. What can I say, there's still a bit of foolish youthful optimism in me.

    Yup. It's not as clean as I usually like (can't use polymorphic VIs with dynamic terminals), but I needed to maintain state on a per-connection basis. So this is one of the rare instances where it is the better choice. It won't happen again, I promise :D (Sky's always grey over here :P )

  6. ShaunR, Will you be posting your fixed code :)

    Not sure what you mean by "fixed code". But I won't be releasing it until the API is completed and tested (it's a bit of a mess at the mo'). I'm currently up to supporting Hybi 17, 10, 8, 6 and 00 (looking at Hixie and auto-negotiation now) and have a prototype library for exporting events and commands (i.e. for remote monitoring), but still need to figure out the reciprocal methods. And (this'll surprise a few people) some of it is LVOOP. :rolleyes: Oh, and finally, you get events for errors, status, messages, etc. ;)

  7. By coincidence I'm working on a similar thing right now: Message objects via TCP. Like you, I've mostly done two VIs on the same machine (except for one brief proof-of-principle test between England and California which worked fine). The one issue I can add is the rather large size of flattened objects, especially objects that contain other objects (which might contain even more objects). Sending a simple "Hello World" as one of my Message objects flattens to an embarrassing 75 bytes, while the "SendTimeString" message in my linked post (which has a complex 7-object reply address) flattens to 547 bytes! I've just started using the ZLIB string compression (OpenG ZIP Tools) and that seems to be a help with the larger objects (compresses the 547 bytes down to 199). I've also made a custom flattening of the more common objects to get the size down ("Hello World" becomes 17 bytes).

    -- James

    If you use the transport.lib in the CR, it will give you transparent zlib compression (as well as encryption, send timestamps and throughput) across TCP/IP, UDP and Bluetooth. Might be useful.
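    The win James describes comes from DEFLATE eating the repeated type/version headers that nested flattened objects carry. A minimal sketch with Python's zlib (the payload here is an invented stand-in for a flattened object, not real LabVIEW flatten output):

```python
import zlib

# Mimic the header repetition of nested flattened objects with a
# deliberately repetitive payload (400 bytes).
flattened = b"\x00\x08LVObject" * 40

# Compress for the wire, decompress on receipt; round-trip is lossless.
compressed = zlib.compress(flattened, level=9)
restored = zlib.decompress(compressed)
```

    The more repeated structure the flattened data contains, the better the ratio, which matches the 547-to-199-byte result quoted above for the deeply nested message.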

  8. Why does it need to be disconnected? I'm asking (this seemingly inane question) because if you have to have a wire running out to a wireless router/DAQ, that won't help if you have to disconnect at the sensor terminals. Additionally, you will still have to have a power lead to the wireless device so it just complicates and moves the problem.

    If moving the problem further up the cable is OK (e.g. you have to have a cover over the hole), then perhaps just cut the cable and put a male-to-female connector in the covering, or a connector just outside the hole, enabling you to disconnect it.

    The only other (non-cable) alternative is using a battery powered device (like the Arduino as mentioned by François). You could use bluetooth or wireless (bluetooth is better for battery life, but wireless will give you a better range).

  9. I didn't say it was a problem. I said it needed to be checked. You would need a different resistor value and what's considered "high" would be different.

    Parallel ports are TTL compliant, so anything between 2.7 V and 5 V is considered "high" (conversely, 0-0.5 V is low). An LED only needs a forward voltage of about 2 V, so it's not a problem. The resistor isn't there as a potential divider; it's there to limit the current so you don't fry the port and/or the LED. For this purpose, the lower the voltage, the less current -> a good thing! A 4k7 resistor (470 with pull-ups if you want to be ultra safe) will give you about 1 mA at 5 V with no pull-ups, or, if you like, 0.7 mA at 3.3 V. If you find it's not bright enough, then a 1k will give you 5/3 mA, but I wouldn't go any lower without buffering.

    But it's not hard, and you can forget the maths: a 10k pot and an ammeter will give you the perfect values for your port. Just twiddle it (the technical term) until you get the brightness you want whilst keeping an eye on the current. You can then measure it and find a preferred value for when you "productionise" it.

    You'll be wanting to drive LCD displays in no time ;)

    3.5 Using the Parallel Port of the Computer (click on "more" to expand the article)

    And when your motherboard gets damaged because you connected something improperly, or did not properly ground, or did not properly account for potential overvoltages or voltage spikes, then we'll see what's cheaper: buying a new computer or buying a cheap digital I/O module. :P

    P.S. I always use a screwdriver as a hammer. :D

    You'll only blow the port. If you never use it, you won't miss it :P
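    The resistor numbers quoted above are just Ohm's law; a quick sanity check (the function name is illustrative):

```python
# I = V / R, reported in milliamps.
def led_current_ma(v_supply, r_ohms, v_forward=0.0):
    # Ignoring the LED forward drop (v_forward=0) gives the worst-case
    # (highest) current, which is the safe number to check against the
    # port's limit.
    return (v_supply - v_forward) * 1000 / r_ohms

# 4k7 at 5 V  -> ~1 mA
# 4k7 at 3.3 V -> ~0.7 mA
# 1k at 5 V   -> ~5 mA (don't go lower without buffering)
```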

  10. I saw the Hello World example and hit Back. :blink: The scariest thing about languages like this is that someone has to invent them ...

    The use of encryption is just obnoxious, though.

    The scariest thing (IMHO) is that people go to the effort :P

    This is a fun one though.

    LOLCODE

        HAI
        CAN HAS STDIO?
        PLZ OPEN FILE "LOLCATS.TXT"?
        AWSUM THX
          VISIBLE FILE
        O NOES
          INVISIBLE "ERROR!"
        KTHXBYE

  11. This would actually need to be checked. If a computer comes with a parallel port it's likely to be 3.3V, not 5V. I know the old Dells we still have in the lab are 3.3V parallel ports.

    Why is 3.3 V a problem?

    In the end, it's probably better to go with an off-the-shelf cheap USB-based digital I/O module. There's tons of these on the market.

    Chicken :lol: Seriously though. This is kindergarten stuff. But if you've never used a screwdriver as a chisel, then maybe it's better to just throw money at it. :D

    melman.jpg

    Not one of mine by the way :P

  12. Can LabVIEW control LEDs via the PC parallel port (old printer port)?

    Much better than serial (8-bit bidirectional, i.e. digital inputs OR outputs :) ). My favourite low-cost digital IO. Fantastic for foot-switches and system monitoring, and essentially free. Unfortunately, not many PCs come with one nowadays.

    http://digital.ni.com/public.nsf/allkb/B937AC4D8664E37886257206000551CB

    There are also a couple of examples in the "Example" finder.

    You have to check whether your motherboard already has pull-up resistors (most do, some don't). Then you can connect 5 V LEDs directly, or just short the lines to ground (if using them as digital inputs). Note that the logic is reversed, since you sink to ground THROUGH the IO line to light an LED. I always stick a transistor in there too, to be on the safe side, since if you get it wrong...you blow the port. It also inverts the logic so I don't get confused (which happens regularly).

    http://www.beyondlogic.org/spp/parallel.htm
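    The "logic is reversed" point above trips people up, so here is a tiny sketch of it. Everything here is illustrative: `write_port` is a hypothetical stand-in for whatever port I/O call you actually use, and 0x378 is just the classic LPT1 base address.

```python
# The data lines sink current to light the LEDs, so a 0 bit = LED on.
def data_byte_for_leds(led_mask):
    """Return the byte to write so exactly the LEDs in led_mask light."""
    return (~led_mask) & 0xFF

def light_leds(write_port, led_mask):
    # write_port(address, value) is a placeholder for a real port-write
    # call (inpout32, a kernel driver, etc.).
    write_port(0x378, data_byte_for_leds(led_mask))
```

    So to light only the LED on data bit 0, you actually write 0b11111110. Buffering through a transistor, as suggested above, flips the logic back.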

  13. I've been using 2011x64 a bit lately, and the speed hasn't been an issue at all. Now, the functions don't always work, but that's a separate issue. What have you found?

    Not an issue, comparatively: x64 is slower in most (all?) of my use cases than x32 (regardless of LV version), so I was simply implying: compare apples with apples.

    Don't get me started on the "don't always work". I still can't back-save to <8.5, and 2011 x32 has just crashed on start-up ever since it was installed :shifty:
