Posts posted by ShaunR

  1. Your proposal will work well. It's a centralised system and is proven to work well on small-scale apps.

    You could also consider a "de-centralised" system where each "module" (your serial thingies) is a producer AND consumer so that data can not only be streamed from them, but control messages can be sent to them.

    Consider, for example, that your new UI module wants to stop one (or all) of the serial modules - a common requirement. In a centralised system, the responsibility for that lies with the messaging node (your consumer) since it is the only part that is aware of the "STOP" message. At that point, things can get ugly as the app grows since now it's no longer just a consumer (just processing data), it's also a controller - so the responsibility boundary becomes blurred.

    In a decentralised system, you can (don't have to...but can) pass the message straight from the UI to one or all of the modules directly. It doesn't stop you having a centralised processing point for data being emitted from the modules. It's just a little more "modular" in terms of how everything can fit together.
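
    A rough sketch of that decentralised layout in text form (Python here, since a snippet can't show G; the module and message names are made up): each module consumes its own command queue and produces onto a shared data queue, so the UI can command any module directly while a central consumer still handles the data stream.

        import queue
        import threading
        import time

        data_q = queue.Queue()  # central stream that every module publishes to

        class SerialModule(threading.Thread):
            """Each module is a consumer (of commands) AND a producer (of data)."""
            def __init__(self, name):
                super().__init__(name=name, daemon=True)
                self.cmd_q = queue.Queue()  # per-module command queue

            def run(self):
                running = True
                while True:
                    try:
                        cmd = self.cmd_q.get_nowait()
                        if cmd == "STOP":
                            running = False
                        elif cmd == "START":
                            running = True
                    except queue.Empty:
                        pass
                    if running:
                        data_q.put((self.name, time.time()))  # stand-in for a real reading
                    time.sleep(0.1)

        modules = [SerialModule(f"COM{i}") for i in (1, 2, 3)]
        for m in modules:
            m.start()

        modules[0].cmd_q.put("STOP")  # the UI talks to one (or all) modules directly...
        for _ in range(10):           # ...while a central consumer just processes the data
            print(data_q.get())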

  2. Well. I've got no idea what you are talking about.

    However, a LV picture speaks a thousand words, and a quick glance at your code suggests you just want to detect sign changes between 2 arrays of doubles?

    If this is correct then I would do something like this:
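
    In text form, it's just an element-by-element comparison of the sign of the two arrays; a rough sketch in Python/numpy (the array values are made up):

        import numpy as np

        a = np.array([0.5, -1.2, 3.3, -0.1, 2.0])
        b = np.array([0.4,  1.1, 3.0,  0.2, -2.5])

        # True wherever the sign differs between the two arrays
        sign_changed = np.signbit(a) != np.signbit(b)
        print(sign_changed)                  # [False  True False  True  True]
        print(np.flatnonzero(sign_changed))  # indices of the sign changes: [1 3 4]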

  3. Well that the DLL call is done in the UI thread may be just a byproduct. What I was meaning to say is that the execution of the entire XNode may be forced into the UI thread. I have never bothered to look into XNodes and how they work, but it wouldn't surprise me if they are more of a quick and dirty hack added to LabVIEW than a properly designed feature, which might be one of the reasons that it never made it into a public feature. And taking the shortcut of executing XNodes in the UI thread would make creating such a feature much easier. Of course it's not ideal to do it that way but hey I have no idea what the intended use cases were so it may have made perfect sense.

    XNodes don't quite work like that (you would think they are similar to an XControl... but they aren't). They are basically pre-packaged script nodes that are programmed to generate code when executed. So the XNode in question has lots of script to create a CLN and all the function parameters to interface to the DLL. The result is that at some point (not sure exactly when, maybe after compilation, or when you press the run button) the XNode runs, then creates the code, and it is this generated code which runs in place of the XNode - in this case a CLN in the UI thread.

    It's a lot of [script] code to generate a relatively small amount of real code that could have been created as you described and wrapped in a polymorphic VI for the adapt-to-type (no need for the intermediary DLL then and it wouldn't have to execute in the UI thread).

  4. Looked at the config file port in the installer but hadn't played with it yet. So in this case, I could have the installer set up the cDAQ in MAX and configure the names of the cDAQ and the module to use? Then just use the serial number and a unique PC identifier in the file name to track usage as Shaun mentioned? Or is there something I'm missing here? Or something even easier or slicker?

    -Ian

    Depends on which way you are looking at it...

    Set it up once in MAX (make sure it all works). Export an .nce file. Add it to your project and set it in the installer. Once you build your installer, you only need to put it on a CD or USB stick or whatever and install it on any machine. So effectively you use MAX to create your hardware "template" and then roll that out with the installer, which will take care of configuring the target machine's hardware. Your software will always use the same tasks/channels regardless of which machine it is on and, as long as you put the PC ID and DEV ID somewhere (the file name is good if you just want to see it at a glance in Explorer without opening files), you'll be able to track the results to the hardware.

    I would suggest also adding the date to the file name (e.g. 2001-01-21) in that order; then when you view the results you can sort them in Explorer. Additionally, you can use the date as a directory name and store your results by date, making it easy to identify what was tested on which days. But it's just personal preference (one directory with lots of files, or a few directories with a few files each).

  5. Use the same name (in the tasks/channels/hardware configuration) for all the DAQ devices (say cDAQ).

    In your results file also save the DAQ device's serial number (you can read this using a property node) and the computer's ID (anything that identifies a unique computer - computer name, network card, IP address, HD serial, etc.). This way the executable will run on any suitcase, with any PC/DAQ (as long as they are all the same), but will save data that is traceable to a particular suitcase PC and/or a particular cDAQ.
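
    As a sketch of the traceable, sortable naming this gives you (Python illustration only; the serial number, extension and ID choices here are placeholders - in the real app the serial comes from the DAQ device property node):

        import datetime
        import os
        import socket

        daq_serial = "0x12345678"     # placeholder - read from the DAQ device property node
        pc_id = socket.gethostname()  # or network card, IP address, HD serial...

        today = datetime.date.today().isoformat()         # "YYYY-MM-DD"
        file_name = f"{today}_{pc_id}_{daq_serial}.tdms"   # date first => sorts nicely in Explorer
        results_dir = today                                # optional: one directory per day

        os.makedirs(results_dir, exist_ok=True)
        print(os.path.join(results_dir, file_name))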

  6. Benchmarked in 7.1:

    1 million iterations

    straight wire through 186 ms

    error case structure 306 ms

    so the case structure costs about 120 ns.

    Hey, just checked without any wire (so a clear error VI).

    424 ms!

    So this is most likely slower because the wired cases above benefit from buffer reuse.

    Tested in a newer version: it takes more than 500 ms and the difference between the straight wire and the error case structure is almost gone.

    Felix

    Set the VI to subroutine priority (assuming you are using a sub-VI in a for loop) and you'll probably halve those times.
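
    For reference, the per-call arithmetic behind those numbers:

        iterations = 1_000_000
        straight_wire_ms = 186   # 1e6 calls, plain wire-through
        error_case_ms = 306      # 1e6 calls, with the error case structure

        # 120 ms of extra time spread over 1e6 calls = 120 ns per call
        per_call_ns = (error_case_ms - straight_wire_ms) * 1_000_000 / iterations
        print(per_call_ns, "ns per call")   # 120.0 ns per call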

  7. Not being able to look into the XNode I can't really say much about it. But it most likely doesn't just call MoveBlock() but probably does some other things too, that are not strictly necessary in all situations. Not knowing the exact circumstances it is called in, it may do so just to err on the safe side. It's definitely more complex than just calling two LabVIEW manager functions, and possibly XNodes also are limited in the way they are executed. Wouldn't surprise me if XNodes always executed in the single-threaded UI execution system, and that could be a major performance killer.

    Well. I thought I'd get my hands dirty and take the XNode apart. You are right. It doesn't just call MoveBlock. In fact, it doesn't call it at all! It calls "GetValueByPointer" in "lvimptsl.dll" (which might call MoveBlock). But, more importantly (as you surmised), it does this in the UI thread (presumably because the DLL isn't thread-safe). So you're on a roll. That's the reason it's so slow (and totally unusable for 99% of my apps).

    But your suggestion works a treat.

  8. Don't!

    [rant]

    Yes IT has a duty to protect the precious data of your company, however they should feel the pain when they screw up. Not you!

    Recently our IT department has enabled User-Agent filtering...

    [/rant]

    Ton

    That's ok. Just write a quick proxy filter to modify it on the fly (if you're not using Firefox). Make sure you add "IT ARE ANALLY RETENTIVE NUMB-NUTS" to the version info field (in capitals so it's easy to spot in their logs).

    Oh the joys of local admin privileges :)
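
    For the curious, a minimal sketch of such a proxy in Python (plain HTTP only, no HTTPS CONNECT handling; the spoofed string and port are just examples). Point the browser's proxy setting at 127.0.0.1:8888 and outgoing requests get their User-Agent rewritten:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import Request, urlopen

        SPOOFED_AGENT = "Mozilla/5.0 (definitely a normal browser)"  # example only

        class UARewriteProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # Copy the browser's headers, minus hop-by-hop ones, and swap the User-Agent
                headers = {k: v for k, v in self.headers.items()
                           if k.lower() not in ("user-agent", "proxy-connection", "connection")}
                headers["User-Agent"] = SPOOFED_AGENT
                try:
                    with urlopen(Request(self.path, headers=headers)) as resp:
                        body = resp.read()
                        self.send_response(resp.status)
                        for k, v in resp.getheaders():
                            if k.lower() not in ("transfer-encoding", "connection", "content-length"):
                                self.send_header(k, v)
                        self.send_header("Content-Length", str(len(body)))
                        self.end_headers()
                        self.wfile.write(body)
                except Exception as exc:
                    self.send_error(502, str(exc))

        HTTPServer(("127.0.0.1", 8888), UARewriteProxy).serve_forever()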

  9. All I said above is that I 'hope' NI can't do it, and if they can then IMHO I think it means it's not 100% secure - which you just agreed with above??

    The point is, under Windows no dialogue-based password system is 100% secure, whether it's from NI or not (although I bet NI wouldn't have to spend as long as me circumventing VI passwords ;) ). Password protection is like a key to your front door - it only keeps out law-abiding citizens. Of course, I'm a law-abiding citizen, so I would never consider circumventing NI's protection.

  10. To return to a point I made a while back, if your UI and core engine cannot be on separate computers on a network, then you are not decoupled.

    Well. I think that's a bit extreme. That's like saying it's not programming unless it's POOP.

    Whilst a network interface can be used to achieve decoupling, I don't think it's a requirement - just an implementation method.

  11. Password protected VIs definitely get recompiled on load when they are not in the current LabVIEW version. I'm also not understanding why Open VI Reference should fail to return a VI reference for a password protected VI when you don't provide a password. The password really only is required to make the diagram visible, so the Open BD method and some other operations are all that should fail on such a VI reference, but not the Open VI Reference itself. Have you tried to play with the flags parameter to Open VI Reference?

    And jgcode can you enlighten me what would be the security issue with allowing to get a VI reference on a password protected VI and being able to execute the Compile method on that? I totally fail to see any security issue with that.

    The Open VI Reference will supply a valid reference (ignoring projects for now), but the "Compile" method will fail with error 1040.

    (See attached)

    Happy to stand corrected... Well, wouldn't LabVIEW need access to your block diagram in order to compile it? Meaning password protecting your VIs isn't completely secure? If I PP a VI - I want no one to access it - not even NI!

    Nothing is password protected given time and SoftICE. It just depends on how badly you want it (in most cases, for me... not badly enough... lol).

  12. Hi everyone,

    I want to perform image stitching on a set of four images taken from 4 cameras (mounted 90° apart).

    For this, I found an algorithm on the internet (PanoTools, Pano12.dll). Now the problem is, as there is not enough documentation/examples available, I'm unable to use the DLL.

    I tried using DLL Export Viewer by NirSoft to list all of the exported functions, but I'm still unaware of the parameter lists.

    And also I'm new to 'calling external code' in LabVIEW.

    Please help in this regard.

    Thank you.

    Well. It's open source, is it not? You only need to download the source and look at the function prototypes. It will also give you all the header files that you will need for passing structures.
