Posts posted by ShaunR

  1. I know what a cluster bomb is, but what is a bomb cluster? :)

    The same thing but in Arabic :P

    I can see this thread is going to get very silly, very fast...lol.

  2. Yes, I agree, there is only 1 copy made of the data. Or 0 copies if the subvi deallocates memory. But attach the array in the subvi to a control in the main vi and 3 copies are made. Given no local variables, no references, no updating -- I'm still not sure where they are all coming from.

    It sounds to me like what you're saying is that LV sees that 150MB worth of data has entered a vi from somewhere and it decides to make a 150MB buffer for that vi. Then when the data actually goes somewhere (i.e., gets wired to a control) it passes through that buffer, but that buffer is not used for making the 2 usual copies of the data for the control; 2 more copies are added. I've been saved the time of having to allocate 150MB of memory next time I run the program, but in the meantime, my 150MB of data is eating up almost half a gig of memory.

    Am I getting close??

    We may be getting close...

    Thanks for the suggestion, but I read that article before bugging you all with this. I actually do decimation on some of my graphs, but in this case I can't. It often gets used for transient analysis -- where 2 or more plots are compared in time -- and the user needs to get down to sample-level resolution.

    We had a similar problem with large images. Our solution was to view at different resolutions (i.e. decimate) and load from disk only the sections of the image the user could see as he zoomed in, rather than try to keep the whole image in memory. So the whole picture was shown at reduced resolution, but as he/she zoomed in it would reload at finer and finer granularity. You could do the same thing with a graph. After all, when viewing a whole timeseries of, let's say, 1 week, can a user really visually resolve 1 µs? As the user changes the axis, you could reload that section of data, and at some point (where you decide) you no longer decimate but load the raw data. A rough sketch of the idea is shown below.
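    To make the reload-on-zoom idea concrete, here is a minimal Python sketch (my addition, not from the original post); the file name, sample rate and MAX_POINTS threshold are placeholders, and a min/max decimation would preserve peaks better than the simple stride used here:

    ```python
    import numpy as np

    MAX_POINTS = 4000  # roughly how many samples a plot can usefully display (assumption)

    def load_for_view(record, t_start, t_stop, sample_rate):
        """Return (time, values) for the visible axis range, decimating only when
        the range holds more samples than the display can resolve."""
        i0 = max(0, int(t_start * sample_rate))
        i1 = min(len(record), int(t_stop * sample_rate))
        window = record[i0:i1]                    # only this slice is pulled off disk
        step = max(1, len(window) // MAX_POINTS)
        values = np.asarray(window[::step])       # step == 1 means raw data, no decimation
        t = i0 / sample_rate + np.arange(len(values)) * (step / sample_rate)
        return t, values

    # Keep the full record in a memory-mapped file rather than in RAM (placeholder file name).
    record = np.memmap("capture.dat", dtype=np.float64, mode="r")
    t, y = load_for_view(record, t_start=0.0, t_stop=3600.0, sample_rate=1_000_000)
    ```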

  3. I'm just looking for something to spark something in me to get me going in the right direction. I need to find a valley that occurs just before a 150 mV rise in the signal. After that I need to acquire data for several more seconds. The signal before the valley will be slowly falling and the rise after the valley will be somewhat sharper. There is no absolute value I can look for - just the valley.

    I figured I'd need to continuously sample the DAQ channel, then grab fixed chunks of the buffer and examine them. The problem with that approach is that I might grab a portion of the signal that contains the valley but not the full 150 mV rise, in which case I would miss it.

    Any ideas would be welcome.

    George

    If you can define the trough as a rising edge, a falling edge, or a drop below a certain level, you can get the analogue card to trigger the acquisition. Take a look at

    Acq&Graph Voltage-Int Clk-Analog Start w Hyst.vi in the examples and see if it will suffice.
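    For reference (my addition, not part of the original exchange), roughly the same hardware trigger can be configured from text code with the nidaqmx Python package; the device, channel, trigger source, rate and levels below are placeholders, and property names can differ between driver versions:

    ```python
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Slope

    SAMPLE_RATE = 10_000            # Hz (placeholder)
    SAMPLES = 5 * SAMPLE_RATE       # acquire ~5 s once triggered

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")        # placeholder channel
        task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                        sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=SAMPLES)
        trig = task.triggers.start_trigger
        # Rising-edge analog start trigger: fires when the signal climbs through
        # trigger_level after first dipping anlg_edge_hyst volts below it,
        # i.e. after the trough described above.
        trig.cfg_anlg_edge_start_trig(trigger_source="APFI0",   # or an AI channel, device dependent
                                      trigger_slope=Slope.RISING,
                                      trigger_level=0.10)
        trig.anlg_edge_hyst = 0.15                              # the ~150 mV rise
        data = task.read(number_of_samples_per_channel=SAMPLES, timeout=60.0)
    ```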

  4. We probably did :rolleyes:.

    Well there is another solution although it is in fact in a sense a global too. And your original solution with the IMAQ ref already goes a long way into that direction.

    Make that VI an Action Engine or, as I call them, an intelligent Global with a method selector. Call the Init method from the pre-sequence step, where it opens the resources (and that daemon stays in memory, polling the same or another Intelligent Global for the stop event). Then the Execute method or methods do whatever needs to be done on those resources, and the Close method is called from the post-sequence step; it closes all resources and sets the quit event for the daemon.

    Rolf Kalbermatter

    Indeed.

    In fact you can have the best of both worlds (an "Intelligent Global Daemon", if you like) if you make the Intelligent Global/Action Engine re-entrant with "share clone instances". You can have a boolean control that causes it either to continue running or to exit immediately, but it only has one dataspace. It means the first time you call it you set the boolean to False (it runs as a daemon because it's connected to the conditional terminal), and you can still call it in other areas with the boolean set to True to retrieve/set the data. Used sparingly, it's very useful. A loose text-language analogue is sketched below.

    Hmmm. That sort of solves the OP's problem, really. :frusty:
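    A loose Python analogue of that pattern (my addition; the names and the 100 ms poll are arbitrary): one shared dataspace, and a flag that decides whether a call stays resident as the daemon loop or just gets/sets the data and returns.

    ```python
    import threading
    import time

    # Single shared dataspace, playing the role of the clone's one shift register.
    _state = {"stop": False, "data": None}
    _lock = threading.Lock()

    def intelligent_global(action=None, value=None, run_as_daemon=False):
        """Get/Set/Stop return immediately; run_as_daemon=True stays in a polling
        loop (the daemon) until some other caller performs the Stop action."""
        if run_as_daemon:
            while True:
                with _lock:
                    if _state["stop"]:
                        return None
                    # ... poll hardware / refresh the shared data here ...
                time.sleep(0.1)
        with _lock:
            if action == "Set":
                _state["data"] = value
            elif action == "Stop":
                _state["stop"] = True
            return _state["data"]

    # First call launches the daemon (boolean set False in the LabVIEW version);
    # later calls just access the same dataspace.
    threading.Thread(target=intelligent_global,
                     kwargs={"run_as_daemon": True}, daemon=True).start()
    intelligent_global("Set", 42)
    print(intelligent_global("Get"))   # -> 42
    ```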

  5. Guys,

    I have checked, and the three ethernet ports are sitting on PCIe x1, which should give me 250MB/s. I have never played with Jumbo Frames. Is that essentially a setting for each ethernet port?

    I think it's time to ping the mobo manufacturer.

    Peter

    What's the model number? Some older PC motherboards used a PCIe-to-PCI bridge, effectively giving you the slots but not the bandwidth. The figure you are seeing reeks of PCI.

    PCIe x1 cards are relatively rare in comparison to x8 and x16. I'm surprised you have them!

    Well, but that is about the VI refnum itself. However, this does not solve the issue with other refnums opened inside that VI once that VI goes idle (even when autodispose is set to true when opening the VI ref and that VI does not close its refnum itself), and in TestStand the simplest way to open common resources is to open them in a pre-sequence step. However, that step simply runs and then stops.

    The only way to circumvent that is to keep the VI you are launching in the pre-sequence step running as a daemon and then shut it down in the post-sequence step.

    Rolf Kalbermatter

    I think we are both arguing the same point :unsure:

    As I said, the self-configuring vi (and yes, I can do the above in DAQ and IMAQ as well as VISA) keeps the refs LOCAL to the vi. If it (the one that was launched) goes idle then yes, the refs disappear, and I wouldn't have it any other way. However, if the launching vi goes idle then the vi remains running unless AutoDispose was set to false (as in the OP's case), in which case it no longer remains running and halts when the calling vi exits, and bang go your refs. If there were such a thing, it would be a "Daemon" design pattern.

    For the OP's case, this self-configuring daemon seems a bit awkward since he has no way to access the refs because he can't call it as a sub vi. That can easily be overcome with a global (oooh, dare I say it :) ) or simply by reading the ref control with a property node. However, the general intent is NOT to make the refs global but to encapsulate complexity and make an autonomous, fire-and-forget sub-system.

  7. No! Once the Top level VI, in whose hierarchy the Open/Create/Obtain LabVIEW refnum was executed, closes, that refnum is disposed without mercy. VISA is a notable exception since you can change in the LabVIEW options that VISA sessions should not be autodisposed.

    Rolf Kalbermatter

    Not if you set autodispose to true when you load the vi dynamically. Then the launched vi is responsible for closing its own refnum, so the IMAQ refs remain until you close the launched vi. Perhaps I should have started with the IMAQ example instead of the VISA one. I only chose VISA because it was the first palette item I came across (VISA autodispose is the first thing I set when installing LV, by the way).

    No you can't; if you later run a "Get" to get the image out via the reference then it will return an error, since the image reference has been closed down. Remember that images are transferred "by reference"; there is no image data in the wire. And I need to access the image in a later step.

    Similarly with Tasks: if I start a task (for instance a counter) then I can't just restart it if it has been closed down - it is continuously counting pulses.

    As I pointed out, this technique keeps the ref local to the vi. If you want to access it outside the vi then you have to transfer it using a queue or global. If you need to do this then it's not for you, and the normal functional global is more appropriate.

  8. Shaun, the problem is not with the VI ref, but with other refs opened inside the dynamic VI.

    The general rule for reference cleanup is that a reference is automatically destroyed when the top-level VI in the hierarchy where it was created stops running. In this case, the dynamic VI is the top level VI.

    What you want to do for something like this is change the hierarchy which owns the reference. One way of doing this is to make sure the LV2 global is first loaded in a hierarchy which remains in memory (e.g. by having your main VI call it first). Another is to move the reference generation to a daemon which runs continuously and stays in memory. The daemon can accept requests using mechanisms like queues or user events.

    Ahh, I see. Been a long weekend :P

    I use self-initialising VIs for exactly this (usually VISA and IMAQ). Not a million miles away from a functional global, but it keeps the reference local to the vi using it, and the reference is released when the vi is closed.

    something like this....

  9. Hello everyone.

    I have an application where I call most VIs with the "Run VI" method (Wait until Done = T, Auto Dispose Ref = F). In some of these VIs I open references (DAQmx Tasks, IMAQdx sessions etc.) that should be kept open after the VI stops running, because I need to use them later in other VIs. I am using a functional global to store the references.

    An example of this is starting a DAQ counter Task, storing the Task in a functional global. Later getting the Task from the functional global and read out the value of the counter.

    The issue is that references opened inside a VI called with the "Run VI" method are closed by LabVIEW when the VI stops running, in an attempt to avoid memory leaks (at least that is what I think it is). How can I avoid this?

    Hope you can help me out.

    /Simon

    Set autodispose ref to true and don't close the ref in the calling vi.

    http://zone.ni.com/reference/en-XX/help/371361E-01/lvprop/vi_run_vi/

  10. All,

    Good suggestions!

    I have tried teaming the adapters already. It seems that with each ethernet card connected point-to-point to an individual server ethernet card it does not give me any throughput improvement. I am not sure if I require a switch that supports aggregation to see any bandwidth in excess of ~115MB/s. I have recently tried the test with the Win32 File IO read/write benchmarks (downloaded from NI). I mapped three network drives over three point-to-point ethernet connections using the map option under Windows. Over the three drives I was writing three independent files to the server RAM disk at the same time (I will try to do single-file stitching at a later time, once I get this bandwidth issue sorted out). It seems that I have not gained any bandwidth increase, still ~115MB/s. I am not sure if the client's three ethernet cards are capped in terms of bus bandwidth. If I run the same benchmark using just one of the 1GbE links (not three at the same time), I also get 115MB/s.

    I do use Intel adapters. Yes, I would love to go to 10GbE, the problem is that the client platform so far does not allow for that upgrade.

    Any suggestions are greatly appreciated.

    Peter

    Aha.

    Sounds like you've reached the PCI bandwidth limitation, which is about 133MB/s max and about 110MB/s sustained. Not a lot you can do about that except change the motherboard for one that supports PCIe.

    PCI = 133MB/s Max.

    PCIe x1= 250MB/s Max.

    PCIe x4 = 1GB/s Max.

    PCIe x8 = 2GB/s Max.

    PCIe x16 = 4GB/s Max.
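    For context (my addition, not part of the original post), those ceilings follow directly from the bus clock and link encoding (the PCIe figures are Gen 1 rates, per direction, before protocol overhead); a quick back-of-the-envelope check:

    ```python
    # Rough theoretical maxima quoted above.
    pci = 33e6 * 32 / 8                  # 33 MHz x 32-bit parallel bus    ~= 133 MB/s
    pcie_gen1_x1 = 2.5e9 * (8 / 10) / 8  # 2.5 GT/s lane, 8b/10b encoding  ~= 250 MB/s

    print(f"PCI:       {pci / 1e6:.0f} MB/s")
    for lanes in (1, 4, 8, 16):
        print(f"PCIe x{lanes:<2}:  {lanes * pcie_gen1_x1 / 1e6:.0f} MB/s")
    ```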

  11. I'm a bit ambivalent about this article. I thought a "duct-tape programmer" was basically an unmethodical, script-kiddie type, but he describes any programmer that applies KISS. After all, if you have 2 weeks to write some code that puts a "widget" in a "doodah", why spend those 2 weeks writing code to interface to other code (that you have still to write) that puts a "widget" in a "doodah"? That seems to be the gist of the article, rather than lauding seat-of-yer-pants programming.

    From experience, there are (simplistically) 4 phases to software projects: Design, Implementation, Fire-fighting and Delivery. KISS programmers do a lot less of the fire-fighting because they have less to go wrong.

  12. All,

    I am trying to concoct a way to utilize multiple 1GbE ports for streaming a large amount of data to a server computer. Let's say I have three 1GbE point-to-point links to the server machine (and can dump data to a RAM disk for fast writes), therefore I will be link-limited. Is this at all possible? Does anyone have hints for this implementation? In the end it is a file that needs to be moved from the client to a server. Will this parallel multi-1GbE implementation give me increased data throughput?

    Example:

    The client Eths with 192.168.0.4, 192.168.0.5, 192.168.0.6

    will be directly linked to 192.168.0.1, 192.168.0.2, 192.168.0.3, i.e. .4 talks to .1 only. I guess in the end one has to run these separate processes in such a way that the file gets assembled on the server side correctly? Any way to do this dynamically for a varying number of 1GbE ports?

    Any suggestions are appreciated. Thanks,

    Peter

    Most modern adapters allow teaming with multiple ports (so you could team your server's adapters)

    http://www.intel.com...b/cs-009747.htm

    The old way used to be (under Windows) creating bridged connections. Well, I say old way - you still can, but it's more CPU-intensive than letting the adapters handle the throughput. There is an overhead involved, but it is simple to implement and scalable.

    Sounds to me like you want 10GbE :)
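    If teaming isn't an option, the application-level alternative hinted at in the question also works: stripe the file into chunks, tag each chunk with its file offset, send the chunks round-robin over however many links are open, and have the server seek() to each offset when writing. A rough client-side Python sketch (my addition; the addresses, port, chunk size and file name are placeholders, and the server side is not shown):

    ```python
    import socket, struct, itertools

    CHUNK = 4 * 1024 * 1024                     # 4 MiB per chunk (assumption)
    LINKS = [("192.168.0.1", 5000),
             ("192.168.0.2", 5000),
             ("192.168.0.3", 5000)]             # one point-to-point link per NIC

    def send_striped(path, links=LINKS):
        socks = [socket.create_connection(addr) for addr in links]
        try:
            with open(path, "rb") as f:
                for sock in itertools.cycle(socks):       # round-robin over the links
                    offset = f.tell()
                    data = f.read(CHUNK)
                    if not data:
                        break
                    # Header: file offset + payload length, so the server can
                    # seek() and write each chunk regardless of arrival order.
                    sock.sendall(struct.pack(">QI", offset, len(data)) + data)
        finally:
            for sock in socks:
                sock.close()

    send_striped("capture.dat")
    ```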

  13. I posted this at NI with no luck. Hope someone here can help out.

    I need to get continuous measurements with a PXI-4071 DMM (current measurement) and a PXI-6254 DAQ (voltage measurement), but I can't figure out how to synchronize the two measurements. I'll be sampling continuously and reading the buffer at regular intervals to compute a running average energy, so I need the two measurements to line up in time. It's possible (but not as desirable) to get the current measurement with the DAQ, but the DMM would give us a better measurement. The only way I can see to come close to what I want is to trigger both measurements, but then there will be a gap in the measurement while I read the data and set up for another trigger.

    George

    You need to use RTSI.

    http://zone.ni.com/devzone/cda/tut/p/id/4761

    Thank you for your reply. I understand that without registration, I still own the right. But for one special case, I do need a paper certification to satisfy a process. It does not seem that the copyright office accepts graphical programming languages' source code; they need something "readable". Isn't LabVIEW readable? (well, at least for LabVIEW programmers :) ). I will check out the UK office as you suggested above.

    Thanks,

    Irene

    I think they are just being pedantic.

    The purpose of registration is purely to have evidence (it's irrelevant what the evidence is, be it human-readable or not) that can be attributed to a date, so that you can prove you thought of it first. It's not like a patent (a whole different ball game), whereby a search must be made and you need to prove it is unique and has not been thought of before. The Copyright Office is merely a "vault" where you can register evidence of your copyrightable material. The evidence will only be applicable once you (as in the copyright holder) take someone to court for infringement. Then you can use whatever tools you like (including LabVIEW) to prove you are the original author.

  15. Hi,

    I hope I don't have to bother doing this, but I was asked to do so for a case. Is it possible at all to file a copyright registration for software written in LabVIEW? The copyright office needs some 10 pages of source code, but only readable, text-based code... how would a LabVIEW application get text-based source code? I said what about AutoCAD; they said AutoCAD can convert into script... Has anyone tried this before?

    Thank you,

    Irene

    Copyright is an automatic right of the author unless some other contractual obligation takes precedence. It is usually sufficient to add a copyright notice to your software. The issue only really becomes relevant if infringement takes place, and registration with a copyright office is basically registering with an authorised, third-party witnessing service that can verify the date and contents of the copyrighted material, to add ammunition to your defence.

    However. It is a powerful defence.

    The Copyright Office in the UK allows online uploading of copyright material (for a fee), and there is no restriction on the format of the uploaded content since the format is irrelevant. They even have a "Registration Update Service" so that you can upload "in development" work as and when it changes. If you are having issues with bureaucracy in your home country, I would suggest using them.

    UK Copyright Services

  16. Hi,

    Does anybody have experience creating an MUI (multilanguage user interface) in LabVIEW? Can you please share?

    We use some vis that enable dynamically changing between certain languages like English, French, German etc. I cannot share the code because it is owned by the company, but (as I wrote it) I can tell you how it works. There may be others who are not restricted from sharing. The key is really not to "hard-code" strings into the diagrams; it takes a little bit more thought but is fairly easy to do. Don't talk to me about Japanese or Chinese, however...lol.

    They take advantage of labels and captions for controls and indicators. The Label is used as the "tag" and the Caption is what is displayed to the user. Display strings are a bit more complicated, but not much.

    A create-language-file vi is used to iterate through all labels in the app and save them to a "bar"-separated spreadsheet file. We use the bar "|" symbol because it then allows the use of commas, semicolons etc. in the text. This file is then sent off to the translators and they add the words (or phrases) for the other languages on the same line, one column per language (first column English, second French, third German, etc.).

    A change-language-file vi is used to load the file and iterate through all the labels, changing the captions to the specific language. You have to check the control type because, for things like boolean indicators, it's not the caption the user sees. Job done for controls and indicators. This can be invoked at any time, and the current language is saved to a config file so that the software starts up in the last-used language.

    Display strings in the code are always preceded with a get-language-file vi that takes a string as an argument (the tag, as described above) and outputs the corresponding word/phrase in whatever language is currently active. I've been meaning to get around to including a diagram search for these tags so they can automagically be added with the create-language-file vi, but I have more important fish to fry.

    That's basically it. It shouldn't take you very long to write something yourself, or I suspect others on this forum have similar tools that they can share. A minimal sketch of the lookup side follows.
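    A minimal text-language sketch of that lookup (Python; my addition - the file name, column order and example phrases are made up): one line per English label, one bar-separated column per translation.

    ```python
    import csv

    LANGS = ["French", "German"]   # translation columns, in the order agreed with the translators

    def load_language_file(path="translations.txt"):
        """Read the '|'-separated file into {english_label: {language: phrase}}."""
        table = {}
        with open(path, encoding="utf-8", newline="") as f:
            for row in csv.reader(f, delimiter="|"):
                tag, *phrases = [cell.strip() for cell in row]
                table[tag] = dict(zip(LANGS, phrases), English=tag)
        return table

    def translate(table, tag, language):
        """Return the phrase for 'tag' in 'language'; fall back to the English tag."""
        return table.get(tag, {}).get(language, tag)

    # e.g. a line in translations.txt:  Start Measurement|Démarrer la mesure|Messung starten
    table = load_language_file()
    print(translate(table, "Start Measurement", "German"))   # -> "Messung starten"
    ```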
