Everything posted by PA-Paul

  1. Hi all, I'm writing an application with a GUI containing a tab control (amongst other things!) and one thing I want to do is initialise the "value" of the tab (and various other FP objects) when the app runs. As there are a number of things I want to initialise, I decided to create a cluster (strict typedef) of references to each FP object I might want to control from a subVI somewhere within my app. This is the first time I've really used control references, so it's taken me a bit of time to sort out.

     To start with, I created a custom control, placed a cluster, and filled it with "control refnum" controls, each appropriately named for my FP objects. This worked OK, but when using subVIs linked to these references, I could only set "generic" properties within property nodes. I then discovered that I can assign a type to the control refnums by dragging and dropping controls of the appropriate type into the refnum. Great, now I can set type-specific properties within my subVIs.

     The problem I encountered was that the tab control on my FP was saved as a strict typedef. I found that if I dragged and dropped my (strict typedef) tab control into a control refnum, and then placed that within my (strict typedef) cluster of refnums, subsequently modifying the tab control broke the refnum cluster. Is this normal? As I say, I'm quite new to this whole "refnum" thing! I have ended up using a control refnum with a generic tab control "in" it, and with "include data type" disabled in the options. This works, as I can still set tab-control-specific properties. Is that the best thing to do? I also disconnected my tab from the typedef since it's actually only ever used once, and on the FP, so it doesn't make much sense to me now to have it saved as a strict typedef! Anyway, any advice on using refnums and strict typedefs would be gladly received! Thanks, Paul
  2. QUOTE (shoneill @ Feb 25 2009, 03:29 PM) You can already get into that mess by dragging the top of the property node upwards... Perhaps it should be selectable (reversible) and the size of a typical subVI? That way you don't get error-wire bends at the default size, and you can choose whether to have the properties above or below?! Just a thought.
  3. Statically (I think?!). I've attached a picture of the code I was just working on as an example... It's part of a bigger application to control a system which contains a DAQ device. Since there is a possibility that my system will be connected to a PC which already has a DAQ device attached, I use the serial number of mine to find the device name and then populate a cluster of channel names which I'll use later on in my code. Anyway, you end up having to do several operations below the "error line", which I don't like (although functionally it doesn't matter, obviously!). As I said, it's just something that struck me; I have to admit I didn't think it through a huge amount!! Cheers! Paul
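     A minimal text-based sketch of the same lookup-by-serial-number idea, written against the nidaqmx Python API rather than the original LabVIEW diagram (the serial-number attribute name, target serial, and channel list below are assumptions for illustration, not taken from the posted code):

         import nidaqmx.system

         TARGET_SERIAL = 0x012ABCDE  # hypothetical serial number of "my" device

         def find_device_by_serial(serial):
             """Return the DAQmx device name (e.g. 'Dev2') whose serial number matches."""
             for dev in nidaqmx.system.System.local().devices:
                 # dev_serial_num is assumed here; check the attribute name in your nidaqmx version
                 if dev.dev_serial_num == serial:
                     return dev.name
             raise LookupError(f"no DAQ device with serial 0x{serial:X} found")

         def build_channel_names(device_name, channels=("ai0", "ai1", "ai2")):
             """Build the fully qualified physical channel names used later in the code."""
             return {ch: f"{device_name}/{ch}" for ch in channels}

         if __name__ == "__main__":
             dev_name = find_device_by_serial(TARGET_SERIAL)
             print(build_channel_names(dev_name))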
  4. I was just doing some coding, getting information out of a property node, and it struck me that, from a style point of view at least, property nodes are upside down. Having recently read Peter Blume's LabVIEW Style Book, one of the big conventions seems to be that error clusters should wire in and out of the bottom of subVIs... So why do we have to wire errors in and out of the top of property nodes? I like my code to have a nice continuous error line somewhere near the bottom, but as soon as you start dealing with a few properties from one node, you can't do that neatly any more. Wouldn't it make sense for the base part of the property node (i.e. the class label) to be at the bottom with the error terminals, and then you could expand the property terminals upwards? Anyway, I probably just have too much time on my hands! Paul
  5. I tried again this morning with a real DAQ device (NI USB-6211) and an external signal generator, and saw the same results as above. I then tried again, adding a semaphore to the code to protect the "Read property node plus Read VI" combination in each loop. This worked a treat. I guess, thinking about it, it makes sense: all I'm doing in my code is sending a DAQmx task NAME reference to each loop, so there is still only one DAQ task. It's also quite possible, with the two parallel loops, that the top loop's Read property node is called after the bottom loop's property node but before the bottom loop's Read VI, hence changing the properties for the bottom loop's read (which would explain what I was seeing in my graphs). Using semaphores to protect each "property + read" combination seems to work fine, and I presume it won't affect performance significantly (unless, I suppose, the bottom loop is doing a particularly long read or having to wait for new samples to be acquired into the buffer...). Anyway, since this isn't really a "Database and File IO" topic any more, can someone tell me how to get it moved to a more relevant section? I'd be interested if anyone else has any thoughts on the matter! Cheers for your support everyone! Paul
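     A small sketch of the race being described and of the semaphore fix, in plain Python threading rather than LabVIEW. The SharedTask class and its offset attribute are hypothetical stand-ins for the DAQmx task and its Read property node; the point is only that the "set property, then read" pair must be made atomic:

         import threading, time

         class SharedTask:
             """Stand-in for one DAQmx task with a single shared read pointer."""
             def __init__(self):
                 self.buffer = list(range(100_000))   # pretend acquisition buffer
                 self.offset = 0                      # the shared "Read" offset property

             def read(self, n):
                 return self.buffer[self.offset:self.offset + n]

         task = SharedTask()
         lock = threading.Lock()     # plays the role of the LabVIEW semaphore

         def latest_sample_loop():
             for _ in range(50):
                 with lock:                               # property write + read as one step
                     task.offset = len(task.buffer) - 1   # "relative to most recent sample"
                     latest = task.read(1)
                 time.sleep(0.1)

         def block_read_loop():
             position = 0                                 # this loop's own bookmark
             for _ in range(50):
                 with lock:
                     task.offset = position               # restore where this loop left off
                     block = task.read(20)
                 position += 20
                 time.sleep(0.03)

         t1 = threading.Thread(target=latest_sample_loop)
         t2 = threading.Thread(target=block_read_loop)
         t1.start(); t2.start(); t1.join(); t2.join()

     Without the lock, the first loop's offset write can land between the second loop's offset write and its read, which is exactly the "wrong data" symptom described above.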
  6. QUOTE (jdunham) Sorry, I should perhaps have explained a little more. The DAQ device in the code (Dev2/AI0) is simulated (using MAX) and produces a very slow sinusoidal variation (I don't know if this is the default behaviour for a simulated device?!). Since the sinusoid is very low frequency (compared to the 100 ms delay in the top loop), I expect both loops to produce a sinusoid. However, since the lower loop is not reading back quickly enough (its delay is set to 30 ms, and it's reading back data in blocks of 20 at a sample rate of 1 kHz), its sinusoid should appear slower than that of the top loop. This works to an extent; however, in the version I posted, the lower loop tends to produce a reasonable sinusoid with occasional "glitches", but the top loop produces a sinusoid with lots of "glitches" which, if observed over a long enough period of time, show that it's actually reading back the wrong data: When I placed the loop contents into subVIs, I think they argued a bit! There are short (1-3 s) interruptions where nothing appears to happen, and the sinusoid in the top loop became discontinuous (if it can really be called continuous in the above picture anyway!). This is the result of that: So, basically, it seems that splitting the DAQ task between two parallel loops does not force DAQmx to create a separate, independent "thread", so you can't (it would appear) access the buffer in parallel. I'm not really sure where to go next with this. What I was hoping for was a way to read data from the same DAQ task in two (or more) locations within one application (similar to this little demo), but so far at least this doesn't look possible (short of using TDMS and "buffering" the data to disk anyway...). Any more thoughts? Thanks for your help! Paul
  7. Hi, sorry to resurrect this... I'm playing with reading back data from the DAQmx buffer. What I wanted was one DAQ task - in my example, a simple AI read with continuous sampling - but I wanted to read the data back in two parallel loops. I set up the attached VI as a test. What should happen is that the top loop simply reads back the most recent sample available and places it in a shift register (building an array - not the most efficient method, I know). The second loop should read back all of the data (in blocks of 20), but I've delayed the loop so the read will ultimately fall behind the buffer. In this case, what you should see is the two waveform graphs "oscillating at different rates"... but there seems to be some kind of cross-talk going on, like the two Reads and property nodes are interfering with each other. Is what I want to do actually possible? Or did I misunderstand what was said above: QUOTE (JDunham) Any help gratefully received! Thanks Paul
  8. QUOTE (jdunham wrote) Thanks, that clears things up. For some reason, I thought the DAQ device (in my case the USB 6211) was storing the data on-board and waiting for my LabVIEW code to go and ask for the data before sending it to the PC. It makes more sense if DAQmx is getting it directly as fast as possible and I'm simply asking DAQmx for the data (it also explains how the whole multiple-reads-of-the-same-data thing can work!). One more question (sorry!!): if I set up a continuous acquisition task, like the one in the picture I posted last, but never read back any data, will DAQmx throw an error? Or will it only give the error if I try to read back samples it has overwritten in the circular buffer? For example, could I set up the continuous task, then wait an hour (or whatever) and then use the "DAQmx Read" property node to get the most recent sample? (I'm not sure why I'd want to do this, but it helps me build a better picture of how it all works!) Thanks again for the help! I'm glad I found these forums! Paul
  9. Hi, thanks for the info, I've had a look through and will have a play with that. One other DAQmx question that might help to clear things up for me. If I set up the following: the DAQ device starts acquiring data from channel 0 at a constant rate (in this case 1000 Hz). My question is: where is that data stored? I'm using a USB6211 multifunction DAQ device - is the data being streamed to my PC even if I don't call the Read function, or is it sitting in a buffer on the card waiting for me to call the Read VI? Sorry if this is a silly question; I've been using DAQ for a while, but I've never delved too deeply under the hood! Thanks for your help! Paul
  10. QUOTE (jdunham @ Feb 14 2009, 03:03 AM) I'm a little confused - could you post a quick example of what you mean about reading data more than once? Do you mean something like this: http://lavag.org/old_files/monthly_02_2009/post-14639-1234610446.png Or do you mean create two separate DAQ tasks for the same channel? Thanks for your help! Paul
  11. Hi all, I'm in the planning stages of an application and am after a bit of information. I'm doing some continuous DAQ and was wondering if it might be sensible to stream the data from the card (USB 6211) directly to disk in binary format (I've seen this a couple of times - binary being the quickest way to get the data to disk), but I also need to process that data relatively quickly after it's been acquired (the acquisition rate is 200 Hz - 1 kHz, but I need the analysis to run at a rate of at least 20 Hz). Could I continuously acquire and stream to disk in one loop, whilst periodically reading back some of the data for processing in a second loop (running slower than the DAQ loop)? Also, has anyone used the "USB Signal Stream" technology with the USB DAQ cards? The only thing I can find on it is this, from the manual: "USB Signal Stream—A method to transfer data between the device and computer memory using USB bulk transfers without intervention of the microcontroller on the NI device. NI uses USB Signal Stream hardware and software technology to achieve high throughput rates and increase system utilization in USB devices." and "USB-621x devices have four dedicated USB Signal Stream channels. To change your data transfer mechanism between USB Signal Streams and programmed I/O, use the Data Transfer Mechanism property node function in NI-DAQmx." Does that mean the DAQmx code still looks essentially the same? I mean, I still just use the DAQmx Read VIs etc. as I always have? Anyway, as I say, I'm in the planning stage, so I don't have any examples to post. Any thoughts (however random!) would be welcome! Cheers Paul
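      A rough sketch of the acquire-and-stream versus analyse split being asked about, as a generic producer/consumer in Python: one fast loop writes raw binary to disk and hands blocks to a queue, while a slower loop consumes them for analysis. The acquire() call is a placeholder for the DAQmx read; the rates and block size are assumptions chosen to match the figures mentioned above:

          import queue, struct, threading, time, math

          SAMPLE_RATE = 1000     # Hz, within the 200 Hz - 1 kHz range mentioned
          BLOCK = 50             # samples per read -> 20 blocks/s, enough for 20 Hz analysis

          analysis_q = queue.Queue()
          stop = threading.Event()

          def acquire(n):
              """Placeholder for a DAQmx read of n samples (here a fake sine wave)."""
              t = time.time()
              return [math.sin(2 * math.pi * 5 * (t + i / SAMPLE_RATE)) for i in range(n)]

          def daq_and_stream(path="stream.bin"):
              with open(path, "wb") as f:
                  while not stop.is_set():
                      block = acquire(BLOCK)
                      f.write(struct.pack("<%dd" % len(block), *block))  # raw little-endian doubles
                      analysis_q.put(block)                              # hand a copy to the analyser
                      time.sleep(BLOCK / SAMPLE_RATE)

          def analyse():
              while not stop.is_set() or not analysis_q.empty():
                  try:
                      block = analysis_q.get(timeout=0.1)
                  except queue.Empty:
                      continue
                  rms = math.sqrt(sum(x * x for x in block) / len(block))  # stand-in for the real analysis

          producer = threading.Thread(target=daq_and_stream)
          consumer = threading.Thread(target=analyse)
          producer.start(); consumer.start()
          time.sleep(2); stop.set()
          producer.join(); consumer.join()

      The queue decouples the disk-streaming rate from the analysis rate, which is the same role a LabVIEW queue would play between the two loops.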
  12. Hi all, not wishing to rush people, as I realise people have their own work to do! But I was just wondering if any of you who've downloaded my VI have had a chance to look at it, and have any comments? (I'd be interested in general comments as well as anything relating to my original question!) Thanks!! Paul
  13. Hi all, thanks for the replies. I've attached the current version of the VI to read and decode data from the device to this message. Just to add an answer for jdunham: I can't use an enum, since the numeric values are not a consecutive set. For info, the text ring is usually a strict typedef, but I disconnected it in the version I'm posting here (I wasn't sure if leaving it linked would mean that I had to upload the .ctl files as well...). Thanks in advance for any input! Paul
  14. Hi all, this is my first post here, so please be nice! I'm trying to write a device driver in LabVIEW 8.6 to control a device via standard RS232 serial comms. I have the basics down, but am looking at the best way to decode the data I receive from the device. The overall communications setup is simple: I send a byte array corresponding to the following format: END LGT OBJ PID OII VAL CHK END. (The overall meaning is not hugely important here; basically the OBJ and PID bytes define what command I'm sending - the OBJ defines which command set the command is from, and the PID value identifies the command within that set.) The device sends back a response in a similar format.

      I've written a VI which sends data to the laser, using a text ring control set up with a text description of each of the commands and the corresponding 16-bit integer value (OBJ and PID combined). I select the command I want to send from the ring, and it returns the appropriate number, which can be turned into an array of 8-bit bytes and inserted appropriately into the command string I send to the device. The device responds to each command with a similar byte array, again containing the OBJ and PID bytes. I now need a sensible way of converting those bytes back to something more useful (for example a text-based description of the reply and other relevant info such as units and things!). I did wonder if the best thing to do may be to build some kind of look-up table based on the values that could come back, but I'm not sure how efficient that really is (or the best way to actually do it!).

      One alternative I did think of was to use the original text ring that I used to send the commands - by feeding the combined OBJ and PID bytes back to the ring in indicator mode, I get an "indication" of the command that has been executed. I can then use a property node to get the actual text used in the ring for any given command. BUT... this would mean that my supposedly low-level VI is having its diagram loaded into memory all the time because of the property node - does this happen even if the indicator is hidden? Is this such a bad thing anyway? Is there a more efficient way of turning the returned bytes into something more useful?! (Just for added info, the PID byte also tells me how any values are coded and whether the previous command caused a change (i.e. if I sent a write or query type command), so there's more info than just the command name in it!) If anyone can point me in the right direction on this, be it pre-existing examples or just basic advice, I'd appreciate it! I know I can "make it work", even if I resort to using a nested case structure (the top level for decoding the OBJ byte and the inner for the PIDs), but I want to do it properly/well! Thanks in advance for any help! Paul
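      One possible shape for the look-up table idea, sketched in Python rather than LabVIEW: a dictionary keyed on the combined OBJ/PID value pulled out of the response frame. The frame layout follows the post (END LGT OBJ PID OII VAL CHK END), but every command code, description, unit, and scale factor below is a made-up placeholder, not the real command set:

          OBJ = 2   # byte positions within the response frame
          PID = 3
          VAL = 5

          COMMANDS = {
              # (obj << 8) | pid : (description, unit, scale) -- all values hypothetical
              0x0101: ("Output power readback", "mW",   0.1),
              0x0102: ("Diode temperature",     "degC", 0.01),
              0x0201: ("Interlock status",      "",     1),
          }

          def decode(frame: bytes):
              """Turn a raw response frame into (description, scaled value, unit)."""
              key = (frame[OBJ] << 8) | frame[PID]
              try:
                  description, unit, scale = COMMANDS[key]
              except KeyError:
                  raise ValueError("unknown OBJ/PID pair 0x%04X" % key)
              return description, frame[VAL] * scale, unit

          # Example: a fabricated response reporting 25.0 mW of output power
          print(decode(bytes([0x00, 0x08, 0x01, 0x01, 0x00, 0xFA, 0x00, 0x00])))

      The dictionary plays the same role as the text ring's strings, but keeps the mapping in data rather than in a nested case structure, so adding a command is one table entry rather than a new case.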