
Everything posted by PA-Paul

  1. Hi All, We're a small company doing a mixture of things, and we use LabVIEW to write control software for various automated systems that we supply to our customers. I'm actually relatively new to the company (but have LabVIEW experience from my PhD), but I know that source code control has often been mentioned, yet never implemented (and, from the stories of my colleagues, it probably should have been!). One of the reasons for this is that, prior to my joining, there was only one "major" LabVIEW application and it was maintained by only two staff members.

     Now I've joined the team there are 3 of us able to use LabVIEW, and we're about to embark on a complete overhaul of that one major application (a complete bottom-up re-write of the architecture and a major cleanup of most of the processing algorithms and things that went with it). So, again, the topic of source code control has come up. None of us has used it before (at least with LabVIEW), although we're familiar with the basic concept, and we don't know where to start - particularly in terms of what software we need.

     Bearing in mind we're a small team (within a small company), we don't want to be spending thousands on licensing of some super dooper massively complex system. What we need is something that prevents "code collisions" (probably through checking code in and out) and maintains a decent backup of the project's code repository and revision history. Can anyone point me in the direction of where to start? Or recommend a suitable piece of software and how to go about starting it up? If we're going to implement source control, ideally I want it in there from the get-go on this project, but we do have a pretty tight timeline, so I can't spend months learning a whole new thing just to get this implemented... Anyway, as I say, if anyone can point me in a useful direction to get us going I'd really appreciate it! Thanks in advance for any help! Paul
  2. Hi All, Sorry to resurrect this thread, but I've just discovered something relevant which I wanted to add... In short - beware the DAQmx Read property node when using a USB-based DAQ card! The option for where to read from (i.e. relative to first read, most recent, etc.) does not work well when set to "most recent". In my application, I was acquiring data continuously at a fixed rate and then (at regular but not necessarily fixed intervals) using the "most recent" setting to try to read back only the most recent data point. I was then using this to provide feedback to control a 2nd device, which would in turn influence what was read by the DAQ card. In essence, the DAQ device was providing data for a feedback loop.

     I found that my feedback loop, instead of providing a steady state for my system, was oscillating. After much testing and discussion with a colleague, we discovered that it was because the DAQmx Read was not actually providing the most recent data sample at all, but was always one iteration behind in the feedback loop! It appears that this arises from the way data is transferred from DAQ devices over the USB bus. The card buffers some data "internally" before sending "bulk packets" across the bus to improve efficiency (as opposed to, say, a PCI-based card, which uses DMA). When I set the DAQmx Read property node to "most recent" and then read back the data, I was getting the most recent data from the DAQmx buffer, and only afterwards would DAQmx go and get the next data from the card. So on the next iteration, I still only got what was in the DAQmx buffer, which was actually sent from the card on the previous iteration of my feedback loop, meaning my calculations for control were not based on the most recent data at all but were one loop iteration out - hence the oscillation! There's a thread over on the NI forum which confirms this behaviour for USB-based devices. (A rough sketch of the equivalent read-position setting appears after this list of posts.)

     Anyways, as USB devices appear to be becoming more common for DAQ due to convenience, I thought I'd just post this up (not that this thread is really in the right forum for this anymore - any mods want to move it?). Cheers Paul
  3. I was just wondering... how does the "classes" approach lend itself to application building and distribution (i.e. generating exe files etc.)? One of the reasons we're looking at using the "call by ref" plugin approach is that we want a top-level architecture which (in the case of the new project I'm about to start work on, rather than the tuneable laser I mentioned above) would allow us to add support for new/specific hardware (function generators etc.) further down the line and simply provide end users with a new "driver" set for the hardware they're using. If I were to use the classes approach, would I have to include the "drivers" for every type of hardware that might be used? Or could I just supply the ones most appropriate for the specific configuration? (There's a small sketch of the common-interface idea after this list of posts.) Cheers for the info! Paul
  4. Thanks for the hint. My only minor issue being that I've never worked with classes at all and don't really know where to start... I'm a physicist by training and a LabVIEW coder by need (although I enjoy it too, which helps!). Are there any tutorials/resources which explain how to use/write classes? This is something which may also prove useful for another project I'm about to start on, which involves writing control software for a measurement system that will have motion controllers, function generators, scopes etc. which may also differ from system to system. But if it's something that's going to take months to learn how to do properly, I may have to find a simpler solution for the short term. Any advice greatly welcomed! Cheers Paul
  5. Hi All, I'm developing some software to control a measurement system comprising (amongst other things) a tuneable laser. The slight problem I have is that I want the same software to be compatible with different lasers (at present I have 3 different lasers I may want to use with the system). I wanted to use a plugin-type architecture, so that I can simply select which type of laser is currently connected and everything will work happily. I've started to implement this by having a set of "actions" that I want the laser to perform (such as switch on/off, set wavelength, etc.) and having a generic VI for each action which calls the appropriate driver depending on which laser is being used, and this works to an extent. The problem is that I need to perform a rapid sweep of the laser channels, which is typically achieved by running the "set wavelength" VI in a loop with an incrementing wavelength value. Doing this with the "plugin" type VIs takes much, much longer than if the driver is "hard written" into the application. I presume this is because of the continual opening and closing of the VI reference - is there a way round this? Is the plugin architecture really the way to go in this case? Any thoughts would be greatly appreciated. Cheers Paul
  6. Ok, so my method will work if you only want a single-loop application, but Jasonh's solution may be better in some regards - although actually, his code would also have the delay, because the bottom loop still has to wait for the ms timer to finish. Anyway - the interval in the approach I posted is the interval at which the temperature is sampled (assuming your temperature sampling is in the "true" case of the case structure). The way I've set it up, don't use a precision higher than 1 decimal place in the interval value, as then it won't work properly; if possible I'd stick to using integer values of interval (it's a double to avoid coercion dots in the code). If it takes your code any length of time to acquire the temperature, then there will always be a delay (even in my example) between pressing stop and the program actually stopping. I can't say why your code doesn't work when placed into mine, as I can only see a small portion of your code in the screenshot. If you need more help, the best thing to do would be to post the actual code if you can. Cheers
  7. Not sure I entirely agree with Antoine's solution... although I can see where he's coming from... However, another reason that there's a delay is that you can't interrupt the "wait ms" VI. Pressing the stop control simply instructs LabVIEW to stop the loop at the end of the current iteration. So when you press it, it still has to wait for the current iteration to finish. So, basically, there will pretty much always be some delay between pressing the stop button and the code actually stopping - it's just more noticeable in code where you have a time delay within the while loop. Attached is one way you can have code, i.e. your temperature acquisition, occurring at a set rate within a loop which runs quicker - allowing you to stop the loop "instantly" whilst still having a slow sampling rate (the same idea is sketched in text form after this list of posts). I seem to remember seeing somewhere that you're using 7.1; I can't save back that far, so the code posted is in 8.6.1, but you can see the block diagram in the attached screen grab: Download File:post-14639-1240840128.vi Hope that helps... Paul
  8. All of the fonts are set to "default".... On playing around - I just found I was set to "current font" in the font selector at the top of the page - set that back to "application" and it seems to be "back to normal" now! Thanks anyway! Paul
  9. Hi all, All of a sudden, every time I place an item on the block diagram that has a label (i.e. numeric controls, string controls etc.), the label is in "bold" typeface. And I can't seem to get it back to normal... There's no tick next to "bold" in the style selector either - am I missing something here? Any help appreciated (although I'm starting to like it, but I'd still like to know why it's doing it!!) Cheers Paul
  10. If you delete your constant, and then right click on the "Parity" terminal of the sub VI and "create constant" (or control, depending on what you need), I suspect it'll give you an enum list - select the option you want, and the "coercion" dot will disappear... The data type that the subvi is expecting is an enum, not just a simple integer - hence the coercion. Hope that helps Paul
  11. Thanks for the feedback Mark, I'll take a look back over my code with what you've said in mind. I'm going to try to do the other two example exams over the next few weeks, to give myself some more timed practice. Perhaps I'll post them in this thread when they're done... Anyways, thanks again for taking the time to help out, it's much appreciated! Cheers Paul
  12. Hiya, Just had a play with this in lv 8.6.1, the following setting for the display format (right click on the indicator, then select "Display format") should do what you want: This does assume you're using a "numeric" indicator to display your data... Hope that helps Paul
  13. QUOTE (Mark Yedinak @ Apr 10 2009, 06:58 PM) 8.5 version attached. Download File: http://lavag.org/old_files/post-14639-1239438154.zip Thanks in advance for your comments. In reply to BenD, I have been doing a lot of coding lately, which is why I'm thinking about taking the exam. I am a little worried about the timescale side of things though... I should have plenty of time to keep coding and practicing though, as I've just broken my ankle, so will be housebound for a bit and will have a chance to try out the other 2 sample papers! Anyways, thanks in advance Paul
  14. QUOTE (jdunham @ Apr 10 2009, 06:01 PM) Would that affect my writing to the device? If I send 0xC0 and that's the termination character, does it still get sent (my device is expecting a message framed with the byte 0xC0...)? Cheers Paul
  15. Sorry to hijack, but just a question on serial termination characters. I have a device which I have to communicate with over a serial interface. The messages (both sent to and received from the device) are framed with the byte 0xC0 (i.e. I send/receive a byte array that looks like C0010101010101C0, where 010101010101 is my message). Is there a way to get LabVIEW to automatically read back only the one message? i.e. something similar to using the "term char" method ShaunR mentions above? At the moment, I read back byte by byte until I've seen both the start and stop 0xC0 bytes... but my code would be neater and possibly more efficient if I didn't have to (a sketch of this framed read appears after this list of posts). Just a thought. Thanks for any info. Paul
  16. Ok, so I know people have jobs and lives... and I know it's probably not good etiquette to bump my own thread... but could any of the 23 people who've downloaded my exam solution (or anyone else who sees this and feels so inclined!) make any constructive comments? As I said in my post above, I wasn't entirely happy with the end product, but I did stop at 4 hours to make it realistic. I did notice that the solution on the NI site used the same disable technique to prevent a user selecting two cycles at once... I also spotted (in my code) that I forgot to deal properly with errors - I have an error handling case within my state machine, but in the case of an error, it simply reports it and stops the main state machine loop - leaving the GUI event handling loop hanging on... Anyway, if anyone has any comments on style or approach, please let me know! Thanks Paul
  17. QUOTE (postformac) The way I understand it, LabVIEW isn't actually converting to ASCII... you're actually entering ASCII text into the string, and in the background LabVIEW stores/sees/uses that as the associated byte value. Hence, when you send a string containing 05, you're actually sending the bytes corresponding to the ASCII characters "05" that you entered. If you right-click on any string indicator, control or constant, you will see there are options in the menu for "Normal Display", "'\' Codes Display", "Password Display" and "Hex Display". These set the way LabVIEW displays what you enter in the string in the case of indicators, and set how LabVIEW reads what you've typed into controls and constants.

     For example: in this picture, I've typed "05" into a string control set to "normal" in the top half; the 4 indicators next to it are set to display in the four modes as labelled (all 4 indicators are wired to the one control). The second control is set to display as hex, and again I've typed "05", with the indicators set up as for the first control. When I run the code, the indicators each read back from the relevant control. You can see that "05" entered as ASCII displays as "05" in normal but "3035" in hex. Enter "05" in hex mode though and it displays as 05 in hex, but as an undefined ASCII character in normal. In this example, codes display shows the same as normal since I've not entered any special characters. (The same ASCII-versus-byte distinction is sketched after this list of posts.)

     Anyway, so if you want to send the byte value 0x05 you must type it into a string control/constant set to hex display. The alternative is to create a byte array, allowing you to work with actual numeric data (helpful for the sort of thing crossrulz was talking about), and then convert that to the appropriate string to send over the serial bus... Hope that helps - I remember struggling with the same thing when I first had to start playing with serial communications (having done some GPIB stuff previously, which was all done in legible words!). Sorry, just seen the benefit of ShaunR's method - it allows you to mix ASCII strings and specific bytes in the same string control/constant... so that might be a good way to go if you're sending a mixture of things over the interface... Cheers Paul
  18. Hi All, Not sure of the best place to post this, so if a mod thinks it should be elsewhere, could they move it and let me know? Anyways, I'm thinking of taking the CLD exam at some point in the not too distant future. I just sat down and did the car wash example exam. I've attached my final code (it's in LabVIEW 8.6.1). Could anyone spare a few minutes to go over it and make any suggestions for improvement (or just general comments, anything really!)? I know there's a couple of things I wasn't 100% happy with - for example, using the "disable" property node to prevent the user selecting another purchase option midway through a cycle seemed like a bit of a cheat to me! I also wanted the position switches to reset automatically - i.e. if you set one true, the others should all set to false. But I didn't have time to do anything different on these... Thanks in advance for your help and comments! Paul Download File:post-14639-1238872409.zip
  19. Hi All, Thanks for the replies. I originally did use the "modify the driver" approach, but then I have to supply a modified driver with my system and make sure it's up to date if FTDI release new drivers for future versions of Windows etc., which is what I wanted to avoid. Minh - the problem with your suggested approach is that as soon as I close the device using the FT API "stuff" in order to use VISA, the setting is lost. I emailed FTDI and was told that the problem with the device "losing" the setting I set with the FT_SetLatencyTimer subVI was that that VI uses the D2XX driver, whereas the VISA communication is done through the VCP (Virtual COM Port) driver, hence the reset to default values. FTDI suggested either using a modified driver file or modifying the registry entry.

     Ultimately, I've gone for the latter approach. So at the beginning of my application, I check the registry entry for the driver settings; if it's not set to what I want (2 ms), I modify the key in the registry (I can do this now I've found LabVIEW's registry editing VIs!). Obviously, for each machine I should only have to write the registry value once (per FTDI device, since the registry key is device specific), but I guess it's still best to check it on each run of the application. (A rough outline of the registry check is included after this list of posts.) Thanks again for your suggestions! Paul
  20. Hi All, I have an FTDI USB-RS232 (FT232R) converter chip in my "system" to enable control of a serial device over USB. In general, it works fine; in LabVIEW I'm just treating it as a standard serial device (using the device's "virtual COM port" drivers) through VISA. However, the default driver settings for this device include a 16 ms latency. I need to send a set of commands as quickly as possible to the device, so I want to set this to a more reasonable 2 ms. I can do this quite happily through the Device Manager, but I can't really ask/expect an end user to do that! One way I've found to solve this is to create a custom version of the drivers with the default latency set to 2, but that also causes me some minor issues, as the drivers are then no longer certified etc., and it would be much nicer if the user could simply install the standard drivers (especially in case FTDI change the drivers in the future). So, I wanted to find an alternative...

     I discovered that FTDI do have a set of LabVIEW "drivers" for the product, which appear to allow me to do such things. I can happily communicate with the FTDI device using these and do things like find the currently assigned COM port number (which is nice!), and I can set the latency. The problem is that as soon as I close the connection with the device to let VISA take over and deal with the communication, the setting seems to revert to the driver default of 16 ms. I have traced the problem to VISA - if I open a connection to the device using the FTDI drivers, set the latency, close the connection, re-open the connection and check the latency, the value is maintained. If, however, I open the connection with the FTDI drivers, set the latency, close the connection, open the port with VISA, close the port with VISA and then recheck the latency, it's reverted to the default value.

     Does anyone have any experience with these devices and know if there's a way of programmatically modifying the driver or making sure VISA uses the last settings I sent the device? I suppose my only other alternative is to ditch VISA for this and re-write my application (or at least my device drivers for the thing I'm controlling) using the FTDI DLL-based code... but I'm not so sure that's such a great idea. Any help or advice would be gratefully received! Thanks Paul
  21. Hi all, Don't suppose anyone has any thoughts on this? I still can't see why the cluster breaks... Any help would be most gratefully received! Cheers Paul
  22. Ok, so I've done it now! It's amazing what you can achieve when you read the instructions (NI how-to page) - although, to be fair, it would help if the picture on that page actually showed you what it should look like!! Attached is my working version, just for you all to be amazed and astounded by... or not, of course! Download File:post-14639-1236531223.vi Thanks! Paul
  23. Hi All, I'm trying to emulate a fairly standard error dialogue type behaviour, but failing miserably! Basically, I want an error dialogue which says "such and such failed" and then has options for "Retry", "Abort" and "Details", retry and abort should exit the dialogue and pass a boolean indicating whether to retry. "Details" should expand the viewable area of the dialogue to show the error source text. Clicking again on details should return the dialogue to its original size (hiding the source text). Attached is my current version (LV 8.6.1), which is close, but not quite right. Clicking on the "details" button will increase the size, but I can't get it to go back! Any help, greatly appreciated! Download File:post-14639-1236521393.vi Thanks! Paul
  24. Hi, I like Jing... not seen that before - my turn! So, I still have a problem! I've done much the same as you did in the video, only my refnum control resides within a strict typedef cluster... this cluster is what breaks if I change the tab control typedef, as in the video below. Download File:post-14639-1236510645.swf Any more thoughts as to why this breaks? (Also, how did you embed the video into your post so it's visible without having to click an icon like mine?) Thanks!
  25. QUOTE (crelf @ Mar 7 2009, 07:18 PM) Ok, so attached is a simple example - I've thrown a couple of controls onto a front panel. The tab control is a strict typedef (file included in the zip). I've also placed a cluster of refnums (also a strict typedef) with a reference to each control, including the tab. The refnum for the tab control was created by placing a control refnum on the FP of the cluster control, and then dragging and dropping the typedef'd tab control into it. The problem I have is that now, if you make any change to the tab control typedef, the refnum cluster "breaks"... why?! As it happens, I now realise there's no need for me to make my tab control a strict typedef, as it's only used once anyway, so it's not such an issue. But I'd still be interested to know why the above happens! Thanks for your help/advice! Paul
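
The following is a minimal sketch of the "most recent sample" read setting discussed in post 2, written in Python against the nidaqmx package rather than LabVIEW; the device name "Dev1/ai0", the sample rate and the loop timing are placeholders, not details from the original post.

    # Sketch only: assumes the nidaqmx Python package and a device called "Dev1".
    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, ReadRelativeTo

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        # Continuous acquisition at a fixed rate, as in the feedback loop above.
        task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.CONTINUOUS)

        # Equivalent of setting the DAQmx Read property "RelativeTo" to
        # "Most Recent Sample" with an offset of -1, so each read returns the
        # newest sample already transferred into the PC-side buffer.
        task.in_stream.relative_to = ReadRelativeTo.MOST_RECENT_SAMPLE
        task.in_stream.offset = -1

        task.start()
        for _ in range(10):
            latest = task.read()  # on USB devices this may lag by one bulk transfer
            # ... feed `latest` into the control calculation here ...
            time.sleep(0.1)

As the post notes, on a USB device the "newest" sample in the PC-side buffer may still be one bulk transfer old, so a feedback loop built this way can end up acting on slightly stale data.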
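
A small sketch of the common-interface idea behind the "classes" question in posts 3-5, again in Python rather than LabVIEW classes; all of the names here (TunableLaser, AcmeLaser, load_driver) are made up for illustration.

    # Illustrative only: one abstract interface, interchangeable driver classes,
    # and a driver chosen once at run time.
    from abc import ABC, abstractmethod

    class TunableLaser(ABC):
        """Interface every laser driver implements."""

        @abstractmethod
        def set_output(self, enabled: bool) -> None: ...

        @abstractmethod
        def set_wavelength(self, nm: float) -> None: ...

    class AcmeLaser(TunableLaser):
        def set_output(self, enabled: bool) -> None:
            print("Acme output", "on" if enabled else "off")

        def set_wavelength(self, nm: float) -> None:
            print(f"Acme wavelength -> {nm:.3f} nm")

    def load_driver(name: str) -> TunableLaser:
        """Pick a driver at run time; supporting new hardware only means
        shipping another subclass, not rebuilding the whole application."""
        drivers = {"acme": AcmeLaser}
        return drivers[name]()

    laser = load_driver("acme")          # chosen once, e.g. from a config file
    laser.set_output(True)
    for step in range(5):                # the sweep is plain dispatch - no
        laser.set_wavelength(1550.0 + 0.1 * step)  # per-call open/close of a reference

This is also why a sweep stays fast in such a design: the driver is resolved once, and each wavelength step is an ordinary call rather than opening and closing a reference every iteration.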
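
The timing idea in posts 6 and 7 (run the loop fast so stop is responsive, but only sample at the chosen interval) translates roughly to the following Python sketch; read_temperature() and the 100 ms poll period are placeholders.

    # Sketch of a loop that spins quickly (so a stop request is honoured almost
    # immediately) while the actual measurement only runs once per `interval`.
    import time

    def read_temperature() -> float:     # stand-in for the real acquisition
        return 21.5

    def run(interval: float, stop_requested) -> None:
        last_sample = float("-inf")
        while not stop_requested():
            now = time.monotonic()
            if now - last_sample >= interval:
                print("T =", read_temperature())
                last_sample = now
            time.sleep(0.1)              # short wait: stopping never has to
                                         # sit out the full sampling interval

    # Example use: sample every 2 s, stop after roughly 5 s.
    deadline = time.monotonic() + 5
    run(2.0, lambda: time.monotonic() > deadline)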
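
For the 0xC0-framed messages in post 15, a read that discards everything up to the opening framing byte and returns the payload up to the closing one might look like this in Python with pyserial; "COM3", the baud rate and the timeout are placeholders.

    # Sketch only: assumes the pyserial package and C0 <payload> C0 framing.
    import serial

    FRAME = b"\xc0"

    def read_frame(ser: serial.Serial) -> bytes:
        """Return one message payload with the surrounding 0xC0 bytes stripped."""
        while True:                       # discard anything before the opening 0xC0
            b = ser.read(1)
            if b == FRAME:
                break
            if not b:
                raise TimeoutError("no start-of-frame byte received")
        # read_until() returns everything up to and including the closing 0xC0
        # (assuming it arrives before the timeout), so drop that last byte.
        return ser.read_until(FRAME)[:-1]

    with serial.Serial("COM3", 115200, timeout=1) as ser:
        payload = read_frame(ser)
        print(payload.hex())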
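
The ASCII-versus-byte point in post 17 can be shown in a few lines of Python; the "CMD:" framing in the last lines is just an invented example of mixing readable text with raw byte values.

    # Typing "05" as normal (ASCII) text is not the same as the single byte 0x05.
    ascii_string = "05".encode("ascii")   # what a normal-display string holds
    single_byte = bytes([0x05])           # what a hex-display string / byte array holds

    print(ascii_string.hex())   # "3035" - the characters '0' and '5'
    print(single_byte.hex())    # "05"   - the actual byte value 5

    # Mixing readable text and specific byte values, similar in spirit to
    # LabVIEW's '\' codes display:
    framed = b"CMD:" + bytes([0x05]) + b"\r\n"
    print(framed)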
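
Finally, the registry check described in post 19 might look roughly like this on Windows using Python's standard winreg module; the FTDIBUS key path and the example device ID (VID_0403+PID_6001+A12345A) are assumptions and should be checked against the actual device entry, and writing under HKLM normally requires administrator rights.

    # Sketch only: the exact key path depends on the FTDI device's VID/PID/serial.
    import winreg

    KEY = (r"SYSTEM\CurrentControlSet\Enum\FTDIBUS"
           r"\VID_0403+PID_6001+A12345A\0000\Device Parameters")

    def ensure_latency(ms: int = 2) -> None:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
            current, _ = winreg.QueryValueEx(key, "LatencyTimer")
            if current != ms:
                winreg.SetValueEx(key, "LatencyTimer", 0, winreg.REG_DWORD, ms)
                # The new value takes effect the next time the virtual COM port
                # is opened (or after the device is unplugged and replugged).

    ensure_latency(2)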