
Neil Pate

Members · Posts: 1,194 · Days Won: 112

Everything posted by Neil Pate

  1. I think the MDF directory is used when you build an installer which needs additional components. LabVIEW first looks in the MDF directory to see if a copy of the other component installer is there.
  2. So the ProgramData\National Instruments directory is getting very big on my disk, approaching 80 GB with the Update Service and installers (MDF). This, along with 40 GB in the NIFPGA directory, is getting a bit silly. My primary disk is a nice fast SSD, but it is not huge, so I want to move as much of this stuff off C:\ as possible. I have moved the Update Service stuff onto a bigger (mechanical) HDD; this is possible by changing the preferences of the NI Update Service. I don't yet know if this has actually worked, as I manually moved all the files after changing the preferences. Does anybody know if it is possible to move the NIFPGA and ProgramData\National Instruments\MDF directories somewhere else?
  3. Yes, that's what I normally do (keeping track of whether we have wrapped around or not, etc.). I have never done any benchmarks; I have always just done it that way. Quite often my circular buffers have a method to read n samples rather than the whole buffer, so I find I need to keep track of a separate read pointer.
  4. Nice work. This is such a good use of XNodes adapting to type behaviour. A couple of questions though... In the Read there is a Rotate 1D Array; is this not quite an expensive operation? Could it be removed if a Read Pointer was introduced?
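The read-pointer idea from the two posts above can be sketched in ordinary code (Python here, since LabVIEW is graphical; the class and method names are illustrative, not from the XNode in question). The point is that a separate read index lets you return n samples with a small copy instead of rotating the whole underlying array on every read:

```python
# Minimal sketch of a fixed-size circular buffer with a separate read pointer,
# assuming overwrite-oldest behaviour when the buffer is full.
class CircularBuffer:
    def __init__(self, size):
        self.data = [0] * size     # pre-allocated storage, never resized
        self.size = size
        self.write = 0             # next index to write
        self.read = 0              # next index to read (oldest unread sample)
        self.count = 0             # unread samples currently held

    def push(self, value):
        self.data[self.write] = value
        self.write = (self.write + 1) % self.size
        if self.count == self.size:
            # Buffer full: the oldest sample was just overwritten,
            # so the read pointer must advance past it.
            self.read = (self.read + 1) % self.size
        else:
            self.count += 1

    def read_n(self, n):
        # Copy out at most n samples; no rotation of self.data needed.
        n = min(n, self.count)
        out = [self.data[(self.read + i) % self.size] for i in range(n)]
        self.read = (self.read + n) % self.size
        self.count -= n
        return out
```

For example, pushing 1..5 into a 4-element buffer overwrites the 1, and `read_n(2)` then returns `[2, 3]` without touching the rest of the storage.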
  5. Not to the best of my knowledge on the "older" cRIOs. The brand-spanking-new ones do actually use Linux (the old ones use VxWorks or Phar Lap for their RTOS), so perhaps those do have a shell you can ssh into or something. Hoping to try this out soon.
  6. The LabVIEW beta normally rolls out around about January or February.
  7. We do what we must because we can.
  8. I have seen things like this where "stale" classes were inside Diagram Disable Structures, everything generally ran fine until one day the build just broke for no apparent reason. May be totally unrelated to your problem though...
  9. You have a Starcraft II avatar; already that gets you points in my book :-) I have actually approached the problem slightly differently, as I did not like the Strategy object needing to do the VISA read, and due to the asynchronous way my device sends data (all on its own, it periodically sends data). I have implemented the received data (and the parsing thereof) as a kind of Strategy pattern, but the actual reading of the characters on the serial port is done somewhere else.
  10. As someone wiser (and more sarcastic) than myself has already pointed out... Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
  11. I like to try and extend my understanding of things wherever possible, so I can make informed decisions later. This often means trying out features or design techniques I have not used in the past to see if there are better ways of accomplishing things. This is how I have evolved my style over the years. Sometimes the experiment works, sometimes it does not, but I always get to keep some knowledge from the experience. One thing I am trying to get my head around is proper OO design (forget LabVIEW for now). This is something I have some understanding of, but could certainly do with more practice; hence the original question. I agree now with Shane that this looks a lot like the Strategy pattern.
  12. the kool-aid is nice this time of year
  13. Hi All, I have a set of classes representing an instrument driver which allows for different firmware versions. The instrument can operate in certain modes, and depending on the mode it periodically returns a different number of characters. What I would have done up till now is have a "mode" enum in the parent and a single Read method, and inside that use a simple case structure to read a different number of bytes depending on the mode, and then parse the string accordingly. No problem here, very simple to implement. What I want to do now is remove the enum and make it a class (it is my understanding that having type-defined controls inside a class can lead to some weirdness). So I figure I create a Mode class (and child classes corresponding to the different modes my instrument can be in), and then at run-time change this object. Each of these mode child classes would implement a Read function, and they would know exactly how many bytes to read for their specific mode. This seems a bit weird, as I would be implementing the Read function in the Mode class, which does not feel like the right place to put it. Alternatively I can implement a BytesToRead function in each of the Mode classes and then also a Parse method. Does this sound sensible? Is this going to be complicated by the fact that my actual class holding the mode object is an abstract class?
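The second alternative in the post above (each mode exposing BytesToRead and Parse while the driver keeps the port I/O) can be sketched as a textbook Strategy pattern. This is a hedged Python analogue of the LabVIEW class design; the mode names, frame sizes, and parse results are invented for illustration:

```python
# Sketch of mode-as-Strategy: each Mode knows its frame size and parsing,
# but the Driver owns the port and does the actual read.
from abc import ABC, abstractmethod

class Mode(ABC):
    @abstractmethod
    def bytes_to_read(self) -> int: ...
    @abstractmethod
    def parse(self, raw: bytes) -> dict: ...

class SlowMode(Mode):
    def bytes_to_read(self) -> int:
        return 4  # hypothetical 4-byte frame in this mode
    def parse(self, raw: bytes) -> dict:
        return {"value": int.from_bytes(raw, "big")}

class FastMode(Mode):
    def bytes_to_read(self) -> int:
        return 2  # hypothetical 2-byte frame in this mode
    def parse(self, raw: bytes) -> dict:
        return {"value": int.from_bytes(raw, "big"), "fast": True}

class Driver:
    """Owns the port; delegates frame size and parsing to the current Mode."""
    def __init__(self, port, mode: Mode):
        self.port = port   # anything with a read(n) -> bytes method
        self.mode = mode   # swappable at run-time

    def read_measurement(self) -> dict:
        raw = self.port.read(self.mode.bytes_to_read())
        return self.mode.parse(raw)
```

Swapping `driver.mode` at run-time changes both how many bytes are read and how they are interpreted, without any case structure in the driver itself. Whether the driver class is abstract does not matter here, since it only calls the Mode interface.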
  14. If you truly want to do this from LabVIEW do yourself a favour and get a serial sniffer and capture a download that has been successfully done via the AVR tools. Doing this kind of thing from first principles is, in my experience, quite a bit of frustration finally followed by extreme satisfaction when it all works nicely. Good luck if you are trying to write the bootloader, expect much suffering!
  15. Thanks for the help everybody. I think I am going to take a little bit of time to digest this all.
  16. That should work, it certainly works fine on my PC. It is the technique used by the nifty Show VI In Folder quick-drop plugin http://decibel.ni.com/content/docs/DOC-22461
  17. I am getting there... slowly... Sometimes I think my understanding of something is like a really strong permanent magnet that resists all attempts to change it, but once it is finally changed it is changed for good. At the moment my understanding is resisting change! Can anybody furnish me with a simple example of when Preserve Run-Time Class is recommended/necessary?
  18. Thanks for the explanation Shane; I am slowly getting the picture. Am I right in thinking that Preserve Run-Time Class only really makes sense when there is a sub-VI involved? I know the help does state this in the very first line, but I must have glossed over this.
  19. Just to briefly revive an old thread: I still do not really fully understand the difference between To More Specific Class and Preserve Run-Time Class. Can somebody check my understanding for this particular scenario? I have a class hierarchy, say parent class A and child classes B and C. The actual class is only determined at run-time, so my wire is of class A. Under certain conditions I wish to run a method that is only implemented in class B, so I should use To More Specific Class rather than Preserve Run-Time Class. Is this correct? This is pretty much how I have always done it in the past; I thought it would be worth re-visiting in case I am not quite using things correctly.
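The scenario in the post above is an explicit downcast: the wire is typed as parent A, but a B-only method must be called, so the runtime type has to be checked and the cast must fail cleanly when the object is actually a C. A hedged Python analogue (class and method names invented; `isinstance` plays the role of LabVIEW's To More Specific Class returning an error for the wrong class):

```python
# Sketch of the downcast scenario: wire of type A, method only on B.
class A:
    def common(self) -> str:
        return "A.common"

class B(A):
    def b_only(self) -> str:
        # This method exists only on B, not on the parent or on C.
        return "B.b_only"

class C(A):
    pass

def use_b_feature(obj: A) -> str:
    # The analogue of To More Specific Class: check the run-time type
    # and fail if the object on the wire is not actually a B.
    if not isinstance(obj, B):
        raise TypeError("object on the wire is not a B")
    return obj.b_only()
```

Preserve Run-Time Class solves a different problem: it is about a sub-VI output terminal keeping the run-time (child) type of its input rather than decaying to the terminal's declared parent type, not about reaching a child-only method.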
  20. I build to a temporary directory (like c:\t) if I run into these kind of problems. It is a pain...
  21. Jack and Michael, if you are "listening", I think this kind of stuff would make a great topic for VI Shots live.
  22. I also see the strange chars, but mine are replaced by a ? glyph/character in Chrome.
  23. Interesting, but I am not sure how these are better than using the Get/Set Variant Attribute prims. Have you benchmarked these compared to the prims? I suspect there will be very little difference, but I am curious anyway... Edit: your VI does not actually do very much; did you perhaps forget to save default values on the variants, or am I missing the point completely? I tried to add a variant attribute using the standard LV prim and then read it back using the library call, but I could not get this to work ("an exception occurred in the external code").
  24. Simple example attached (LV2013). The callback VI remains reserved after the main VI has run. Any ideas what I am doing wrong here? Note: I have just tried to run the "Passing Data to a .NET Event Callback" shipping example and the same behaviour exists (i.e. it works fine in LV2012, but the callback remains reserved in LV2013). Test5 Callback.vi
  25. Hi All, I am dusting off some old code I use to get a Windows Notify Icon running. This code was originally developed in LV2009 I think, but recently used in a project in LV2012 with no issues. For convenience I have wrapped up the .net NotifyIcon API in several LabVIEW classes. My test code works fine, but I have some strangeness with the event callback VIs (these are invoked whenever you interact with the notify icon). Specifically, when I run the code my callback seems to change context from my project to something called <W>, and is in a reserved type state as shown in the second picture. When I stop my main VI this VI does not come back out of this state. This causes my classes to be locked in the project (reason for locking is they are apparently reserved for another application instance, which I can only presume is because of the reserved callback VI). In LV2012 when I run my main VI the callback VI does not get reserved like this, and when the main VI finishes everything goes back to normal. Any ideas here? What is the <W> context and why is it on localhost rather than My Computer?