
Brant at VIE

Members
  • Posts: 10
  • Joined
  • Last visited

LabVIEW Information

  • Version: LabVIEW 2018
  • Since: 1995

Brant at VIE's Achievements

Newbie (1/14)

  • Week One Done
  • One Month Later
  • One Year In (Rare)

Recent Badges

0 Reputation

  1. I am hoping the collective LVOOP knowledge here can help me out. I'm having an issue with dynamically calling external child classes from an executable. The parent class is currently built into the exe. When saving those child classes with all dependencies, the parent class is also saved in the hierarchy. I think I know why, but some discussion on why this occurs would be helpful. The real problem is that when the executable runs, the child classes see that the path of their parent has changed (i.e., it is now inside the executable) and are broken. Is a possible fix to make the parent class dynamic as well? I'm wondering if anyone has tried this before I go down that route. I'd also rather not, as the parent should never change, unlike the child classes, but that may be more of an academic argument at this time. Any ideas would be helpful. Thanks!
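     For anyone who thinks better in text than in wires, here is a rough Python analogy of the plugin pattern I'm describing (names and paths are hypothetical, and this is only a sketch, not how LabVIEW actually resolves classes): the parent ships with the application, the children load from disk at run time, and each child carries a link back to its parent, which is exactly the link that breaks when the parent's apparent location moves inside the exe.

     # Rough Python analogy of the plugin pattern above; names and paths
     # are hypothetical. The parent class ships with the application, while
     # child classes are discovered on disk and loaded at run time.
     import importlib.util
     from pathlib import Path

     class Parent:
         """Base class built into the application (the 'in the exe' side)."""
         def run(self):
             raise NotImplementedError

     def load_children(plugin_dir):
         """Import each .py file in plugin_dir and collect its Parent subclasses.

         The catch mirrors the LabVIEW problem: each plugin module has to
         import Parent from wherever the host application actually exposes
         it, so every child carries a link to its parent's location.
         """
         children = []
         for path in Path(plugin_dir).glob("*.py"):
             spec = importlib.util.spec_from_file_location(path.stem, path)
             module = importlib.util.module_from_spec(spec)
             spec.loader.exec_module(module)
             for obj in vars(module).values():
                 if isinstance(obj, type) and issubclass(obj, Parent) and obj is not Parent:
                     children.append(obj)
         return children

     if __name__ == "__main__":
         for child_class in load_children("plugins"):  # hypothetical folder
             child_class().run()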
  2. I am not, but have always wanted to go. Good to see open wheel racing back at Mid-Ohio. I like the IRL's move to road courses. Those street parades (St. Petersburg) I could do without. I just noticed that it is an IRL/ALMS weekend. It should be great.
  3. Does anyone have any opinions on using one of NI's USB DAQ devices for PID control? We have a customer that might need to move from an E-series PCI card to a USB option. I have used PCMCIA cards in the past and have not been happy with the performance, but have never used a USB device before. I'm looking at the USB-6221 in particular, but would like to hear any opinions on CompactDAQ as well. Thanks. Edit: The PID loop rates are on the order of 5 Hz.
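     For what it's worth, a 5 Hz loop is comfortably software-timed, so USB latency should mostly show up as loop jitter rather than a throughput limit. Purely as an illustration of what a software-timed loop involves (this uses NI's current nidaqmx Python API, not anything from the LabVIEW 8.x era, and the device names, channels, gains, and output range are all placeholders):

     # Minimal software-timed PID sketch using NI's nidaqmx Python API.
     # "Dev1/ai0", "Dev1/ao0", the gains, and the output clamp are all
     # hypothetical; at 5 Hz the loop is software-timed, so USB latency
     # shows up as timing jitter rather than lost samples.
     import time
     import nidaqmx

     KP, KI, KD = 1.0, 0.1, 0.0   # placeholder PID gains
     SETPOINT = 2.5               # volts
     PERIOD = 0.2                 # seconds -> 5 Hz loop

     with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
         ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
         ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
         integral, prev_err = 0.0, 0.0
         while True:                              # Ctrl+C to stop
             start = time.monotonic()
             pv = ai.read()                       # one on-demand sample
             err = SETPOINT - pv
             integral += err * PERIOD
             derivative = (err - prev_err) / PERIOD
             out = KP * err + KI * integral + KD * derivative
             ao.write(max(0.0, min(5.0, out)))    # clamp to an assumed 0-5 V range
             prev_err = err
             time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))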
  4. I was wondering if anyone could help with a Vision Runtime with TestStand licensing question. I have asked NI this question 3 times and have come up with 2 different answers, but I'm hoping someone here has some hands-on experience or can at least discuss why this shouldn't work. First, I would like to use NI Vision functions in a deployed TestStand 3.5 "application". The TestStand license is a deployment license, and the LabVIEW 8.2 Runtime is used as the adapter. On the customer computer, LabVIEW Professional 8.2 and Vision Development 8.2 are also installed, but deactivated. I would like to use the Vision Runtime license, which is activated; however, I get a "not a valid license" error after deactivating Vision Development on the customer PC. I have tried this on my work PC, and the VI works fine with the Vision Runtime activated and Vision Development deactivated. I'm in the process of creating a new VM in Parallels to test this in a cleaner environment. Anyway, so far one NI salesperson says it should work, one salesperson says it shouldn't work, and tech support also says that using the Vision Runtime is not valid in this configuration. Purchasing a Vision Development license is an expensive option. Building a DLL in LabVIEW might work as well. What have any of you done in this situation? Thanks.
  5. Sorry about the late introduction. However, I would like to introduce myself. My name is actually Brant, and I have been using LabVIEW for about 11 years (from versions 3.1 to 8.2). A disclaimer: I am employed by V I Engineering in Indianapolis, IN. Any other Indiana LV users out there? Maybe we can restart the Indiana LabVIEW user's group. I hope to ask more questions here; I'm really tired of getting AEs at NI tech support who seem to have only heard or read of LabVIEW. I know, they are trying to get through the day like the rest of us. Another disclaimer: crelf is my boss, so do not be alarmed if many of my postings are "I agree, crelf!" or "Great advice, crelf!"
  6. You don't "need" to name queues either. In fact, I usually only do so to make the code more readable, but the ability to name them makes queues much more powerful. Also, queues are objects -- data, methods, encapsulation. Anyway, I don't want this to turn into a thesis on what makes queues wonderful. I do see the points of those who prefer the NI implementation; it makes objects as easy as any other data structure in LV. I wouldn't want to malloc each array either. It is just a personal preference, I guess. As Dave said, maybe the next version will have the features that some of us feel are missing.
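     To make the naming point concrete, here is a rough Python analogy (not LabVIEW's implementation, just the obtain-by-name idea): two pieces of code that never exchange a wire can still rendezvous on the same queue by agreeing on a name.

     # Sketch of obtain-by-name queue semantics: callers asking for the
     # same name get the same underlying queue object.
     import queue

     _registry = {}

     def obtain_queue(name):
         """Return the queue registered under name, creating it on first use."""
         if name not in _registry:
             _registry[name] = queue.Queue()
         return _registry[name]

     # Two independent pieces of code rendezvous on the name alone:
     producer_side = obtain_queue("error log")
     consumer_side = obtain_queue("error log")
     producer_side.put("disk full")
     print(consumer_side.get())  # -> "disk full"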
  7. This was the exact reaction of my co-workers and me as well. In fact, someone even used the term "glorified cluster". At first, being able to create classes from the project browser, inheritance, and the ease of use looked promising. The lack of pass-by-reference and of being able to name instances is a big drawback. In fact, I'll be sticking with the Endevo stuff (disclaimer: my employer sells this toolkit in the US). I guess the current implementation is closer to the dataflow paradigm. However, queues break this model as well and are one of the most powerful features in LV. NI, you were so close to getting GOOP right. At least LV 8.2 loads faster.
  8. Since I have also been looking for a Mac OS X version, I followed the link and looked around. Windows only. So why doesn't NI offer a LV evaluation for all platforms? I would think that the Mac or Linux version would be something that even current users would like to evaluate.
  9. The reason I was given was that RT couldn't auto-negotiate full duplex correctly; therefore, packets are lost, collide, etc. From several discussions out there, it looks like the 7.1.1 RT update solves this problem. I couldn't find the original article, but if you do a Google search for 2IHG7FZ8 and have Google translate the page for you, it gives you an idea (in broken English) of what your problem might be. If someone out there can find the same article in native English, that would be great. Hope this helps.
  10. In my experience, TCP communication in RT has some serious issues, but I haven't benchmarked anything after 7.0, so my advice might be out of date... But have you tried setting the Windows box to half duplex, or, better, putting a 10 Mbit hub (not a switch) in the path between the RT controller and the host? I resisted this insane idea for the longest time, and when I finally tried it, communication between the host and RT was actually faster. The RT TCP stack also had some performance issues: at one time the Maximum TCP Transfer VIs would report my maximum bandwidth to be about 504.56 kbytes/sec. It was easy to point to RT as the culprit, since the same controller still had Windows 2000 installed and, booted into Windows, could easily surpass that maximum. Again, I hope this isn't too out of date to be helpful. Please keep us informed; I was hoping NI had fixed some of these TCP issues in RT by now.
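     If anyone wants to reproduce that kind of measurement without the Maximum TCP Transfer VIs, a crude throughput probe is only a few lines. A hedged Python sketch (the address, port, and transfer size are placeholders; run it with "serve" on one machine and with no arguments on the other):

     # Rough TCP throughput probe, a stand-in for the Maximum TCP Transfer
     # VIs. HOST, PORT, and TOTAL below are placeholders.
     import socket
     import sys
     import time

     HOST, PORT = "10.0.0.2", 5000   # hypothetical RT controller address
     CHUNK = b"\x00" * 8192
     TOTAL = 8 * 1024 * 1024         # send 8 MB total

     def serve():
         """Accept one connection and discard everything it sends."""
         with socket.socket() as srv:
             srv.bind(("", PORT))
             srv.listen(1)
             conn, _ = srv.accept()
             with conn:
                 while conn.recv(65536):
                     pass

     def probe():
         """Time a bulk send and report the achieved throughput."""
         with socket.create_connection((HOST, PORT)) as sock:
             sent, start = 0, time.monotonic()
             while sent < TOTAL:
                 sock.sendall(CHUNK)
                 sent += len(CHUNK)
             sock.shutdown(socket.SHUT_WR)
             print(f"{sent / (time.monotonic() - start) / 1024:.1f} kbytes/sec")

     if __name__ == "__main__":
         serve() if "serve" in sys.argv else probe()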