
  1. Okay, this is it, I'm really going to figure out how to dynamically load my instrument classes using a factory, without carrying every specific instrument class in memory when compiled. I've tried and failed in the past, but this time I'm going to spend however much time it takes to get it working. Someone critique this architecture and tell me how far off base I am.

I have a base Hardware class with must-override methods to instantiate communication with the instrument, configure the instrument by passing in a JSON string of config arguments, and tear the instrument down. I then have interfaces representing generic instrument types (DMM, Switch, Digital Input, etc.) that have overrides for their APIs. Specific hardware then inherits the appropriate interface(s): a Keithley DMM inherits DMM, an NI MIO device inherits DI, DO, AI, AO, and so on. A hardware manager class acts as the factory, instantiating the classes from a config file; I want each config entry to include a key that points to the specific class on disk. The hardware manager is passed to the operations that use the instruments, and they interact with the instruments by casting to the interface type they want.

Here are some hurdles I'm having trouble conceptualizing. I have some applications that run in different labs and perform the same task with different hardware. For example, one lab uses a Keithley DMM with a built-in switch card, while another uses a Keithley DMM plus an NI switch card. In the first case it's one instrument that inherits both DMM and Switch; in the other it's two instruments. I guess I could have two config entries, one for DMM and one for Switch, and have the factory compare addresses on instantiation: if it has already initiated communication on an address, it just points to the first instance?
And the biggest question: if the above is workable, any tips on how to get from a case structure containing every possible specific hardware class to dynamically loading them from disk, without putting them all into some subVI somewhere to guarantee they're loaded into memory at compile time?
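The dedup-on-address idea maps cleanly onto a registry keyed by the instrument address. Here's a minimal sketch of that factory in Python (easier to post than a block diagram) — the config keys, module paths, and class names are all made up for illustration, and in LabVIEW the load step would be something like Get LV Class Default Value.vi on the class path from the config entry rather than an import:

```python
import importlib

class HardwareManager:
    """Hypothetical factory: loads instrument classes by a path stored in
    config, and de-duplicates on address so one physical instrument that
    serves two roles (e.g. DMM + Switch) is only instantiated once."""

    def __init__(self):
        self._by_address = {}   # address -> live instrument instance

    def create(self, entry):
        # entry is one config record, e.g.
        # {"class": "drivers.keithley.Keithley2700", "address": "GPIB0::16"}
        addr = entry["address"]
        if addr in self._by_address:
            # communication already initiated on this address:
            # hand back the first instance instead of opening it again
            return self._by_address[addr]
        # resolve the class dynamically from the config key --
        # nothing is statically linked into the build
        module_name, _, cls_name = entry["class"].rpartition(".")
        cls = getattr(importlib.import_module(module_name), cls_name)
        inst = cls(addr)
        self._by_address[addr] = inst
        return inst
```

Callers would then cast the returned object to the interface they need (DMM, Switch, ...), exactly as described above; the single-instrument and two-instrument labs both just see two config entries.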
  2. We're going to have a LabVIEW position opening up on our test team here in Raleigh, NC. We're open to relocation, so if you've dreamed of moving here, now's a good chance. This is an entry- to mid-level engineering position at a good company with a strong management team that actually supports test and lets us do some of the cool parts of the job instead of just churning out cookie-cutter testers from templates. Shoot me a PM and I can send you an email address to send your resume to. https://careers-cree.icims.com/jobs/7479/test-engineer/job?mode=view
  3. Yeah, so I figured it out. It was me being clever. Gets me every time. I already had this framework working really well on PC applications with no issues, so I stripped out the non-cRIO stuff, plopped it in RT, and let 'er rip. To make things really easy on myself, I had created a setter/getter template for the internal data of the classes that relied on the front panel indicator for the name and type of the property. Front panel indicators don't exist in RT unless you're running in the IDE or in debug mode. I spent five hours on my own stupidity yesterday. Brian Kernighan wrote, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" and it rings ridiculously loud today. I just need to get better at automating VI building to handle my templates so I stop trying to take shortcuts there.
  4. I've got some weird stuff going on with a cRIO project I'm working on and wanted to get some opinions on it. The basic architecture is a set of classes that each run some process. The process registers with a server; the internal data of the process is held in a DVR, and the server gets access to that DVR. Clients use TCP to ask the server to do something, the server makes a call against the class's DVR and returns a response to the client.

To simplify the issue I'm seeing, I created a class that just increments an integer internally every 500 ms. The client asks the server for the current count, and the server asks the Counter class and returns the answer to the client. This works perfectly fine when running the VI in the IDE. When built, it connects and gets the JSON message back, but always gets a default value from the DVR call (zero, in this case). As soon as I open a remote debug panel to the cRIO, everything works: the count is correct and the client calls work, just like normal. As soon as I right-click and close debug, it goes back to zero. Open debug, it works; close debug, back to zero. I know the DVR isn't getting dropped, because the count continues to increment while not in debug; the process is still running happily with no issues.

Here are a few screenshots of the code:

Count class process (get the count, increment it, write it back to the DVR) - Counter Class process

You can see the DVR VIs are actually VIMs using a cast; I can't imagine that's the issue.

Server-side call - Server Side calls

All this does is get the count from the DVR (same as above), wrap it in JSON, and pass it back to the client as a JSON string. I also implemented an Echo class that ignores the process and the DVRs; it just takes whatever string the client sent to the server and passes it back with a prepended "@echo". This works when running as an executable with debug turned off, so I know the client, the server, and the server/class calls are all working as expected.
Any thoughts here would be welcome, thanks.

edit: I wired any errors coming from the variant cast out to the JSON reply. When the debug panel is open there are no errors; when it's closed, the cast throws error 91, but the In Place Element structure reading the DVR does not throw any errors. How can a variant not exist until a debugger is opened and then magically exist?

edit: the internal data dictionary is a wrapper around a variant attribute. I wired the "found?" terminal all the way out to the JSON reply: if the debugger is open the attribute is found, but not if the debugger is closed. Anyone have issues with variant attributes in Real-Time?
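For anyone following along without the screenshots, the round trip above can be sketched in Python with assumed names — a lock plays the role of the DVR's in-place access, and the server call wraps the count in JSON exactly as the cRIO code does over TCP:

```python
import json
import threading

class Counter:
    """Analogy for the Counter class in the post: a background process
    increments a count held in shared state (standing in for the DVR)."""

    def __init__(self):
        self._lock = threading.Lock()   # stand-in for the DVR's in-place element access
        self._count = 0

    def increment(self):
        # what the 500 ms process loop does: read, increment, write back
        with self._lock:
            self._count += 1

    def read(self):
        with self._lock:
            return self._count

def server_reply(counter):
    # server side: read the current count and wrap it in a JSON string
    # for the TCP client, as in the "Server Side call" screenshot
    return json.dumps({"count": counter.read()})
```

In this analogy the built-executable symptom would be `server_reply` returning `{"count": 0}` while `increment` keeps running fine — which is what makes the variant-attribute lookup (the "found?" terminal) the suspicious link rather than the DVR itself.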
  5. Hooovahh, thanks for the reply. Unfortunately, my settings object holds other objects in its class data. I tried flattening the objects to XML; however, one of these objects can grow rather large, and I get the 1803 error when unflattening it from XML.
  6. Is it possible (using scripting, I assume) to modify the default value of an object programmatically? I have an object whose default value is loaded at runtime to represent some system settings, and I've created a UI to modify those settings. Currently I manually copy the settings into the object, right-click -> Set as Default Value, then save it. I was wondering if there's a way to do this programmatically?