GregFreeman

Members
  • Content Count

    302
  • Joined

  • Last visited

  • Days Won

    5

GregFreeman last won the day on April 19, 2018

GregFreeman had the most liked content!

Community Reputation

16

About GregFreeman

  • Rank
    Extremely Active
  • Birthday 09/13/1986

Profile Information

  • Gender
    Male
  • Location
    Texas

LabVIEW Information

  • Version
    LabVIEW 2013
  • Since
    2008

Recent Profile Visitors

2,094 profile views
  1. Nope, we have explicit, static registrations for the controls in our subpanels -- nothing dynamic or using references. We did potentially trace things back to our error logger: we have a subVI that just throws errors into a queue, and a separate process flushes the queue every few seconds and logs them. When we aren't logging errors the problem seems to go away, so I'm not really sure what's going on. Possibly something there is blocking our UI thread and events are getting missed, but the flush and write happen within 100 ms, so it still seems a bit strange. Right now it seems the problem may not be happening in the executable. FWIW, we are also updating a good number of controls by reference with Defer Panel Updates set to true, but the VI Analyzer shows this only taking ~50 ms, so I'm not convinced that's the issue either. (A rough C# sketch of this logger pattern appears after this list.)
  2. I have an application that uses subpanels, and I am having issues where they seem to miss button clicks. This is sporadic and not isolated to one specific button; it seems to happen throughout the application on various screens. When I look in the Event Inspector window, the Value Change event never fires when these clicks are missed, so it's as if whatever is managing these events is missing them completely. After two or three clicks it will take. I know it's tough to help without a reproducible case, so I am mostly posting this to see if others have run into this behavior, because I have been spinning my wheels.
  3. Thanks very much, everyone. And I really appreciate the UML -- very helpful.
  4. Sorry for the ambiguous title...it's hard to convey the problem without a description. Right now I have an application that takes various measurements, but for now I'm going to focus on current. The issue is that there are many devices that can take current measurements, which our customer will swap out, but they don't necessarily have a parent/child relationship. Everything I think of screams "this would be easy with interfaces," but I'm really hoping there is some solution with composition. The application has a lot of situations where there is a power supply, but the customer may use a DVM to take the current measurement because they get better resolution in their results; other times they just use the current measurement the power supply reports. So I thought about having a current measurement class that is composed of a "Source" object and a "Measurement" object. In some cases the Source and Measurement objects would both be a power supply; in other cases one may be a power supply and the other a DVM. But I also want all my power supplies to inherit from BasePowerSupply and my DVMs to inherit from BaseDVM. If I want either a power supply or a DVM to serve as the measurement class, they both have to inherit from the same base class with a MeasureCurrent must-override method. However, as soon as I do that, if I also want to create a Source class, BasePowerSupply can't inherit from it too. To me this just screams having an ICurrentMeasurable interface and an ISourceable interface that classes can implement. But alas, I cannot. (See the interface sketch after this list for what I mean.) So, any suggestions are appreciated. It just seems that the more complex instruments get, the more I want interfaces, so that overlapping required functionality can be shared without coupling unrelated devices through inheritance.
  5. For some reason this isn't working for me on Windows 10. Any thoughts? I've installed the latest version and already had the 2016 runtime installed.
  6. smithd's response seems to be the general consensus, I think. Mark, this is a good quote I'm stealing: "if more than one class uses a typedef, then it belongs to neither." Interesting about the translation classes. Translating the types was actually something I considered, but I ruled it out because I thought I'd end up with too many types that were essentially duplicates of each other. I'll take a look at his presentation if I can dig it up. I started thinking about how other languages such as C# would handle this, and I realized most methods would return classes or interfaces, not structures. Then I thought about why that is decoupled, and it's because the classes being returned are not owned by any other class. That gave me my answer: make sure the typedef isn't owned by any other class, and it effectively becomes a POCO.
  7. I currently have a project that I am refactoring. There is a lot of coupling that is not sitting well with me, caused by typedefs that belong to one class getting bundled into another class, which is then fired off as event data. Effectively, I have class A with a public typedef; class B contains ClassA.typedef; and class B gets fired off in an event to class C to be handled. Class C now has a dependency on class A, which causes a lot of coupling I don't want. For my real-world example, I query a bunch of data from our MES, which results in a bunch of typedef controls on the connector panes of those VIs. Those typedefs belong to the MES class. I then want to bundle all that data into a TestConfig class and send it via an event to our Tester class. But now our Tester has a dependency on the MES. I see a few ways to handle this. The first is to move the typedefs currently in the MES class into the TestConfig class; the MES VIs will then have typedefs from the TestConfig class on their connector panes, but at least the dependency points in the correct "direction." Or I can move the typedefs out of classes altogether, but then I'm not sure of the best way to organize them. Looking for how others have handled these sorts of dependencies. (A sketch of the decoupled arrangement appears after this list.)
  8. For completeness, this is the C# code where I'm now seeing matching (slow) timing numbers.

     using System;
     using System.Diagnostics;
     using System.Runtime.InteropServices;

     namespace TestAdodbOpenTime
     {
         class Program
         {
             static void Main(string[] args)
             {
                 for (int i = 0; i < 30; i++)
                 {
                     ADODB.Connection cn = new ADODB.Connection();
                     int count = Environment.TickCount;
                     cn.Open("Provider=OraOLEDB.Oracle;Data Source=DATASOURCE;Extended Properties=PLSQLRSet=1;Pooling=true;",
                             "UID", "PWD", -1);
                     cn.Close();
                     // releasing the COM wrapper each iteration reproduces the slow, LabVIEW-like timings
                     Marshal.ReleaseComObject(cn);
                     int elapsedTime = Environment.TickCount - count;
                     Debug.WriteLine("RunTime " + elapsedTime);
                 }
             }
         }
     }

     Output:

     RunTime 218
     RunTime 62
     RunTime 47
     RunTime 31
     RunTime 63
     ...
  9. EDIT: You might be spot on, smithd. I added Marshal.ReleaseComObject(cn) in my for loop and the times match almost perfectly to the LabVIEW ActiveX implementation. I'm just confused: if that is somehow being called under the hood of the open, how would the close connection work? That reference would then be dead. That's one thing that makes me think this may be a red herring. It's definitely a good thought that didn't cross my mind. I changed the LabVIEW code to leave the connections open, but still no luck.
  10. I think I have found a fundamental issue with the DB Toolkit Open Connection: it does not seem to use connection pooling correctly. The reason I believe it's an issue with LabVIEW and ADODB ActiveX specifically is that the problem does not manifest itself using the ADODB driver in C#. This is better shown with examples; all I am doing in these examples is opening and closing connections and benchmarking the connection open time.

      ADODB and Oracle driver in LabVIEW:

      ADODB in C#:

      using System;
      using System.Diagnostics;

      namespace TestAdodbOpenTime
      {
          class Program
          {
              static void Main(string[] args)
              {
                  for (int i = 0; i < 30; i++)
                  {
                      ADODB.Connection cn = new ADODB.Connection();
                      int count = Environment.TickCount;
                      cn.Open("Provider=OraOLEDB.Oracle;Data Source=FASTBAW;Extended Properties=PLSQLRSet=1;Pooling=true;",
                              "USERID", "PASSWORD", -1);
                      cn.Close();
                      int elapsedTime = Environment.TickCount - count;
                      Debug.WriteLine("RunTime " + elapsedTime);
                  }
              }
          }
      }

      Output:

      RunTime 203
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0

      Notice how nicely the times align between the LabVIEW code leveraging the .NET driver and the C# code using ADODB: the first connection takes a moment to open, then connection pooling takes over and the connect time drops to 0. Now cue the LabVIEW ActiveX implementation, and every connection open time is pretty crummy and very sporadic. One thing I found by accident while troubleshooting: if I add a property node on the block diagram where I open a connection, and I don't close that reference, my subsequent connect times are WAY faster (between 1 and 3 ms). That is what leads me to believe this may be a bug in whatever LabVIEW does to interface with ActiveX. Has anyone seen issues like this before, or have any idea where I can look, to help me avoid wrapping up the driver myself?
  11. This may be the difference. I am currently using vi.lib\addons\database\NI_Database_API.lvlib. This particular project is in LV2013 -- ideally it will be rolled forward soon, but for now we're stuck with that version.
  12. I am running calls to various stored procedures in parallel, each with its own connection refnum. A few of these calls can take a while to execute from time to time, so in critical parts of my application I would like Cmd Execute.vi to be reentrant. Generally I handle this by making a copy of the NI library and namespacing my own version; I can then make a reentrant copy of the VI I need, save it in my own library, and commit it to version control so everyone working on the project has it. But this library is password-protected, so even a copy of it stays locked: I can't do a Save As on the VIs I need to make reentrant copies, nor can I add any new VIs to the library. Does anyone have any suggestions? I have resorted to taking NI's library, including it inside my own, and basically rewriting the VIs I need by copying the contents of the block diagram of the VI I want to "save as" and pasting them into a new VI.
  13. Well...that didn't fix it per se. But we did a build WITHOUT normalizing the string array (i.e., no code changes) and it uses drastically less memory in the EXE than in the dev environment -- we're talking about 600 MB of memory usage instead of 2.5 GB. My guess now is that having debugging enabled in some of these VIs is causing issues in the dev environment; probably copies everywhere. Either way, normalizing things is a much more memory-efficient way of doing this and a needed improvement: rather than 24 classes each holding 80k strings, many of them duplicates, we have 24 classes each holding about 2k unique strings, plus 80k integers that index into that string array. (A sketch of this normalization appears after this list.) As much as I'd love to dig into the LabVIEW memory manager to truly understand what's happening in the dev environment (not), I am just going to put this in the "no longer a problem" column and move on.
  14. Alright, I rolled back to a "bad" version, grabbed this snippet off the Idea Exchange, and I'm going to run it on all my classes. I'll see what happens...
  15. I normalized my data but was still seeing awful memory usage -- upwards of 3 GB, and I would randomly get a copy that would give me an out-of-memory error. So I went into my project settings, unmarked everything that had "Separate compiled code from source file" set, cleared the compiled object cache, and did a Save All. My memory usage dropped from 3 GB, with tons of seemingly unnecessary copies, to 1 GB just by doing this. I have on and off seen some very bizarre issues with classes and separate compiled code, and even with that setting enabled I still get lots of dirty dots anyway, so it isn't buying me much. I think I'll be staying away from it in the future.
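A minimal C# sketch of the queued error logger described in post 1, for readers unfamiliar with the pattern. The real implementation uses LabVIEW queues; the class, method names, and flush interval here are invented:

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading;

    static class ErrorLog
    {
        // Producers enqueue errors without blocking the UI.
        private static readonly ConcurrentQueue<string> Queue = new ConcurrentQueue<string>();
        private static Timer _flusher; // field reference keeps the timer alive

        public static void Report(string message) => Queue.Enqueue(message);

        // A background timer drains the queue every few seconds and appends to disk.
        public static void StartFlusher(string logPath, TimeSpan interval)
        {
            _flusher = new Timer(_ =>
            {
                while (Queue.TryDequeue(out var msg))
                    File.AppendAllText(logPath, msg + Environment.NewLine);
            }, null, interval, interval);
        }
    }

In this shape the producers never touch the disk, so a slow write can delay the flush but should not block callers -- which is why the reported symptom (UI stalls only when logging) is surprising.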
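For post 4, this is roughly the interface-based design being wished for, sketched in C# (LabVIEW 2013 has no interfaces). BasePowerSupply, BaseDVM, MeasureCurrent, ISourceable, and ICurrentMeasurable come from the post; the remaining members are invented for illustration:

    // Capability interfaces: any instrument can implement either or both.
    public interface ISourceable
    {
        void SetOutput(double volts, double currentLimit);
    }

    public interface ICurrentMeasurable
    {
        double MeasureCurrent();
    }

    // Inheritance stays within each device family...
    public abstract class BasePowerSupply : ISourceable, ICurrentMeasurable
    {
        public abstract void SetOutput(double volts, double currentLimit);
        public abstract double MeasureCurrent(); // supplies can report current too
    }

    public abstract class BaseDVM : ICurrentMeasurable
    {
        public abstract double MeasureCurrent();
    }

    // ...and the measurement composes whatever capabilities it needs.
    public class CurrentMeasurement
    {
        private readonly ISourceable _source;       // e.g., a power supply
        private readonly ICurrentMeasurable _meter; // the same supply, or a DVM

        public CurrentMeasurement(ISourceable source, ICurrentMeasurable meter)
        {
            _source = source;
            _meter = meter;
        }

        public double Run(double volts, double limit)
        {
            _source.SetOutput(volts, limit);
            return _meter.MeasureCurrent();
        }
    }

The same supply object can be passed as both constructor arguments when it does double duty as source and meter, which is exactly the flexibility single inheritance can't provide.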
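For posts 6 and 7, a hedged sketch of the decoupled arrangement: MES, TestConfig, and Tester are names from the posts, and the fields are invented. The point is that the shared type is owned by neither side, so it behaves like a POCO:

    // TestConfig is owned by neither MES nor Tester, so referencing it
    // couples each side only to this plain data type.
    public class TestConfig
    {
        public string PartNumber { get; set; }        // invented fields; the real
        public double CurrentLimitAmps { get; set; }  // data comes from MES queries
    }

    // The MES layer returns the shared type; its internals stay private.
    public class Mes
    {
        public TestConfig QueryTestConfig(string serialNumber)
        {
            // ...MES lookup elided...
            return new TestConfig { PartNumber = "ABC-123", CurrentLimitAmps = 1.5 };
        }
    }

    // The Tester handles the event data without any reference to Mes.
    public class Tester
    {
        public void OnConfigReceived(TestConfig config)
        {
            // ...run the test using config...
        }
    }

The dependency arrows now point from both Mes and Tester toward TestConfig, never between the two of them.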
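For post 13, a sketch of the string normalization described there, under the assumption that duplicates are interned into a unique-string table and the bulk data keeps only integer indices; all names are illustrative:

    using System.Collections.Generic;

    // Deduplicate a large string array (many repeats) into a small table of
    // unique strings plus one small integer index per original entry.
    public class StringTable
    {
        private readonly List<string> _unique = new List<string>();
        private readonly Dictionary<string, int> _lookup = new Dictionary<string, int>();

        // Returns the index of s, adding it to the table on first sight.
        public int Intern(string s)
        {
            if (_lookup.TryGetValue(s, out int index))
                return index;           // duplicate: reuse the existing entry
            _lookup[s] = _unique.Count;
            _unique.Add(s);
            return _unique.Count - 1;
        }

        public string Get(int index) => _unique[index];
    }

    // Usage: each class then holds ~2k unique strings and 80k integers
    // instead of 80k full strings:
    //   var table = new StringTable();
    //   int[] ids = Array.ConvertAll(rawStrings, table.Intern);

With ~2k unique strings and 80k entries, the per-class cost drops from 80k string copies to one small string table plus an int array, matching the savings the post reports.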