Everything posted by GregFreeman

  1. Got it, this makes sense. Thanks! I suppose that when I created the .c file from this output I just didn't correctly reverse engineer the data structures in terms of what LabVIEW wanted: TD1 holds a TD2, not a TD2Handle. Looking at it again with your clarification, it makes sense (a minimal indexing sketch follows below):

     typedef struct {
         int32_t numPorts;
         LStrHandle Sn;
     } TD2;

     typedef struct {
         int32_t dimSize;
         TD2 elt[1];
     } TD1;

     typedef TD1 **TD1Hdl;
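     A minimal sketch of walking such a handle once it is laid out this way, assuming the TD1/TD2 definitions above plus the standard LStrLen/LStrBuf macros from extcode.h (illustration only, not code from the thread):

     /* Iterate a TD1Hdl whose TD2 elements are stored inline (not as handles). */
     static void walk_ports(TD1Hdl hdl)
     {
         if (!hdl || !*hdl)
             return;                      /* NULL is LabVIEW's canonical empty array */
         for (int32_t i = 0; i < (*hdl)->dimSize; i++) {
             TD2 *e = &(*hdl)->elt[i];    /* inline element, no extra dereference */
             int32_t ports = e->numPorts;
             LStrHandle sn = e->Sn;       /* bytes are LStrBuf(*sn), length LStrLen(*sn) */
             (void)ports; (void)sn;       /* placeholder: use the values here */
         }
     }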
  2. I've taken this one step further, because I realized I will need to return more than just an array of strings, but instead an array of clusters. I have modified the DLL, and the sprintf statement seems to output the correct values, but I'm getting garbage back in LabVIEW. My best guess is that it has something to do with what my handles are pointing at, but I haven't been able to figure out the issue. (The cluster typedefs this code relies on are sketched after the listing.)

     /* Free an enumeration linked list */
     void EXPORT_API iir_usb_relay_device_free_enumerate(IIR_USB_RELAY_DEVICE_INFO_T* info)
     {
         //usb_relay_device_free_enumerate((struct usb_relay_device_info*)info);
         IIR_USB_RELAY_DEVICE_INFO_T *t;
         if (info) {
             while (info) {
                 t = info;
                 info = info->next;
                 free(t->serial_number);
                 free(t);
             }
         }
     }

     static MgErr resize_array_handle_if_required(DevInfoArrayHdl* hdl, const int32 requestedSize, int32* currentSize)
     {
         MgErr err = mgNoErr;
         if (requestedSize >= *currentSize) {
             if (*currentSize)
                 *currentSize = *currentSize << 1;
             else
                 *currentSize = 8;
             err = NumericArrayResize(uPtr, 1, (UHandle*)hdl, *currentSize);
             for (int i = 0; i < *currentSize; i++) {
                 *((**hdl)->elm + i) = (DevInfoHandle)DSNewHClr(sizeof(DevInfo));
             }
         }
         return err;
     }

     static MgErr add_dev_info_to_labview_array(DevInfoHandle* pH, const IIR_USB_RELAY_DEVICE_INFO_T* info)
     {
         MgErr err = mgNoErr;
         int len, i = 0;
         (**pH)->iir_usb_relay_device_type = info->type;
         len = strlen(info->serial_number);
         err = NumericArrayResize(uB, 1, (UHandle*)&((**pH)->elm), len);
         if (!err) {
             MoveBlock(info->serial_number, LStrBuf(*(**pH)->elm), len);
             LStrLen(*(**pH)->elm) = len;
         }
         return err;
     }

     static void free_unused_array_memory(DevInfoArrayHdl* hdl)
     {
         int n, i = 0;
         DevInfoHandle* pH = NULL;
         if (*hdl) {
             /* If the incoming array was bigger than the new one, make sure to deallocate
                superfluous strings in the array! This may look superstitious but is a very
                valid possibility, as LabVIEW may decide to reuse the array from a previous
                call to this function in any Call Library Node instance! */
             n = (**hdl)->cnt;
             for (pH = (**hdl)->elm + (n - 1); n > i; n--, pH--) {
                 if (*pH) {
                     DSDisposeHandle(*pH);
                     *pH = NULL;
                 }
             }
         }
     }

     IIR_USB_RELAY_DEVICE_INFO_T EXPORT_API * iir_usb_relay_device_enumerate(void)
     {
         //return (IIR_USB_RELAY_DEVICE_INFO_T*)usb_relay_device_enumerate();
         IIR_USB_RELAY_DEVICE_INFO_T* ptr = NULL;
         IIR_USB_RELAY_DEVICE_INFO_T* deviceInfo = NULL;
         IIR_USB_RELAY_DEVICE_INFO_T* prev = NULL;
         int len = 0;
         const char* sn[] = { "abcd", "efgh", "ijkl", NULL };
         IIR_USB_RELAY_DEVICE_TYPE deviceType[] = {
             IIR_USB_RELAY_DEVICE_ONE_CHANNEL,
             IIR_USB_RELAY_DEVICE_TWO_CHANNEL,
             IIR_USB_RELAY_DEVICE_FOUR_CHANNEL
         };
         for (int j = 0; sn[j]; j++) {
             IIR_USB_RELAY_DEVICE_INFO_T* info = (IIR_USB_RELAY_DEVICE_INFO_T*)malloc(sizeof(IIR_USB_RELAY_DEVICE_INFO_T));
             len = (int)strlen(sn[j]) + 1;
             info->serial_number = (unsigned char*)malloc(len);
             info->type = deviceType[j];
             memcpy(info->serial_number, sn[j], len);
             info->next = NULL;
             if (!deviceInfo) {
                 deviceInfo = info;
             } else {
                 prev->next = info;
             }
             prev = info;
         }
         return deviceInfo;
     }

     int EXPORT_API iir_get_device_info(DevInfoArrayHdl *arr)
     {
         MgErr err = mgNoErr;
         IIR_USB_RELAY_DEVICE_INFO_T* ptr = NULL, *prev = NULL;
         DevInfoHandle* pDevInfo = NULL;
         IIR_USB_RELAY_DEVICE_INFO_T* deviceInfo = (IIR_USB_RELAY_DEVICE_INFO_T*)iir_usb_relay_device_enumerate();
         int i = 0, n = (*arr) ? (**arr)->cnt : 0;

         for (ptr = deviceInfo; ptr; ptr = ptr->next, i++) {
             err = resize_array_handle_if_required(arr, i, &n);
             if (err)
                 break;
             pDevInfo = (**arr)->elm + i;
             err = add_dev_info_to_labview_array(pDevInfo, ptr);
             if (err)
                 break;
         }
         iir_usb_relay_device_free_enumerate(deviceInfo);
         free_unused_array_memory(arr);

         DevInfoHandle* hdl2;
         char buf[1024];
         (**arr)->cnt = i;
         for (hdl2 = (**arr)->elm, i = 0; i < (**arr)->cnt; i++, hdl2++) {
             sprintf_s(buf, 1024, "%s: %d", (*(**hdl2)->elm)->str, (**hdl2)->iir_usb_relay_device_type);
         }
         return err;
     }
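     The listing above does not show the cluster typedefs it relies on. Reconstructed from how they are used, they would look roughly like this (an assumption, not the actual header from the post):

     /* Assumed typedefs, inferred from usage above; treat as a guess at the layout. */
     typedef struct {
         int32      iir_usb_relay_device_type;  /* device type enum value */
         LStrHandle elm;                        /* serial number string handle */
     } DevInfo, *DevInfoPtr, **DevInfoHandle;

     typedef struct {
         int32         cnt;                     /* number of clusters in the array */
         DevInfoHandle elm[1];                  /* handles to the DevInfo clusters */
     } DevInfoArray, *DevInfoArrayPtr, **DevInfoArrayHdl;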
  3. Very helpful, now it's working. Another big problem I realized was that I had the CLFN set to the WINAPI calling convention instead of C 🤦‍♂️
  4. Ah, that makes sense. It needs to be a pointer to the handle... I am not crashing mid-function anymore, but I do crash when it reaches the end. I have attached the code as it stands now. I'm creating a linked list myself since I don't have access to the hardware. I am seeing something strange in the Visual Studio debugger: notice that the value of the str variable in hdl has some junk after "efgh". It's making me think something in the resize and MoveBlock isn't quite right, but I can't figure out what.

     int EXPORT_API iir_get_serial_numbers(LStrArrayHandle *arr)
     {
         MgErr err = mgNoErr;
         IIR_USB_RELAY_DEVICE_INFO_T* ptr = NULL;
         IIR_USB_RELAY_DEVICE_INFO_T* deviceInfo = NULL;
         IIR_USB_RELAY_DEVICE_INFO_T* prev = NULL;
         LStrHandle *pH = NULL;
         const char* sn[] = { "abcd", "efgh", "ijkl" };
         int len, i = 0, n = (*arr) ? (**arr)->cnt : 0;

         for (int j = 0; j < 3; ++j) {
             IIR_USB_RELAY_DEVICE_INFO_T* info = (IIR_USB_RELAY_DEVICE_INFO_T*)malloc(sizeof(IIR_USB_RELAY_DEVICE_INFO_T));
             info->serial_number = (unsigned char*)malloc(5*sizeof(unsigned char*));
             strcpy_s(info->serial_number, 5*sizeof(unsigned char*), sn[j]);
             info->next = NULL;
             if (!deviceInfo) {
                 deviceInfo = info;
             } else {
                 prev->next = info;
             }
             prev = info;
         }
         //IIR_USB_RELAY_DEVICE_INFO_T* deviceInfo = (IIR_USB_RELAY_DEVICE_INFO_T*)iir_usb_relay_device_enumerate();

         /* This only works reliably if it is guaranteed that the deviceInfo linked list
            won't change in the background while we are in this function! */
         for (ptr = deviceInfo; ptr; ptr = ptr->next, i++) {
             /* Resize the array handle only in power-of-2 intervals to reduce the
                potential overhead of resizing and reallocating the array buffer
                every time! */
             if (i >= n) {
                 if (n)
                     n = n << 1;
                 else
                     n = 8;
                 err = NumericArrayResize(uPtr, 1, (UHandle*)arr, n);
                 if (err)
                     break;
             }
             len = strlen(ptr->serial_number);
             pH = (**arr)->elm + i;
             err = NumericArrayResize(uB, 1, (UHandle*)pH, len);
             if (!err) {
                 MoveBlock(ptr->serial_number, LStrBuf(**pH), len);
                 LStrLen(**pH) = len;
             } else
                 break;
         }

         if (deviceInfo) {
             IIR_USB_RELAY_DEVICE_INFO_T *t;
             while (deviceInfo != NULL) {
                 t = deviceInfo;
                 deviceInfo = deviceInfo->next;
                 free(t);
             }
         }

         /* If we did not find any device AND the incoming array was empty, it may be
            NULL, as this is the canonical empty array value in LabVIEW. So check that
            we do not have such a canonical empty array before trying to do anything
            with it! It is valid to return a valid array handle with the count value
            set to 0 to indicate an empty array! */
         if (*arr) {
             /* If the incoming array was bigger than the new one, make sure to
                deallocate superfluous strings in the array! This may look
                superstitious but is a very valid possibility, as LabVIEW may decide
                to reuse the array from a previous call to this function in any Call
                Library Node instance! */
             n = (**arr)->cnt;
             for (pH = (**arr)->elm + (n - 1); n > i; n--, pH--) {
                 if (*pH) {
                     DSDisposeHandle(*pH);
                     *pH = NULL;
                 }
             }
             (**arr)->cnt = i + 1;

             char buf[1024];
             LStrHandle hdl;
             for (int k = 0; k < (**arr)->cnt; ++k) {
                 hdl = (**arr)->elm[k];
             }
         }
         //iir_usb_relay_device_free_enumerate(deviceInfo);
         return err;
     }
  5. Actually, I did this yesterday to try and figure out the datatypes 😂. I thought some other magic was happening, but I suppose not. I guess my question then becomes: why do we choose that particular size for uPtr, and how does that translate to our struct size? I suppose I'm just not understanding the relation between uPtr, the cast of arr to UHandle, and 'n' in terms of what is being done inside that resize function. Anyway, unfortunately I keep crashing here: err = NumericArrayResize(uB, 1, (UHandle*)pH, len); I did get a compiler error here: pH = (**strArr)->elm + i; I assume it should be (*strArr)->elm + i; but I'm not certain; I did change it to that to get it to compile. I have confirmed my variable 'len' = 4 for my serial_number 'abcd'. Other than that, I'm not sure what the problem may be. Do we need to include the \0 in the len? I believe doing (UHandle*)(&pH) gets us closer?
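     As a point of reference for this question, a minimal sketch of how the two resize calls relate, assuming the extcode.h prototype NumericArrayResize(typeCode, numDims, (UHandle*)handle, newNumberOfElements), that uPtr is the pointer-sized unsigned type code, and the LStrArrayHandle layout used elsewhere in this thread ({ int32 cnt; LStrHandle elm[]; }). This is an illustration, not the code under discussion:

     /* Resize the outer array for one element, then fill that element's string. */
     static MgErr fill_first_element(LStrArrayHandle *arr, const char *s)
     {
         int32 len = (int32)strlen(s);   /* no trailing \0 needed: LabVIEW strings
                                            are length-prefixed, not NUL-terminated */
         /* Outer array: room for 1 pointer-sized element (the LStrHandle itself).
            This assumes, as the thread's pattern does, that a newly grown slot
            comes back NULL so the inner resize allocates a fresh handle. */
         MgErr err = NumericArrayResize(uPtr, 1, (UHandle*)arr, 1);
         if (err)
             return err;
         /* Inner string: pH must be the address of the handle stored inside the
            array, so the resize can update that stored handle in place. */
         LStrHandle *pH = (**arr)->elm;
         err = NumericArrayResize(uB, 1, (UHandle*)pH, len);
         if (!err) {
             MoveBlock(s, LStrBuf(**pH), len);
             LStrLen(**pH) = len;
             (**arr)->cnt = 1;
         }
         return err;
     }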
  6. Looking at the API, I believe this is the case. @Rolf Kalbermatter would you mind explaining what is happening with the first NumericArrayResize? The one for the string buffer is pretty self-explanatory, but I don't quite understand how the first one that resizes the LStrArrayHandle works. How does the function know how to resize memory for a data structure that we have defined ourselves?
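     One rough way to picture that first call (a sketch only, not the actual implementation): NumericArrayResize knows nothing about the user-defined cluster; it just sizes the handle for the numDims int32 dimension headers plus the requested number of elements of the given type code, so with uPtr elements it amounts to roughly:

     /* Rough equivalent of NumericArrayResize(uPtr, 1, (UHandle*)arr, n) for the
        outer array: one int32 dimension size plus n pointer-sized handles. The
        real call also allocates the handle when it comes in NULL (LabVIEW's
        canonical empty array) and deals with error codes. */
     static MgErr resize_outer_array_roughly(LStrArrayHandle *arr, int32 n)
     {
         return DSSetHandleSize((UHandle)*arr, sizeof(int32) + n * sizeof(LStrHandle));
     }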
  7. Very helpful, both of you. Thanks! Rolf, you mention this will only work if the linked list doesn't change in the background, which of course makes sense. Theoretically it won't change, but it is coming from another API, so I assume there is no way to handle keeping this safe? Even if I were to loop through the linked list immediately and make copies of the items and/or serial numbers, during my looping there is still the risk it could change, correct?
  8. Thanks very much. This is really helpful and makes a lot of sense. My only follow-up question is: is it necessary to size the LStrHandle to fit the string and use strcpy? Or can I just assign h->str to point at the address of s_deviceInfo->serial_number?
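     For reference on why this question comes up: in extcode.h a LabVIEW string handle is a relocatable handle to a length-prefixed byte buffer, not a struct holding a C pointer, so str is inline data that has to be copied into. A sketch, using the standard extcode.h declaration:

     /* Paraphrased from extcode.h: the bytes live inside the handle itself. */
     typedef struct {
         int32 cnt;        /* number of bytes that follow */
         uChar str[1];     /* the bytes themselves, stored inline */
     } LStr, *LStrPtr, **LStrHandle;

     /* So the handle has to be sized and the bytes copied in, e.g.: */
     static MgErr fill_lstr(LStrHandle *pH, const unsigned char *s, int32 len)
     {
         MgErr err = NumericArrayResize(uB, 1, (UHandle*)pH, len);  /* make room */
         if (!err) {
             MoveBlock(s, LStrBuf(**pH), len);                      /* copy the bytes */
             LStrLen(**pH) = len;                                   /* set the count */
         }
         return err;
     }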
  9. I have a DLL for a device which holds a linked list of structs. Obviously I cannot pass this to LabVIEW, but all I really need are the serial numbers from within the structs. So my idea is to write a wrapper DLL that maps the serial numbers into a string array and returns that. Due to my lack of experience with C, and the lack of examples using the LabVIEW memory manager functions, I'm having quite a hard time. Because the linked list can be of a dynamic length, I am not sure how to handle the memory allocation. This is my attempt so far. My questions are: 1) How do I properly size DSNewHandle to fit the serial number, or is there no need to since it's a pointer which will just be set equal to the address of s_deviceInfo->serial_number? 2) How do I assign each new 'h' to 'arr'? 3) Since the number of devices/serial numbers can vary, can I still manage this on the LabVIEW side by passing in an empty array constant as 'arr' to the DLL? Or do I need to modify how I'm doing things?

     typedef struct {
         int32 len;
         LStrHandle str[];
     } **LStrArrayHandle;

     int iir_get_serial_numbers(LStrArrayHandle arr)
     {
         int i = 0;
         while (s_deviceInfo->next) {
             LStrHandle h = (LStrHandle)DSNewHandle(sizeof(LStrHandle));
             (*h)->cnt = strlen(s_deviceInfo->serial_number);
             (*h)->str = s_deviceInfo->serial_number;
         }
         return 0;
     }
  10. Nope, we have explicit, static registrations for controls in our subpanels. Nothing dynamic or using references. We did potentially trace things back to our error logger. We have a subVI that just throws errors into a queue, and a process that flushes the queue every few seconds and logs them. It seems that when we aren't logging errors the problem goes away, so I'm not really sure what's going on. Possibly something there is blocking our UI thread somehow and events are getting missed, but the flush and write happens within 100 ms, so it definitely still seems a bit strange. Right now it seems the problem may not be happening in the executable, and FWIW we are also updating a good number of controls by reference with defer panel updates set to true. But the VI Analyzer shows this only taking ~50 ms, so I'm not convinced that's the issue either.
  11. I have an application that is using subpanels, and I am having some issues where they seem to be missing button clicks. This is sporadic and not isolated to one specific button but seems to happen throughout the application on various screens. I look in the event inspector window and the value change never fires when these events are missed, so it's as if whatever is managing these events is missing it completely. After two or three clicks it will take. I know it's tough to help without a reproducible case, so I am mostly posting this to see if others have run into this behavior because I have been spinning my wheels.
  12. Thanks very much everyone. And I really appreciate the UML -- very helpful
  13. Sorry for the ambiguous title... it's hard to convey the problem without a description. Right now I have an application that takes various measurements, but for now I'm going to focus on current. The issue is that there are many devices that can take current measurements, which our customer will swap out, but they don't necessarily have a parent/child relationship. Everything I think of screams "this would be easy with interfaces," but I'm really hoping there is some solution with composition.

      The application has a lot of situations where there is a power supply, but they may use a DVM to take the current measurement because they get better resolution on their results. Other times they just use the current measurement the power supply reports. So I thought about having a current measurement class that is composed of a "Source" object and a "Measurement" object. In some cases the Source and Measurement objects would both be a power supply. In other cases one may be a power supply and the other a DVM. But I also want all my power supplies to inherit from BasePowerSupply and the DVMs to inherit from BaseDVM. If I want either a power supply or a DVM to be the measurement class, they both have to inherit from the same base class with a MeasureCurrent must-override method. However, as soon as I do this, if I want to create a Source class, BasePowerSupply can't inherit from it too.

      To me this just screams having an ICurrentMeasurable interface and an ISourceable interface that classes can implement. But alas, I cannot. So any suggestions are appreciated. It just seems to me that the more complex instruments get, the more I want interfaces, to keep the overlapping required functionality while avoiding coupling unrelated devices through inheritance. (A rough sketch of the interface idea follows below.)
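      To make the shape of that wish concrete, here is a rough analogy in C of the interface idea, with function-pointer tables standing in for ICurrentMeasurable/ISourceable. It is only an illustration of the concept, not LabVIEW code and not anything from the thread:

      /* "Interfaces" as small tables of function pointers plus a context pointer. */
      typedef struct {
          double (*measure_current)(void *self);           /* ICurrentMeasurable */
          void   *self;
      } CurrentMeasurer;

      typedef struct {
          void   (*set_output)(void *self, double volts);  /* ISourceable */
          void   *self;
      } Source;

      /* A current-measurement "class" composed of a Source and a Measurer, no
         matter whether both are backed by one power supply or by a power supply
         plus a DVM. */
      typedef struct {
          Source          source;
          CurrentMeasurer measurer;
      } CurrentMeasurement;

      static double take_reading(CurrentMeasurement *cm, double volts)
      {
          cm->source.set_output(cm->source.self, volts);
          return cm->measurer.measure_current(cm->measurer.self);
      }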
  14. For some reason this isn't working for me on Windows 10. Any thoughts? I've installed the latest version and already had the 2016 runtime installed.
  15. SmithD's response seems to be the general consensus, I think. Mark, this is a good quote I'm stealing: "if more than one class uses a typedef then it belongs to neither." Interesting about the translation classes. Translating the types was actually something I considered, but then I ruled it out because I thought I'd end up with too many types that were essentially duplicates of each other. I'll take a look at his presentation if I can dig it up. I started thinking about other languages such as C# and how they would handle this. I realized most methods would return classes or interfaces, not structures. And I started thinking about why that would be decoupled, and it's because the classes being returned are not owned by any other class. So this gave me my answer: make sure the typedef isn't owned by any other class, and it effectively just becomes a POCO.
  16. I currently have a project that I am refactoring. There is a lot of coupling that is not sitting well with me, due to typedefs belonging to a class, then getting bundled into another class which is then fired off as event data. Effectively, I have class A with a public typedef, then class B contains ClassA.typedef, and then class B gets fired off in an event to class C to be handled. Class C now has a dependency on class A, which is causing a lot of coupling I don't want.

      For my real-world example: I query a bunch of data from our MES, which results in a bunch of typedef controls on the connector panes of those VIs. Those typedefs belong to the MES class. I then want to bundle all that data into a TestConfig class and send that via an event to our Tester class. But now our Tester has a dependency on the MES. I see a few ways to handle this. First is to move the typedefs currently in the MES class to the TestConfig class. The MES VIs will now have the typedefs from the TestConfig class on their connector panes, but at least the dependency is in the correct "direction." Or I can move the typedefs out of classes altogether, but then I am not sure of the best way to organize them. Looking for how others have handled these sorts of dependencies.
  17. For completeness, this is the C# code where I'm now seeing matching (slow) timing numbers.

      using System;
      using System.Diagnostics;
      using System.Runtime.InteropServices;

      namespace TestAdodbOpenTime
      {
          class Program
          {
              static void Main(string[] args)
              {
                  Stopwatch sw = new Stopwatch();
                  for (int i = 0; i < 30; i++)
                  {
                      ADODB.Connection cn = new ADODB.Connection();
                      int count = Environment.TickCount;
                      cn.Open("Provider=OraOLEDB.Oracle;Data Source=DATASOURCE;Extended Properties=PLSQLRSet=1;Pooling=true;", "UID", "PWD", -1);
                      sw.Stop();
                      cn.Close();
                      Marshal.ReleaseComObject(cn);
                      int elapsedTime = Environment.TickCount - count;
                      Debug.WriteLine("RunTime " + elapsedTime);
                  }
              }
          }
      }

      Output:
      RunTime 218
      RunTime 62
      RunTime 47
      RunTime 31
      RunTime 63
      ...
  18. EDIT: You might be spot on, smithd. I added Marshal.ReleaseComObject(cn) to my for loop and the times match almost perfectly to the LabVIEW ActiveX implementation. I'm just confused about how, if that is somehow being called under the hood of the open, the close connection would work; that reference would then be dead. That's one thing that makes me think this may be a red herring. That's definitely a good thought that didn't cross my mind. I changed the LabVIEW code to leave the connections open, but still no luck.
  19. I think I have found a fundamental issue with the DB Toolkit Open Connection: it seems to not correctly use connection pooling. The reason I believe it's an issue with LabVIEW and the ADODB ActiveX interface specifically is that the problem does not manifest itself using the ADODB driver in C#. This is better shown with examples. All I am doing in these examples is opening and closing connections and benchmarking the connection open time.

      ADODB and Oracle driver in LabVIEW. ADODB in C#:

      using System;
      using System.Diagnostics;

      namespace TestAdodbOpenTime
      {
          class Program
          {
              static void Main(string[] args)
              {
                  Stopwatch sw = new Stopwatch();
                  for (int i = 0; i < 30; i++)
                  {
                      ADODB.Connection cn = new ADODB.Connection();
                      int count = Environment.TickCount;
                      cn.Open("Provider=OraOLEDB.Oracle;Data Source=FASTBAW;Extended Properties=PLSQLRSet=1;Pooling=true;", "USERID", "PASSWORD", -1);
                      sw.Stop();
                      cn.Close();
                      int elapsedTime = Environment.TickCount - count;
                      Debug.WriteLine("RunTime " + elapsedTime);
                  }
              }
          }
      }

      Output:
      RunTime 203
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0

      Notice how the times align nicely between the LabVIEW code leveraging the .NET driver and the C# code using ADODB: the first connection takes a while to open, then connection pooling takes over and the connect time drops to 0. Now cue the LabVIEW ActiveX implementation, where every connection open time is pretty crummy and very sporadic.

      One thing I happened to find out by accident when troubleshooting: if I add a property node on the block diagram where I open a connection, and I don't close that reference, my subsequent connect times are WAY faster (between 1 and 3 ms). That is what leads me to believe this may be a bug in whatever LabVIEW does to interface with ActiveX. Has anyone seen issues like this before, or have any idea where I can look so I can avoid wrapping the driver myself?
  20. This may be the difference. I am currently using: vi.lib\addons\database\NI_Database_API.lvlib. This particular project is LV2013, ideally soon to be rolled forward but for now we're stuck with that version.
  21. I am running calls to a various stored procedures in parallel, each with their own connection refnums. A few of these calls can take a while to execute from time to time. In critical parts of my application I would like the Cmd Execute.vi to be reentrant. Generally I handle this by making a copy of the NI library and namespacing my own version. I can then make a reentrant copy of the VI I need and save it in my own library, then commit it in version control so everyone working on the project has it. But the library is password protected so even a copy of it keeps it locked. I can't do a save as on the VIs that I need and make a reentrant copy, nor can I add any new VIs to the library. Does anyone have any suggestions? I have resorted to taking NIs library, including it inside my own library, then basically rewriting the VIs I need by copying the contents from the block diagram of the VI I want to "save as" and pasting them in another VI.
  22. Well... that didn't fix it per se. But we did a build WITHOUT normalizing the string array (i.e. no code changes) and it's using drastically less memory in the EXE than in the dev environment... we're talking about 600 MB of memory usage instead of 2.5 GB. My guess now is that having debugging enabled in some of these VIs is causing issues in the dev environment; probably copies everywhere. Either way, normalizing things was a much more memory-efficient way of doing this and is a needed improvement: rather than having 24 classes each with 80k strings, many of which are duplicates, we have 24 classes each with about 2k strings, plus 80k integers that are each an index into that string array. As much as I'd love to dig into the LabVIEW memory manager to truly understand what's happening in the dev environment (not), I am just going to put this in the "no longer a problem" column and move on. (A sketch of the normalization idea follows below.)
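      The normalization itself is just string interning. A minimal sketch of the idea in C, with hypothetical names, assuming one shared deduplicated string table per class and an integer index per record (error handling kept minimal):

      #include <stdlib.h>
      #include <string.h>

      typedef struct {
          char  **strings;    /* deduplicated string table (~2k entries) */
          size_t  count;
          size_t  capacity;
      } StringTable;

      /* Return the index of s in the table, appending a copy if it is new, so the
         ~80k records only need to store a small integer instead of the string. */
      static size_t intern(StringTable *t, const char *s)
      {
          for (size_t i = 0; i < t->count; i++)
              if (strcmp(t->strings[i], s) == 0)
                  return i;                         /* duplicate: reuse the entry */
          if (t->count == t->capacity) {
              t->capacity = t->capacity ? t->capacity * 2 : 64;
              t->strings  = realloc(t->strings, t->capacity * sizeof *t->strings);
          }
          size_t n = strlen(s) + 1;
          char  *copy = malloc(n);
          memcpy(copy, s, n);                       /* keep exactly one copy */
          t->strings[t->count] = copy;
          return t->count++;
      }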
  23. Alright, I rolled back to a "bad version," grabbed this snippet off the Idea Exchange, and I'm going to run it on all my classes. I'll see what happens...
  24. I normalized my data but was still seeing awful memory use, upwards of 3 GB, and I would randomly get a copy that would give me an out-of-memory error. So I went into my project settings, unmarked everything that had "separate compiled code from source file" set, cleared the compiled object cache, and did a save all. My memory usage has dropped from 3 GB, with tons of seemingly unnecessary copies, to 1 GB just by doing this. I have on and off seen some very bizarre issues with classes and separating source from compiled code, and even with that setting I still get lots of dirty dots anyway, which isn't buying me much. I think I'll be staying away from it in the future.
  25. I have an array of classes, let's call the object TestPass, of size 1 (but it is an array because it can scale out to multiple test passes). In this class there is one other nested class which is not too complex, then various numeric and string fields to hold some private data. There is also an array of clusters. In each cluster there is a string, two XY pair clusters, and an integer. Not very confusing. This array of clusters gets fairly large, however: upwards of 80-100k elements.

      What I am finding is that when I index the array of pass classes it is crazy slow, on the order of 30 ms. That doesn't seem like much, but we index the array in our "Get Current Pass" method, which is used in various places throughout our code. This is adding potentially hours to our test time over the 80k devices we are testing. So I started digging. When I flatten the class to a string and get the length, it's 3 MB. But when I run the function with the profiler, it is allocating close to 20 MB of memory! My gut feel was that the string was causing the issues, so I removed the string from the cluster and the index time went to 0 ms. Luckily we can normalize a bit and pull the strings out of the cluster, since a lot of them are duplicates, but it makes our data model a bit uglier. Has anyone seen this kind of performance issue before? I saw it in both LabVIEW 2013 and 2017.