John Lokanis

Everything posted by John Lokanis

  1. I have a class hierarchy with a grandparent that implements a message handler. I have a child class that overrides the message handler to add additional parallel functionality but also calls the parent method to get the message-processing functionality. I now want to make a grandchild that also overrides the message handler method to implement different parallel functionality but still needs to call the grandparent's implementation to get the message-processing functionality. (I am doing this because I want to inherit some methods from the child in the middle.) The problem is, 'call parent' in the grandchild invokes the child's implementation, but I don't want that middle implementation; I want the grandparent's version. I cannot simply place the grandparent's method on the diagram because dynamic dispatch sends the call back to the grandchild (making the method recursive). I think I solved this by using 'To More Generic Class' before the call to the grandparent's method and 'To More Specific Class' after the call. But the grandparent's method calls dynamic-dispatch 'Do' methods that need the data on the wire to be from the grandchild. Will 'To More Generic Class' strip that data from the wire, or will the call to 'Do' inside the grandparent still carry the object as the grandchild, so that when I cast it back to the grandchild it will work? Hope that was not too convoluted. Any help is appreciated. -John
  2. Tried that. I cannot cast the WebClient to a WebRequest or an HttpWebRequest. I think that is because WebClient is not a child of WebRequest but rather uses one internally. So I need to create a WebRequest object, set the timeout, and then tell WebClient to use my version. But I have no idea how to create a .NET object that inherits from the base class and then instantiate the WebClient using my new object.
  3. I am calling a web service using System.Net.WebClient. So far it is working fine, but now I want to set the timeout to something other than the default. Unfortunately, the timeout is not an exposed property of WebClient. From what I have read, WebClient uses the HttpWebRequest class internally, and that class has a timeout that can be set. But HttpWebRequest has no public constructors. All the examples on the web show how to override the WebRequest getter to set a different timeout. Unfortunately this exceeds my .NET skills. Does anyone know how to solve this in LabVIEW? Here is the C# example: http://stackoverflow.com/questions/1789627/how-to-change-the-timeout-on-a-net-webclient-object NOTE: I am trying to solve this problem in LabVIEW 2011 so am stuck with .NET 2.0 for the System assembly. thanks, -John
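For reference, the pattern described in the linked Stack Overflow answer is to subclass WebClient and override its protected GetWebRequest method, since that is where the underlying WebRequest (with its Timeout property) is created. A minimal sketch, assuming you can compile a small assembly to load from LabVIEW; the class name and constructor parameter here are illustrative, not from the post:

```csharp
using System;
using System.Net;

// Hypothetical subclass: exposes the timeout that WebClient itself hides.
// Compatible with .NET 2.0, per the constraint mentioned in the post.
public class TimeoutWebClient : WebClient
{
    private int timeoutMs;

    public TimeoutWebClient(int timeoutMs)
    {
        this.timeoutMs = timeoutMs;
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        // Let WebClient build the request as usual, then adjust its timeout.
        WebRequest request = base.GetWebRequest(address);
        if (request != null)
        {
            request.Timeout = timeoutMs;
        }
        return request;
    }
}
```

Built into an assembly, this class could then be instantiated from LabVIEW with a .NET Constructor Node in place of WebClient, and used through the same DownloadString/UploadString methods.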
  4. Bummer. Thanks for the help anyways... Maybe I can show you my code in March at the summit.
  5. I have tried setting the panel activation in my actual application. I also set the FP to Frontmost. Then I set the key focus on the string to true. Not working. Something strange must be going on.
  6. Ok, tried this in my actual application. Didn't work. Might be because my application has this VI as a floating window, or might be some other effect. I tried making the example VI floating as well and found that sometimes the key focus did not work. Seems to be flaky. Does it matter if I set this from a subVI by passing in the reference to the string element of the array? Also, does it matter if I initialize the array with empty strings after I set the key focus?
  7. Thanks Darren. Didn't know you have to get a property node for the string in the array. I figured that the Array Element.Key Focus would be the same thing. BTW: the panel activation is not needed, apparently. I got it to work without it. One issue: it seems to put the focus on the last element you clicked in before running the VI. So, if you click in element 3 while in edit mode, then end text entry without typing anything, when you run the VI, the third element will have the cursor in it. Is there any way to control which element in the array gets the key focus? And is any of this documented anywhere? (the bit about getting the ref to the element control)
  8. I am trying (and failing) to set the key focus to the first element of an array control. I want my dialog window to appear and have the cursor placed in the first element of a specific array control so the user can just start typing without first having to click on the element in the array. This seems like it would be a simple thing to do, so I am hoping that I am just making some bonehead mistake. I have tried setting key focus, array element key focus, selection start and selection size. None have worked. Anyone know if this is possible? I have attached an example VI with my attempts. array key focus.vi thanks for any wisdom... -John
  9. Well, they are in separate panes but on the same front panel. And one is an XControl. It seems to only happen when leaving the XControl and entering the MCLB in the adjacent pane. I will have to build an example when I have some time. The app where this is happening is way too big to post. (Now I've thread-jacked my own thread!)
  10. Agreed, but when I tried it in practice, it seemed acceptable. Now if I could just get mouse leave events to consistently fire before mouse enter events for adjacent controls, I would be a happy camper...
  11. Thanks for the example. The mouse move to 'disarm' the sort between a mouse down and mouse up did the trick.
  12. Well if that is the best solution you could find, I doubt I will be able to do better. I actually thought about this but really hoped there was a more elegant way to do this. Thanks for confirming this solution. I'll give it a whirl.
  13. Just to be clear, I am using the "Mouse Down?" event to perform the sort. I do not have custom code to implement the resize; that is provided by the MCLB control. So, if I were to use the mouse move, how would I accomplish that? The act of clicking anywhere on the control triggers the "Mouse Down?" event, and the sort runs if the mouse is in the header. I need to suppress this if I am doing a resize. If I use "Mouse Move", that will trigger every time the mouse changes position over the control, regardless of clicking. How could I isolate that to suppress the sort while still allowing the click of the header to trigger the sort?
  14. I am stumped. I created a MCLB that allows you to sort the data based on the column the user clicks on. This is pretty simple: just trap the mouse down event and check if it is in the header row. If so, find the column clicked and sort the data, then discard the event. But then I wanted to add the ability to resize the columns; however, this did not work because the event would get discarded after doing the sort. So I stopped discarding the events. But now I end up sorting the data every time I try to resize a column. So, I need some way to detect if I am hovering over a column separator with the resize cursor displayed, and then use that fact to suppress the sort on mouse down events. Any idea how to solve this? Is it even possible? thanks for any insights... -John
  15. This might be of interest to those of you who support applications on Windows 7 or who use Windows 7 VMs for development work. http://redmondmag.com/articles/2014/10/27/windows-7-sales-to-consumers.aspx
  16. Reentrancy is absolutely necessary for all VIs when you instantiate multiple instances of the same code base in memory. Otherwise, they would constantly be blocking each other to access the same VI. These instances must run completely asynchronously and simultaneously. Without 100% reentrancy this application would be impossible to design.
  17. I hope to keep the memory footprint down, but since the application is a test system that simultaneously tests 100's of DUTs in parallel (each DUT getting its own instance of a test executive), the data consumption can add up. The current system uses ~9MB per DUT plus overhead of 66MB for the whole system. I suspect the new system will exceed this a bit. So, assuming 100MB of overhead and 10MB per DUT, that puts me at 5.1GB for 500 DUTs (that is my target maximum). So, it is possible that I could benefit from a larger memory space. Need to get the new system completed and do some testing to confirm this.
  18. True. But in my case I only plan to use a 64 bit OS. Either Win7, Windows Server 2012R2 or Win8.1, but all 64 bit. So, my 32 bit application will have the full 4GB to use. But I will still need to see if I can run it under stress and not exceed the 4GB limit.
  19. Thanks for the info. Sounds like there is no advantage to 64-bit beyond memory access. I will have to see how the application performs under stress to determine whether I am RAM-limited.
  20. I currently develop my application in Windows 7 using 32-bit LabVIEW 2014. The IT department wants me to deploy to VMs going forward, and they want the VM OS to be Windows Server 2012 R2 (64-bit). Does anyone use the 64-bit version of LabVIEW? If so, what OS do you use? Are there any issues with developing in the 32-bit version of LabVIEW but compiling with the 64-bit version for releases? I want to stick with 32-bit for dev work because some things like the Desktop Execution Trace Toolkit, Unit Test Framework and VI Analyzer are not available for the 64-bit version. My I/O is limited to NI-VISA for TCP/IP communication, PSP for talking to cRIO over Ethernet, and .NET calls for database and XML communication. I do have some FieldPoint hardware that I talk to via DataSocket, but that could be moved to cRIO via PSP. From what I can tell, all of that should work with 64-bit LabVIEW. The application has hundreds of parallel processes but does not collect large amounts of data, just lots of small chunks of data. Would it benefit from a 64-bit environment? Also, the application is broken into two parts, a client and a server. I use VI Server to communicate between the two across the network. If the client is a 32-bit LabVIEW application, can it use VI Server to talk to a different 64-bit LabVIEW application? Thanks for any tips or feedback, -John
  21. Just spent 2 hours trying to explain the thread management behavior of LabVIEW to a bunch of SQL developers. I need a drink...
  22. Does anyone know if it is ok to set the execution system property on a dynamically called VI right before you call the run method? Also, will dynamically called VIs inherit the execution system of their caller if it is not changed in the property node after opening the reference to the dynamic VI?
  23. Yes, I typically have 40-100 parallel sub-systems, each with several threads (and at least one dedicated to .NET calls to a DB) running at the same time. Normally a call to the DB via .NET executes in milliseconds, so there is no issue. But lately, the DB has been having issues of slow response and deadlocking, which I suspect is causing the .NET calls to hang for a long time and starve my LV code of clock cycles. And yes, all the timer code is LV. Actually, it is a pure LV system outside of the .NET calls for DB access and some occasional XML reading. So, bumping the thread count seems like a good bandaid for the short term without having to recompile the exe. Changing the execution system from same-as-caller to something else for the .NET code might also help? At some point I thought I remembered hearing that LV will use the other execution systems automatically if one is overloaded, but perhaps I am remembering that incorrectly. Anyone know the ini strings to adjust the execution system threads? Oh, and does it matter how many cores the machine has, or will the OS manage the threads across the cores on its own? thanks, -John