Posts posted by ShaunR

  1. I've done splash screens before, but they've not really served a purpose other than branding, so their implementation really didn't matter from a performance perspective.

     Now I have an application which seems to take quite a long time to load, to the point where users regularly wonder, "Hmm, did I actually double-click that? Better try again." More than once I've been right next to the person and told them, "No, just be patient, it will show up," only to watch them try again and again to click that darned icon. Well, at least I haven't set allowmultipleinstances=true in the ini file. Yet.

     I know there will be a constant time as the LabVIEW RTE spins up, which is independent of the actual executable I create in LabVIEW. But does the size of my executable also affect load times? I'm under the impression that the whole thing must load, so if it's larger, a splash screen can't be shown either way until the entire application has been loaded into memory. I might be wrong here. My current application is 44 MB; this doesn't seem too large to me...

     Has anyone played around with dynamically loading their core application logic from outside of the executable to see if that reduces the time to display a splash screen? I figure the best way to do this is to have the bulk of my code in a LabVIEW-built DLL, then have a shell of a LabVIEW exe display the splash screen and proceed to take its time loading the DLL, after which the splash screen hides itself. The DLL will not be statically linked; a path will be built to it at run time. Or is this road fraught with peril? I've never actually built a DLL in LabVIEW, so this might be interesting.

     From my experience it's not so much the size of the file; it's the size of the hierarchy. One VI that is 44 MB will load a lot faster than 10,000 VIs totalling 44 MB (although I wouldn't fancy debugging the former). A splash screen means that you only have a very small hierarchy to load before you can display something. It also gives you the opportunity to "incrementally" load your application. For example, you may have a "hardware check"; in running that, you have loaded quite a few VIs that you probably use in the main app (and done something useful) without having to wait for the whole app to load.
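     For illustration, the thin-shell idea in the question maps onto ordinary run-time library loading. A minimal sketch in C using the Win32 loader - the names core.dll and RunApplication are hypothetical, and the printf is just a stand-in for a real splash window:

```c
#include <windows.h>
#include <stdio.h>

typedef int (*RunAppFn)(void);

int main(void)
{
    /* 1. The shell itself is tiny, so this appears almost instantly. */
    printf("Splash: loading, please wait...\n");

    /* 2. Only now pay the cost of loading the big hierarchy. */
    HMODULE core = LoadLibraryA("core.dll");
    if (core == NULL) {
        fprintf(stderr, "Failed to load core.dll (error %lu)\n", GetLastError());
        return 1;
    }

    RunAppFn run = (RunAppFn)GetProcAddress(core, "RunApplication");
    if (run == NULL) {
        fprintf(stderr, "RunApplication not found\n");
        FreeLibrary(core);
        return 1;
    }

    /* 3. Hide the splash and hand over to the main application. */
    int rc = run();
    FreeLibrary(core);
    return rc;
}
```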

  2. No, it's difficult for me to make it work in the expected way. You can see in my code: in the first event case, I am writing all the addresses to the serial device from 0 to 127, because that represents a 7-bit address; the LSB should be the read or write bit, which I am not sending now.

    Like this?
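     (Assuming this is the usual I2C convention - the 7-bit address in bits 7..1 and the read/write flag in bit 0 - building the full address byte looks like this in C:)

```c
#include <stdint.h>
#include <stdio.h>

/* Build the 8-bit I2C address byte from a 7-bit address plus the R/W flag.
   Convention: address in bits 7..1, R/W in bit 0 (1 = read, 0 = write). */
static uint8_t i2c_address_byte(uint8_t addr7, int read)
{
    return (uint8_t)((addr7 << 1) | (read ? 1 : 0));
}

int main(void)
{
    for (uint8_t a = 0; a < 128; a++) {
        printf("addr %3u -> write 0x%02X, read 0x%02X\n",
               a, i2c_address_byte(a, 0), i2c_address_byte(a, 1));
    }
    return 0;
}
```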

  3. I would think twice before generalizing like that. User events are at the top of my list, and of many others I've talked to, when it comes to inter-process communication.

     Probably true, if you are talking to architects and experienced CLDs. But "most" LabVIEW programmers aren't, and the limit of IPC conversations tends to be a producer/consumer loop with queues, and that's about it. I've yet to see a student or an electrical/electronics engineer (LabVIEW is still seen as a secondary skill in many companies' eyes) talk about IPC and messaging systems. And I've worked with, and interviewed, many.

    Also, using the timeout frame is a common use-case for reading other data that needs polling or other updates between events.

     And those are exactly the apps that can fall foul of this "feature".

    In fact, at the CLA summit, there were many requests by other CLAs to extend the capabilities of the Event Structure even further, since handling user events is simply not enough.

     I've commented many times about how I feel that events have been neglected. So you are preaching to the converted here.

     Even though, at the end of the day, I'd really like NI to find a solution to this problem which allows me to register for events all willy-nilly and only selectively create cases to handle them - my gut tells me that if I don't need to react to an event then I shouldn't really be registering for it in the first place, should I? For now, I would chalk it up to: "Hey, I just learned something cool and I really should spread the word about this 'best practice' to my colleagues and warn them about it." Education is important.

     Well. Actually, in the strictest sense, events shouldn't have a timeout at all. An event is either signalled or it is not. No other language I know of has event time-outs; if a programmer wants to time out when no event is received within a time-frame, he has to create a timer that gets reset when an event fires. This is probably why the event case is as it is. The Windows Forms message callback (WndProc) just has a timer that gets kicked on entry.
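     That reset-on-event timer pattern is simple enough to sketch. In C (poll_event is a hypothetical stand-in for whatever event source is in play):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the real event source;
   returns 1 when an event has arrived. */
static int poll_event(void) { return 0; }

int main(void)
{
    const int timeout_s = 5;
    time_t deadline = time(NULL) + timeout_s;

    for (;;) {
        if (poll_event()) {
            /* Event arrived: handle it, then kick (reset) the timer. */
            deadline = time(NULL) + timeout_s;
        } else if (time(NULL) >= deadline) {
            /* No event within the window: this is the "timeout case". */
            printf("timeout\n");
            deadline = time(NULL) + timeout_s;
        }
        /* A real loop would block or sleep here instead of spinning. */
    }
}
```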

     As for not registering unless you need to react: that is only applicable at design time. What if you have configurable alarms? You will still need all the cases on the off-chance that a user will want to register one and receive feedback. That's one of the more useful (but rarely seen in LabVIEW) uses for events.

     Unfortunately, I don't think it is "cool". At best it's unexpected behaviour, and it explains (IMO) a lot about the problems people have seen with XControls.

  4. Oh, I'm not saying that it's not counterintuitive - it totally is, until you stop and think about what's going on. That can be said for several things in LabVIEW (remember the first time you branched a by-reference wire and couldn't fathom why the data "on" the other wire was changing?)

     Maybe an option to unset this should be available (like an "allow unhandled events to be registered" checkbox in the event dialog when the dynamic event terminals are shown). That said, I'm sure AQ will agree that it's a little more complicated than just adding a checkbox. :)

     I would actually categorise it as "unexpected behaviour". Whether or not it is a bug depends on whether it was specifically designed not to time out under certain circumstances. My guess is that it wasn't considered, and "most" developers would want a time-out to, well, time out.

     I guess the good news from this (for me) is: if it's taken this long for this issue to be identified (LabVIEW 6.1 to the present) then it must not have cropped up much (if at all) before (granted, we could be using the event structure differently now).

     I think it's probably because most people only use the event case for the UI (and generally wire -1 to it), and only a few are brave enough to base a whole inter-process messaging system purely on events. So if anyone was going to find it... it's you.

  5. I'm on the Justin and JG bandwagon (not very crowded yet :) )

     Doesn't anyone else think that a "timeout" that can never time out is at least a little strange in concept?

     Sure, you can find a use case for it. But I bet there are far more instances of head-scratching and cries of "WTF". On the shoot-yourself-in-the-foot scale of 1-10, it's a bit of an 11. It's also one of those "issues" that you come across from time to time where the whole app just doesn't work, but when you try to debug, everything works fine in isolation (or if you slow it down).

  6. Well, the DLL has to be the correct one for the actual LabVIEW platform, of course. But since OpenG ZLIB is distributed as an OpenG package, the package installer can make sure that the correct DLL is installed depending on the current LabVIEW version and platform. What I want to avoid is any platform-specific settings in the VI interfaces to the DLL; that would make distribution and maintenance of the library rather more complicated.

     I don't have a separate wrapper DLL, but have combined all the code (zlib, minizip, and wrapper code) into one library. This library is compiled into whatever platform shared-library format is required, including Win32 (Win64 hopefully soon), Mac OS X, Linux, and VxWorks 6.1 and 6.3. All of them are included in the ogp, with the Mac OS X shared library being zipped up first to avoid losing the resource fork of the files; then OGPM or VIPM takes care to install the one that is required for the LabVIEW version the package is installed into (and unzips the library through a custom post-install step in the package for the Mac OS X platform).

     All the VIs and other help files are supposed to be platform independent and stay that way if at all possible. The wrapper code is where I have spent some time to make that independence happen.

     And the delivery takes a little longer since I went for a Dell Latitude machine. Also there are company-internal delivery paths that add some time to this too.

     I see.

     I thought I was missing a trick. That's a lot of work - kudos for even considering it.

     I took the decision a while ago to cater for cross-platform in LabVIEW rather than support my own wrappers or modify pre-tested API source. I figured that platforms change rarely, but DLLs change often. If I just interface directly to the DLL from LabVIEW, I can also take advantage of tested, pre-compiled binaries from the developers and just download and overwrite. But I am not as comfortable in C as you are. I also don't have to contend with backward compatibility to LabVIEW versions from 10 aeons ago (when there were no such things as "pointer-sized" ints). But presumably that could be worked around with the package builder too.

     I think there are a lot of people lurking on this thread waiting to pounce once you announce a 64-bit version.

  7. The zlib library is most likely not a problem; I have used the latest source code too. It's either an oversight in the Call Library Node configuration - since I attempted to make the wrapper functions work in 32-bit and 64-bit without modifications to the VIs - or something in the wrapper code that goes wrong. I'll take a look at it when I have installed the new machine.

     As far as I'm aware, LabVIEW can only load a DLL of the correct bitness, so I envisaged at least two wrapper DLLs (one x32 and one x64, if you stayed with an intermediary) even if you managed to thunk down to a single 32-bit zlib (expecting four DLLs in total, though). There's obviously a trick I'm missing and I can't wait to see the solution. PC on next-day delivery?
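     One way a single VI interface can survive both bitnesses (which I take to be the trick being described - this is a guess, not his actual code) is for the wrapper to expose only fixed-width types, carrying any pointer in a 64-bit integer so that one Call Library Node configuration fits both DLLs. A hypothetical sketch in C, with made-up names:

```c
#include <stdint.h>
#include <stdlib.h>

/* Bitness-neutral wrapper exports: every parameter is a fixed-width
   integer, so one Call Library Node configuration works for both the
   32-bit and 64-bit builds. Pointers travel as uint64_t and are cast
   back down inside the wrapper. */

/* Create an opaque "stream" and hand it back as a 64-bit handle. */
__declspec(dllexport) uint64_t wrap_stream_create(void)
{
    void *stream = malloc(64);          /* stand-in for real init code */
    return (uint64_t)(uintptr_t)stream; /* widening is safe on both bitnesses */
}

__declspec(dllexport) int32_t wrap_stream_destroy(uint64_t handle)
{
    free((void *)(uintptr_t)handle);    /* narrow back to a native pointer */
    return 0;
}
```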

  8. Well, I'm soon going to get a new machine and it will most likely come with Windows 7 installed (not really happy about this, but I probably have to bite the bullet at some point). One advantage will be that it is going to be 64-bit, and therefore I can do some debugging of my own. My first dry exercise with just compiling a 64-bit DLL did crash on Jim's computer, so there must be something still wrong with the DLL.

     Is it the wrapper DLL or the zlib DLL that's crashing?

     I have a zlib DLL that I'm using successfully in LV x64, if that's what is causing you problems. I'm not using it for zip files, as I'm currently using the LabVIEW-shipped ones (although it is compiled with the minizip 64-bit functions - not sure if you use them or not). It passes all the zlib and minizip tests, and LabVIEW isn't complaining (or dying) when compressing buffers.

    Anyhoo. I'm attaching it in the hope it might save you a bit of time.
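     If anyone wants a quick sanity check of a zlib binary, the classic round trip with zlib's one-shot API looks like this in C (standard zlib calls; link with -lz):

```c
#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Round-trip sanity check for a zlib build using the one-shot API. */
int main(void)
{
    const char *msg = "hello hello hello hello hello";
    uLong srcLen = (uLong)strlen(msg) + 1;

    Bytef packed[256];
    uLongf packedLen = sizeof(packed);
    if (compress(packed, &packedLen, (const Bytef *)msg, srcLen) != Z_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }

    Bytef unpacked[256];
    uLongf unpackedLen = sizeof(unpacked);
    if (uncompress(unpacked, &unpackedLen, packed, packedLen) != Z_OK) {
        fprintf(stderr, "uncompress failed\n");
        return 1;
    }

    printf("%lu -> %lu -> %lu bytes, match=%d\n",
           (unsigned long)srcLen, (unsigned long)packedLen,
           (unsigned long)unpackedLen,
           memcmp(msg, unpacked, srcLen) == 0);
    return 0;
}
```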

  9. Is it possible to benchmark the transmit times of local variables?

    I've run some simple benchmark stuff before, but never something that would figure out how long it took to get a piece of data through a local variable.

     I've attached a simple program that sends some info through a local variable, and there's obviously some lag that's visible - well, at least visible on the system I'm running. I know that lag depends on the system the program is running on, so I'd really like to find a way to test it on the various PCs that we use here.

    Any ideas?

    -Ian

     I don't think your program is doing what you think it is doing. Your top loop is updating the indicators every 1 second, and your bottom loop will show you the new value about 0.25 s after that.

    The "lag" is because you are only updating the command2 indicators every 250 ms. Set the 250 ms wait to zero and they will change instantly. You would be hard-pushed to measure the read/write time of a local variable (which would be a few cycles of the computers clock).

  10. Would it be useful if I generated a LabVIEW-only AES128/256 solution? I am thinking of making it one of the 'student challenges' next quarter; see what they come up with.

    -Justin

     I think that'd be great. There's very little in native LabVIEW for encryption (for free).

     It must be nice to have so much free resource to work on your little nuggets.

  11. I decided to cut my losses and live with imperfect counting for now. If I miss one or two touchdown counts out of a thousand I don't really care. It turned out to be somewhat painful to replace the contents of a file. I had to find the length of the new string, manually move the EOF byte to the new string length, then write the new data from the start of the file.

     The only real (practical) way out of this scenario is to use a database, which handles locking, delayed writes, and concurrency for you. That's why websites run off databases and not file systems.
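     A minimal sketch of that idea, assuming SQLite (standard C API): the database serializes concurrent writers for you, and sqlite3_busy_timeout() is the "wait for the lock" step, for free. The table and file names here are made up.

```c
#include <stdio.h>
#include <sqlite3.h>

/* Let the database serialize concurrent writers. busy_timeout makes
   this process wait (up to 2 s) for another process's lock to clear
   instead of failing immediately. */
int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("shared.db", &db) != SQLITE_OK) {
        sqlite3_close(db);
        return 1;
    }

    sqlite3_busy_timeout(db, 2000);

    char *err = NULL;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS counters(name TEXT PRIMARY KEY, n INTEGER);"
        "INSERT OR IGNORE INTO counters VALUES('touchdowns', 0);"
        "UPDATE counters SET n = n + 1 WHERE name = 'touchdowns';",
        NULL, NULL, &err);
    if (err) { fprintf(stderr, "%s\n", err); sqlite3_free(err); }

    sqlite3_close(db);
    return 0;
}
```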

  12. Maybe Win7 eats up tons of that lower memory, leaving less for 32-bit apps.

    Bingo!

     It's not really an x32-on-x64 problem; it's an x32-on-Windows-7 problem. There are only a finite number of addresses that can be reached with a 32-bit number, and the problem is that the free ones are all towards the top end of the range, since Win7 hogs most of the bottom.

     Without switching to x64 you will still be limited in the address space, so 20 is probably the best you can hope for. The trick is how to offset those addresses against the Windows 7 OS to claw back some memory for your app.

     Try having a read of this. It works on Windows 7 (I believe and have been told, but have never tried it).
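     For context: a 32-bit process normally gets a 2 GB user address space, and the usual work-around (which I assume is what the linked article covers) is marking the executable large-address-aware. You can check what a process actually got with a few lines of Win32:

```c
#include <windows.h>
#include <stdio.h>

/* Report how much virtual address space this process was granted.
   A default 32-bit build reports ~2 GB; one linked /LARGEADDRESSAWARE
   reports ~3-4 GB depending on the OS configuration. */
int main(void)
{
    MEMORYSTATUSEX ms = { 0 };
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    printf("Total virtual address space: %.2f GB\n",
           ms.ullTotalVirtual / (1024.0 * 1024.0 * 1024.0));
    printf("Available:                   %.2f GB\n",
           ms.ullAvailVirtual / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```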

    (never upgrade until the first SP).

     Where have I heard that before...

  13. I did a little research and it appears that this is not as simple as I thought. Serializing access to a shared resource seems to be a potential trouble spot for any kind of software. The only way to guarantee that a race condition does not occur is for process A to lock the file, and for process B to wait for the lock to be released. By accident I have created a poor man's file lock by deleting the original file after reading, although there is still a narrow window where a race condition can occur. Does anyone know if there is a "file lock" VI and a "wait for lock" VI? Can I do this from the command line in Unix using the System Exec VI?

    Ok, here is an idea I got from the internet:

    1. Check for the existence of sharedfile.lock
    2. If sharedfile.lock exists, wait 10ms, go back to step 1
    3. Create sharedfile.lock
    4. Open sharedfile.txt
    5. Overwrite sharedfile.txt with new value
    6. Delete sharedfile.lock

    Does this sound like it would work?

     Why not just use the "Deny Access" VI? (It should be in LV 7.0, under File I/O >> Advanced File Functions.)
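     For reference, the check-then-create sequence in steps 1-3 above has the very race it's trying to prevent: two processes can both pass step 1 before either reaches step 3. The usual fix is to make create-if-absent one atomic call; on Unix that's open() with O_CREAT|O_EXCL, as in this C sketch (file names taken from the steps above):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Atomic version of steps 1-3: O_CREAT|O_EXCL creates the lock file
   only if it does not already exist, in one kernel call, so two
   processes can never both "win". */
static int acquire_lock(const char *lockpath)
{
    for (;;) {
        int fd = open(lockpath, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd >= 0) return fd;          /* we own the lock */
        if (errno != EEXIST) return -1;  /* real error */
        usleep(10000);                   /* step 2: wait 10 ms and retry */
    }
}

int main(void)
{
    int fd = acquire_lock("sharedfile.lock");
    if (fd < 0) { perror("lock"); return 1; }

    /* ... steps 4-5: rewrite sharedfile.txt here ... */

    close(fd);
    unlink("sharedfile.lock");           /* step 6: release the lock */
    return 0;
}
```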

  14. ShaunR

     It's interesting - I didn't know that. Thanks. I use Firefox and I did what you said, but it just pasted as an image and was not converted to code. How can we differentiate a simple PNG file from a snippet?

     If you are having difficulty with Firefox, open the image in a new tab before dragging it to the desktop. It can be a bit of a pain in Firefox (it's even worse in Chrome).

     You can tell it's a snippet because it has the hand, arrow, and LabVIEW icon in the top-left of the diagram image. It also has the LabVIEW version in the top-right.

  15. Thanks Neville, but it's a PNG file.

     If you are using Explorer, just drag the image from Explorer to an empty diagram.

     If you are using Firefox, drag it to your desktop, then drag it from there to an empty diagram.

     VI snippets are PNG image files with the actual code embedded in the image. When you drag the PNG image onto a diagram, LabVIEW will re-create the code.

  16. Can't help with the problem, but maybe I can explain how it is different.

    It is a "plug-in" control and resides in the "\resource\PlugInControls" directory. As far as I'm aware, it is an undocumented interface allowing controls to be created from external DLLs and resources. I have a feeling it was the way NI was going for custom controls before they decided on Xcontrols.

  17. Well. My 2 cents.

     In practical terms, to transmit data over TCP/IP you only need to know the length (ignoring transport layers - this is at the application layer). How you bundle that data into the payload is irrelevant as long as you know how many bytes you are expecting. So the simplest and most effective scheme is an n-bit length followed by your payload. You can use delimiters, but then you cannot send binary data without escaping it all, and/or you have to put a lot more logic into your software to keep reading and testing data to find the end.

     That ticks all your boxes for sending and receiving. It's the payload, however, that you need to decide how to package to make it "future-proof". Abstract the interface from the data and treat them separately. Once you have decided how you are going to package it, it will either be a simple case of adding a length byte and transmitting, or the packaging will dictate that you use delimiters and (probably) some bloaty engine to parse it.
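     Concretely, the length-prefix scheme is just: send a fixed-size byte count, then the payload; the reader does two sized reads and never scans for delimiters. A sketch of the two helpers over plain sockets in C (POSIX here, with a 4-byte big-endian count; the logic is the same in any environment):

```c
#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Length-prefixed framing: a 4-byte big-endian count, then the payload. */

static int read_exact(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0) return -1;   /* error or peer closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

int send_frame(int sock, const void *payload, uint32_t len)
{
    uint32_t hdr = htonl(len);
    /* A production version would also loop on partial sends. */
    if (send(sock, &hdr, sizeof(hdr), 0) != (ssize_t)sizeof(hdr)) return -1;
    if (send(sock, payload, len, 0) != (ssize_t)len) return -1;
    return 0;
}

int recv_frame(int sock, void *buf, uint32_t maxlen, uint32_t *outlen)
{
    uint32_t hdr;
    if (read_exact(sock, &hdr, sizeof(hdr)) != 0) return -1;
    *outlen = ntohl(hdr);
    if (*outlen > maxlen) return -1;   /* refuse oversized frames */
    return read_exact(sock, buf, *outlen);
}
```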
