Everything posted by mje

  1. Hah, yeah, this is a mature product. I had considered yanking the express VI call in favor of the direct Win API calls (something like the sketch below) when we iterated the development cycle to 2013, but I have yet to see the issue since we moved on. Speaking of which, we are days from freezing the code base on this cycle. Timely resurfacing of this bug: early enough to make me nervous, but late enough that I can't do anything about it. *Sigh*... I regret not moving to the API, since this is a Windows-locked application. At least the bug surfaced on the legacy 2011 build; I still have no indication this is an issue with 2013.
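
     A minimal sketch of the kind of direct Win API call alluded to above, assuming the Win32 common dialog (GetOpenFileNameW from comdlg32) is the target; this is illustrative, not the application's actual code:

```cpp
// Minimal illustrative sketch: call the Win32 common "Open" dialog directly
// instead of going through a higher-level wrapper. Link against comdlg32.lib.
#include <windows.h>
#include <commdlg.h>
#include <string>

std::wstring PromptForFile(HWND owner)
{
    wchar_t path[MAX_PATH] = L"";

    OPENFILENAMEW ofn = {};            // zero-initialize the struct
    ofn.lStructSize = sizeof(ofn);
    ofn.hwndOwner   = owner;
    ofn.lpstrFile   = path;
    ofn.nMaxFile    = MAX_PATH;
    ofn.lpstrFilter = L"All Files\0*.*\0";   // filter pairs, double-null terminated
    ofn.Flags       = OFN_FILEMUSTEXIST | OFN_PATHMUSTEXIST;

    // Returns nonzero on success; zero if the user cancelled or an error occurred.
    if (GetOpenFileNameW(&ofn))
        return path;
    return L"";
}
```
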
  2. The application in question was compiled in LabVIEW 2011 64-bit (Windows). I can't recall what level of patches or service packs were used to create it.
  3. Resurrecting this thread: the problem re-emerged on the same application today. I would start it up, execute my File > Open command, and the dialog would appear half-drawn and deadlocked in a non-default position and size. Repeat this a few times with the same result; the dialog always appears in the same location, only partially rendered. Fire up my application again, then start up procmon.exe with appropriate filters in place to catch only stuff from the target PID, and... the problem goes away. The dialog appears in the default centered position on the active monitor and is responsive. From this point on the application works normally, with or without procmon.exe running.

     Maybe there's some underlying race condition in opening that dialog and setting position/style/etc. that deadlocks, but when procmon hooks into things the timing may be nudged just enough to allow dialog initialization to execute to completion. Maybe it's a security thing, and with the procmon hooks in place default initialization data is used. I really don't know; I'm grasping at straws here. I wish I knew whether it is LabVIEW or Windows that is responsible for serializing that data. Let's be clear, data most definitely is being serialized: window size, position, and style are consistent each time the dialog is called. This information survives application restarts and system restarts; it persists across any scope over which I have control. This data must exist somewhere and is causing me grief, and I'd like to have a way to clear it in situations like this.

     Relying on procmon.exe to "nudge" things enough to sort things out is most definitely not an acceptable solution, even if it is what works (for an unscientific sample size of two). Well, at least I didn't lose a week this time around.
  4. Interesting problem, AlexA. I've only ever done that with individual events, that is, ones which are not clustered. I don't know if you can pick off an individual cluster element to register; I've definitely never thought to try.
  5. From the context of your question I got the impression you were asking about user events and dynamic registration. To be clear, also realize that the lossless generalization falls apart for statically registered events (the ones configured directly in an event structure linked to a control). You can configure those to act via a FIFO buffer, so they can be lossy.
  6. To be pedantic, there is nothing lossless or lossy about events. Events by themselves are only a means of tracking subscribers. It is the registration refnum that wraps the event queue, and yes, that is lossless.
  7. I'm no doubt in the minority given the current list of replies, but I'd say warnings or feedback aren't the way to go. I'd prefer the effort go toward building safe constructs that leverage the existing strengths of the language. I mean, a DVR is almost there: what if you could have implicit named globals that are selectable by name at the boundary of an IPE structure? Instant "safe" global; the user doesn't need to know anything about the synchronization going on behind the scenes (see the rough sketch below). I realize of course this is no safer than a normal DVR, but you'd be solving a lot of synchronization issues surrounding globals right out of the gate using pieces that are already there... Even better if these named variables need not be global, but could be local or arbitrarily scoped data. Basically I don't believe it's the IDE's job to teach the programmer, which is what a warning system is if it's not completely ignored. The IDE's job is to give me rock-solid tools I can use easily.
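
     A rough C++ analogy of the idea (my own sketch; nothing like this exists in LabVIEW): the data behind the named global can only be touched inside a locked scope, much as an IPE structure would bracket the access:

```cpp
// Rough analogy only: a "safe global" whose value is reachable solely inside
// a locked callback, so readers and writers are serialized implicitly.
#include <functional>
#include <mutex>

template <typename T>
class SafeGlobal
{
public:
    // All access happens inside the callback while the mutex is held,
    // mirroring how an IPE-style structure would bracket the read-modify-write.
    void Access(const std::function<void(T&)>& body)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        body(m_value);
    }

private:
    std::mutex m_mutex;
    T          m_value{};
};

// Usage: counter.Access([](int& v) { v += 1; });
// The caller never sees the mutex; the synchronization is part of the construct.
```
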
  8. I'm with Shaun here: warnings in their current state are ineffective due to their overwhelming nature. I can't remember the last time I paid attention to the warning list. Perhaps if we were given a way of disabling classes of warnings such that the list became manageable, but otherwise I doubt most would ever see a race condition warning.
  9. As far as breaking VIs when such a condition is detected, I need to give it some thought. I know I've been guilty of leaving one particular race in production code simply because if the race were ever to be "lost" it really wouldn't matter, but fixing the race would be a very risky and time-consuming bit of work. The corollary to that is: if the outcome doesn't matter, is it really a race condition or more an undefined order of execution? I'd tend to argue the latter, but could still see some small bit of misguided merit to the race side of that coin. Regardless of what action is taken, I think the possibility of having the compiler detect even the most primitive of such conditions is a great advancement. Nice work. Given how easy it is to leverage LabVIEW to make parallel processes, I think educating users by pointing out the resulting races is a very good step toward making LabVIEW a more mature development environment.
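
     A toy C++ illustration of that distinction (an aside of mine, not from the original discussion): two parallel writers whose order is undefined, where the outcome only depends on scheduling if they write different values:

```cpp
// Toy illustration only: two parallel branches write to the same shared value.
#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    std::atomic<int> status{0};

    // Both branches write the *same* value: the order of execution is
    // undefined, but the outcome cannot differ. Arguably that is just an
    // undefined execution order rather than a race worth fixing.
    std::thread a([&] { status.store(1); });
    std::thread b([&] { status.store(1); });
    a.join();
    b.join();

    // Had the branches written different values (say 1 and 2), the final
    // result would depend on which store happened last; that is the case a
    // compiler warning or a broken VI would be aimed at.
    std::cout << "status = " << status.load() << '\n';
    return 0;
}
```
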
  10. Careful. You most certainly will not have a full 4 GB to use. In practice I've never got close to the limit because dynamic memory allocations begin failing long before getting there. Chances are if you're using that much memory, it's not with a bunch of scalars. I get nervous when I see memory footprints nearing 2 GB for LabVIEW.
  11. This. For sure. I’ve been falling back to the MV/MVC architecture quite a bit recently. It. Just. Works. Some applications don’t even need a controller, hence the MV-only option. I try to use either form when I can. The key point is the model doesn’t know anything about the views; it’s just an interface through which the views can get data. Used properly, none of the views need to know about other views: as far as they’re concerned, it’s just themselves and the model.

      This is how I like to see it happen: the model is some singleton resource, typically a data value reference. I try to keep the model passive. No actors or loops. If your model needs to do something in a loop, you’re already thinking about views that attach to the model. The model is the data interface and nothing more. The model will expose one or more subjects to which any number of views can subscribe. When a subject is changed, the subscribers to that subject are notified of the change with some contextual information.

      For example, my model may have an ActiveItem property which can be altered. Some of my views care what this ActiveItem is, so they subscribe as an observer to the property. When the ActiveItem is changed, they will be notified about it and be told directly via the notification what the new ActiveItem is. For some views this is all the information they need. Others may use the contextual data from the notification to further interrogate the model. It’s a nice mix of broadcasting small copies of contextual data by value, while large state information remains singular behind a shared resource. Each view only subscribes to what it needs and only copies what it needs to act.

      When I first started doing this I used to hard-code the notification mechanism, but found myself often creating brainless loops whose sole responsibility was to translate one transport mechanism to another (convert a queue to a user event, for example). When going about this, do yourself a favor and abstract the subject/observer interface (see the sketch below). The subject shouldn't care if the transport mechanism is a notifier, queue, user event, or some derived construct. Make the subject take an abstract class, and have the observer decide how it would best like to receive that notification. Is my observer an Actor? Fine, it will supply a concrete observer class that packages the notification into a message and shoots it off. Maybe my observer is a primitive UI loop? Fine, it will supply an observer that packages the notification into a user event. Maybe my observer is a remote object, so we use a class which pushes notifications over TCP/IP. You can change all of this without any modification to the model or other views which already use the model.
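
      A minimal text-language sketch of that abstracted subject/observer interface, written in C++ since a LabVIEW diagram can't be pasted here; the names (Subject, Observer, Notification, ConsoleObserver) are my own and only illustrate the shape of the idea:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// The small contextual payload broadcast to subscribers (e.g. the new ActiveItem).
struct Notification
{
    std::string subject;
    std::string newValue;
};

// Abstract transport: the model never cares how delivery actually happens.
class Observer
{
public:
    virtual ~Observer() = default;
    virtual void Notify(const Notification& n) = 0;
};

// A subject owned by the model; views register whatever concrete observer suits them.
class Subject
{
public:
    void Subscribe(std::shared_ptr<Observer> obs) { m_observers.push_back(std::move(obs)); }

    void Publish(const Notification& n)
    {
        for (auto& obs : m_observers)
            obs->Notify(n);
    }

private:
    std::vector<std::shared_ptr<Observer>> m_observers;
};

// One concrete transport: a view that handles the callback inline. Another
// could enqueue a message for an actor, post a user event, or push over TCP/IP.
class ConsoleObserver : public Observer
{
public:
    void Notify(const Notification& n) override
    {
        std::cout << n.subject << " changed to " << n.newValue << '\n';
    }
};

int main()
{
    Subject activeItem;
    activeItem.Subscribe(std::make_shared<ConsoleObserver>());
    activeItem.Publish({"ActiveItem", "Channel 3"});
}
```
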
  12. Back before I started using databases, I made the switch to 64-bit LabVIEW because the memory footprint of one of our applications was getting way out of hand. That's the only real reason to go to 64-bit in LabVIEW if you ask me. Databases have since solved that problem for us, so we continue to offer 32-bit versions. It seems backwards to revert to 32-bit-only deployments, so we stuck with supporting both architectures even though there's no real reason to do the 64-bit thing anymore. We mostly deploy to Win7-64; before that existed it was Vista-64. We still have a substantial XP-32 base, probably more so than both architectures of Win8 combined. Despite 64-bit being our primary deployment target, all development is done in the 32-bit IDE on a Win7-64 architecture. We only spin up the 64-bit IDE to execute builds. We build mixed-mode installers that deploy the 64-bit binaries if they can, falling back on 32-bit if required. Things I can think of, in no particular order:
      • Last I checked, 64-bit doesn't have full support for all the drivers and toolkits. You seem to be aware of this, but I wanted to make sure it got on the list.
      • LabVIEW 64-bit for Windows is treated as a completely different platform from LabVIEW 32-bit for Windows. It's really no different than jumping to Linux or RT except both platforms have the name "Windows" in them. A different platform means recompiling. May as well start keeping compiled code separate from source code if you're not in that practice already. Alternatively, just don't worry about changes that get made in the 64-bit IDE if you're only using it for building.
      • A different platform also means all of your DLL calls via Call Library Function (CLF) nodes can get tricky. Depending on what you're calling there are a few options. Worst-case scenario, you may need to wrap your CLF nodes in conditional disable structures such that the right DLL gets called depending on platform. The exception to this is Win32 calls, which just magically work due to WoW64. Seriously, it's magic; don't try to think too hard about it. If your DLLs are named appropriately, you may be able to get by with a single CLF node that figures out what to call when compiled.
      • Be aware of the "Pointer-sized Integer" and "Unsigned Pointer-sized Integer" arguments for CLF nodes when dealing with pointers. Do not use fixed-size arguments if your CLF is going to adapt depending on platform. Use 64-bit integers when moving pointer data around on the block diagram; LabVIEW is smart enough to figure out what to do with a 64-bit number when it hits a CLF node with a USZ or SZ terminal compiled on a 32-bit platform (see the sketch below).
      • We have pretty strict rules against touching the TCP/IP stack, so I have no experience with VI Server between architectures.
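
      For the pointer-sized terminal point above, a small C++ illustration (mine, not from the post) of why a fixed-width 32-bit integer is the wrong carrier for a pointer that must survive both builds, and why a 64-bit integer is always a safe container:

```cpp
// Sketch only: pointer width changes with the platform, so whatever carries
// the pointer value must change width too (or always be 64 bits wide).
#include <cstdint>
#include <cstdio>

int main()
{
    void* p = nullptr;

    // intptr_t / uintptr_t are the C-side equivalents of LabVIEW's
    // (unsigned) pointer-sized integer terminals: 4 bytes in a 32-bit build,
    // 8 bytes in a 64-bit build.
    std::printf("sizeof(void*)    = %zu\n", sizeof(void*));
    std::printf("sizeof(intptr_t) = %zu\n", sizeof(std::intptr_t));

    // Carrying the value in a 64-bit integer is always safe: a 32-bit pointer
    // widens without loss, and a 64-bit pointer fits exactly.
    std::uint64_t carried = reinterpret_cast<std::uintptr_t>(p);
    (void)carried;
    return 0;
}
```
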
  13. Having not had the chance to download your code... When I've done circular buffers I've only ever rotated the array if and when a subset is read and the rotation is required (something like the sketch below). It's expensive, yes, but if all you're ever doing is reading a scalar, you can go the entire lifetime without ever rotating. Note this is from the perspective of non-RT use; you may wish to conditionally force a rotation on each subset read to reduce jitter.
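
     A minimal C++ sketch of that lazy-rotation idea (my own illustration, capacity assumed nonzero): scalar reads index directly into the ring, and a contiguous copy is only built when a subset is actually requested:

```cpp
#include <cstddef>
#include <vector>

template <typename T>
class CircularBuffer
{
public:
    explicit CircularBuffer(std::size_t capacity) : m_data(capacity) {}

    void Write(const T& value)
    {
        m_data[m_head] = value;
        m_head = (m_head + 1) % m_data.size();
        if (m_count < m_data.size()) ++m_count;
    }

    // Cheap path: read one element relative to the oldest sample, no rotation.
    const T& ReadScalar(std::size_t offset) const
    {
        return m_data[(Oldest() + offset) % m_data.size()];
    }

    // Expensive path: build a contiguous copy only when a subset is requested.
    std::vector<T> ReadSubset(std::size_t offset, std::size_t length) const
    {
        std::vector<T> out;
        out.reserve(length);
        for (std::size_t i = 0; i < length; ++i)
            out.push_back(ReadScalar(offset + i));
        return out;
    }

private:
    // Oldest valid element: index 0 until the buffer wraps, then the head.
    std::size_t Oldest() const { return (m_count < m_data.size()) ? 0 : m_head; }

    std::vector<T> m_data;
    std::size_t    m_head  = 0;
    std::size_t    m_count = 0;
};
```
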
  14. I thought the myDAQ was based on the Zynq chip and had a small FPGA to play with? If not, most of my interest in the device just evaporated.
  15. I think the confusion is that in this context you're not trying to describe a property of the VI itself, but rather its role in the execution system or position in the call stack/tree. When the VI returns does the execution system hand off to the next item up or does it clean up everything because it was the top? Top level VI vs (call) stack VI? Can't say I like that name any better. I'm skeptical a new name is needed provided the document properly explains the context of what "subVI" is referring to.
  16. I'm running off memory here on a project I last touched about a year and a half ago, so things are a little bit rusty. There is a thread, though, regarding the issues I came up against. The limitation that word lengths are limited to 8 bits and every word is a discrete set of clock pulses is a potential deal breaker, though it didn't affect me in the end. If you're pushing/pulling registers larger than 8 bits you need to combine discrete transfers (see the sketch below), which may introduce issues depending on the timing strictness of the device on the other end. My biggest beef with the devices is they locked out the lowest-level palette, which in theory could do most of what I wanted. Since I had a dozen of these things, I instead had to do some really ugly software triggering which I would never want to push into a production environment. Even if the low-level palette were unlocked, it still has limitations on frame length which seem rather arbitrary to me, and I question whether I could have been successful with their other 8452 option. What I learned from working with those devices is they are really only useful for very simple operations. SPI is a low-level protocol, so I expect to be able to configure the hell out of it. The NI API just isn't flexible enough, or if it is, it isn't documented well enough. The next time I need SPI, I'll be going down to the FPGA level (or perhaps another vendor). PS: Sorry for the hijack. I have no recommendations for your issue, Jim.
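
      Purely to illustrate the 8-bit word limitation (the register layout and helper here are hypothetical, not NI's API): a 16-bit register write ends up split across multiple one-byte SPI words, each with its own burst of clock pulses:

```cpp
// Hypothetical example: build the byte sequence for writing a 16-bit value to
// an 8-bit-addressed register when the bus only moves 8-bit words.
#include <array>
#include <cstdint>

std::array<std::uint8_t, 3> BuildWriteFrame(std::uint8_t regAddress, std::uint16_t value)
{
    return {
        regAddress,                                   // word 1: register address
        static_cast<std::uint8_t>(value >> 8),        // word 2: high byte
        static_cast<std::uint8_t>(value & 0xFF)       // word 3: low byte
    };
    // If the device requires the whole register to be latched within a single
    // chip-select assertion, three separately clocked 8-bit transfers may
    // violate its timing, which is exactly the potential deal breaker above.
}
```
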
  17. Freezing for 30-60 seconds sounds like a timeout issue: specifically network or old physical media. While perhaps related, the issue in the linked thread produces an indefinite deadlock. Also note that some posts from that thread are missing for whatever reason; someone must have deleted their account.
  18. We have about a dozen of those devices. While neat and good for simple SPI stuff, we have found them rather limiting. For most applications we end up with some horrible hacks and software timing to overcome limitations in the NI SPI API, or at least the parts of the API that that device allows you to use.
  19. A thread started by GregFreeman has an example of where I would consider using the Preserve Run-Time Class primitive. It is indeed a rare VI, but has its uses.
  20. The behavior of the primitive is that if the cast succeeds, the object is passed through with no effect. If the cast fails, you get the default value of whatever run-time type is wired to the center terminal. This is really the point of the primitive: being able to create generic methods where, by using the primitive, you are telling LabVIEW, "The output of this VI will always be the same type as the input." This is done automatically with dynamic dispatch paired terminals, but there are cases where you wish to do this with other terminals. Think for example of how many of the LabVIEW VIs have a default value input terminal and a corresponding output terminal that somehow automatically adapts to the same type as wired to the input. File I/O and variant attributes are two that come to mind which I use daily; there are many others. In object-oriented programming, the Preserve Run-Time Class primitive is how you would achieve this behavior if you wanted to write such a VI.

      The important thing to remember when dealing with this is that run-time object type is not the same as wire type on your block diagram. A wire can carry any object type that inherits from whatever type you're wiring. You don't even need to know the type of classes that can come in ahead of time. To More Specific Class tests against the wire type; the class it tests against does not change, it is set in stone as soon as you wire up that terminal. Preserve Run-Time Class tests against the run-time object type of the data on the wire. When it runs it determines the class that is wired up to the center terminal and evaluates whether the input object is of the same class.
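
      A loose C++ analogy of that distinction (my own framing, not a LabVIEW API): dynamic_cast against a class named at edit time plays roughly the role of To More Specific Class, while the function below compares the run-time types of two objects and falls back to a default instance of the target's run-time class:

```cpp
#include <memory>
#include <typeinfo>

class Base
{
public:
    virtual ~Base() = default;
    // Prototype hook so we can build "the default value of the run-time type".
    virtual std::unique_ptr<Base> CreateDefault() const = 0;
};

class Child : public Base
{
public:
    std::unique_ptr<Base> CreateDefault() const override { return std::make_unique<Child>(); }
};

// Passes `input` through untouched when its run-time type matches the
// run-time type of `target`; otherwise returns a default-constructed object
// of the target's run-time type. (A fuller analogy would also accept
// descendants of the target's class; typeid compares exact types only.)
std::unique_ptr<Base> PreserveRunTimeClass(std::unique_ptr<Base> input, const Base& target)
{
    if (input && typeid(*input) == typeid(target))
        return input;
    return target.CreateDefault();
}
```
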
  21. One of our long-deferred requirements on a project of mine is to support decent-quality vector graphics for our user interfaces that can better be used for publication, etc. We use the EMF functionality of the graph and picture controls for this, with mixed success: generally speaking the images are fine so long as no rotational transforms are used. Unfortunately most of the images have at least some rotated content (axis labels on graphs). I've been wondering about PostScript. As far as I can tell the EPS exports from LabVIEW are just plain broken on Windows. Or is it just a matter of having the right software? Does anyone have experience generating PostScript data from their Windows-based LabVIEW user interfaces and have recommendations? Is LabVIEW's EPS data even vector-based where appropriate, or does it just use EPS as a container to paint a pre-rendered raster image?
  22. Hah. Got it. "Context" can mean many things, depending on...context?
  23. The only way I know of is to modify or rebuild the tree in response to the changing search string. I don't think you can hide individual nodes.
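
      For what it's worth, a quick sketch (illustrative only, not tied to any particular tree control) of the rebuild approach: treat the full tree as the source of truth and regenerate a filtered copy, keeping matching nodes and their ancestors, every time the search string changes:

```cpp
#include <string>
#include <vector>

struct Node
{
    std::string       label;
    std::vector<Node> children;
};

// Appends a filtered copy of `node` to `out` (and returns true) if the node's
// label contains the search term or any descendant's label does.
bool Filter(const Node& node, const std::string& term, std::vector<Node>& out)
{
    Node copy{node.label, {}};
    bool keep = node.label.find(term) != std::string::npos;

    for (const auto& child : node.children)
        keep |= Filter(child, term, copy.children);   // keep ancestors of matches

    if (keep)
        out.push_back(std::move(copy));
    return keep;
}
```
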