Everything posted by shoneill

  1. QUOTE (dirkop @ Jan 30 2009, 09:05 AM) That's what captions are for. Assuming the PDA version works the same as the desktop version, right-click the control, hide the label and show the caption. The label must remain constant at run-time, but the caption can be changed willy-nilly. Shane.
  2. Being an experienced LabVIEW programmer is always a good place to start. Shane.
  3. Alfa String

     QUOTE (alfa @ Jan 14 2009, 09:22 AM) Get help.
  4. Windows 7

     QUOTE (jgcode @ Jan 13 2009, 05:21 AM) I feel old. I remember when we were calling XP bloated and slow. Shane.
  5. QUOTE (TobyD @ Jan 12 2009, 08:36 PM) I'm gonna sign up for the CLD soon, with an eye on CLA... I'm pretty sure the CLD should be doable. I'm even going to take a course beforehand! :ninja: How "advanced" is the CLA exam? I know there are examples for the CLD on the web, but how "architectural" is the CLA exam really? I'm pretty fluent in state machines, queued state machines and so on - is that the kind of thing tested? I suppose I could just do the courses to find out.... but that's more expensive than a question on LAVA. Shane PS Rude of me to forget: Congrats to everyone who passed an exam recently.....
  6. QUOTE (Jeffrey Habets @ Jan 11 2009, 12:31 PM) I did something which would probably work quite well for something like this. I had an application with a large number of images to be processed, all belonging to a single production-run wafer. I needed to read in the images, perform analysis on them and give some kind of visual feedback. I did this by using a picture control, drawing the objects at the appropriate positions (after reading the size of the visible picture and using that as the bounds). It's then possible, using the "mouse move", "mouse down" and similar events, to make the individual objects (not in a LVOOP sense) dance as you would like. For me, it showed a preview of the image being analysed by simply moving the mouse over the wafer, while highlighting the relevant object in the picture at the same time. Clicking on an item highlighted it and locked the display to that image until unselected. The whole thing took me a few days to implement, but if done right (as a component communicating asynchronously) it's very re-usable. Depending on the functionality you require, you can link all kinds of events to respond to user interaction. It requires a bit of advanced programming, but the results are cool. A logical next step is making the displayed objects real LVOOP objects (Draw, Highlight methods and so on) to allow for custom displays. Shane.
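The hit-testing behind those "mouse move" events is language-agnostic; a minimal sketch in Python, with the object names and rectangle bounds invented purely for illustration:

```python
def hit_test(objects, x, y):
    # objects: list of (name, (left, top, right, bottom)) bounding boxes,
    # i.e. the positions the items were drawn at in the picture control.
    # Returns the first object under the cursor, or None.
    for name, (left, top, right, bottom) in objects:
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

# On every "mouse move" event, feed the cursor coordinates in:
wafer_dies = [("die_A", (0, 0, 10, 10)), ("die_B", (20, 0, 30, 10))]
print(hit_test(wafer_dies, 25, 5))   # die_B
```

The event structure then only has to decide what "highlight" or "lock" means for the returned object.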
  7. QUOTE (sydney @ Jan 6 2009, 08:07 PM) No you can't. I had a look at this once and it's not a trivial task to implement. Buy MSOffice, you'll save yourself a lot of time and nerves. Shane.
  8. Whenever I try to design an object hierarchy like that, I always end up missing something, so I end up starting with an educated guess at the object structure. I suppose I'll get better with experience. It took LVOOP to make me actually understand the benefit of OOP. I was always one of those who thought "What's the point of OOP?". I think I've more or less grasped it now, though. Not that I don't have heaps still to learn. C'est la vie. Shane.
  9. QUOTE (Val Brown @ Dec 18 2008, 09:25 PM) I might be missing the point of the post here, but LVOOP is cross-platform. I've just recently used a LVOOP project created on a Windows machine under Mac, no code changes required whatsoever. Works great. Works better actually because of the Mac's .app directory magic. Shane.
  10. QUOTE (mesmith @ Dec 18 2008, 05:15 PM) I'm no OOP guru, but how will implementing by-ref LVOOP lead to portable code? I personally like LVOOP being by-value. It maintains a consistent behaviour throughout LabVIEW. Using the Vision toolkit leads to a lot of bugs for me because switching mentally between by-val and by-ref within the same code is not easy (not for me, anyway). And yes, I know the limitations of a by-ref Vision toolkit and that there are good reasons for a by-value approach. I just don't find it a very good fit with normal LV practices. As we've already heard from a few posts in this thread, it's also possible to make a by-ref implementation if necessary. What's the problem with that? Shane. PS I just read the post leading to this thread..... I think the abilities of LVOOP are being treated in kind of the same way LV was treated in the beginning - "But it can't do this, but it can't do that". Well yes, LV is still geared towards a primary type of application (even though the barriers are becoming less clear all the time) and I can fully understand the implementation behind LVOOP. I personally much prefer it this way to forcing a wholly by-ref approach. Each approach requires a bit of time to get used to, but can someone give a concrete example of something which CAN'T be done with LVOOP but could be done with by-ref LVOOP? And by "can't", I don't mean "awkward" or "not automatic", I mean "can't".
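For what it's worth, the "build by-ref on top of by-value" point can be sketched outside LabVIEW too. A rough Python analogy (class and field names invented), where a shared reference wraps an ordinary value, akin to putting a by-value class in a DVR:

```python
class Ref:
    # Shared mutable cell around a by-value payload. Every copy of the
    # Ref refers to the same data, while the payload itself stays an
    # ordinary value that can still be read and written by-value.
    def __init__(self, value):
        self._value = value

    def get(self):
        return self._value

    def set(self, value):
        self._value = value

a = Ref({"gain": 1.0})
b = a                    # copying the wire copies the reference, not the data
b.set({"gain": 2.0})
print(a.get())           # {'gain': 2.0}
```

The reverse direction (recovering by-value semantics from a by-ref-only design) is the awkward one, which is one argument for keeping the base semantics by-value.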
  11. Having never successfully programmed an XControl, I'd say they are pre-destined to become the UI representation of a LVOOP Class. I think they complement each other perfectly. LVOOP by design doesn't HAVE a UI component (Not directly anyway). I think an XControl for a LVOOP class is a very good extension of LVOOP abilities. Again, I might be way off. Shane.
  12. I reckoned it was LVOOP alright. Wouldn't want to attack the code otherwise. "The class diagram done, most things were simple". Ah, but therein lies the trickery of LVOOP. Getting the classes "done". Especially for Plebs like me who have trouble thinking in OOP. Shane.
  13. Interesting. How many man-hours did that software take to produce? Looks relatively complicated.... Nice example of what can be done with picture controls though. Shane.
  14. QUOTE (andymatter @ Dec 11 2008, 01:00 AM) Yes, I think you're missing something. If you want to do BULK reads, you simply use the standard VISA Read function wired with a USB RAW VISA reference. This function has a context menu allowing you to choose between Sync and Async modes. Shouldn't this do exactly what you want? http://lavag.org/old_files/monthly_12_2008/post-3076-1228960482.png Shane.
  15. Well, assuming you're talking about BULK transfers (VISA doesn't support Isochronous mode transfers), then using the standard VISA Read and VISA Write will get the job done for you. I've never tried it, but these VIs have the option for asynchronous or synchronous mode. That would certainly be the easiest thing to try, and it would allow a quick change from async to sync and back to compare speeds. If you're using something other than BULK transfers, things get a little more interesting. For Interrupt transfers, for example, you can create more than one listener so that while one listener is busy processing data, the others are still actively monitoring the USB port. I wrote a tutorial over on the NI site. Maybe you'll find an answer there. Shane.
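The multiple-listener idea boils down to several workers blocking on the same port; a rough Python sketch with threads, where FakeUsbDevice is an invented stand-in (a real version would call VISA Read / PyVISA's read_raw) so the sketch runs without hardware:

```python
import queue
import threading
import time

class FakeUsbDevice:
    """Stand-in for a VISA USB RAW resource so this runs without hardware."""
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def read_raw(self, size=64):
        # Simulate a blocking Interrupt/BULK-IN transfer.
        with self._lock:
            self._count += 1
            n = self._count
        time.sleep(0.005)          # pretend the transfer takes a moment
        return b"packet-%d" % n

def listener(dev, out_q, stop):
    # One of several listener threads: while one is busy handing a
    # packet off for processing, the others keep monitoring the port.
    while not stop.is_set():
        out_q.put(dev.read_raw())

dev = FakeUsbDevice()
packets = queue.Queue()
stop = threading.Event()
listeners = [threading.Thread(target=listener, args=(dev, packets, stop))
             for _ in range(2)]
for t in listeners:
    t.start()
time.sleep(0.1)
stop.set()
for t in listeners:
    t.join()

received = []
while not packets.empty():
    received.append(packets.get())
```

With real hardware the queued packets would feed a separate processing loop, the same producer/consumer shape you'd build in LabVIEW.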
  16. QUOTE (Aristos Queue @ Dec 9 2008, 09:08 PM) It must be fascinating living outside our normal time-space continuum.... :worship: Shane.
  17. QUOTE (Aristos Queue @ Dec 9 2008, 04:51 PM) Hmm. Interesting. I knew the 24-bit part, but this extra info on the remaining 8 bits sounds quite interesting. Is there more info on this? Shane.
  18. NVIDIA CUDA

     And as if someone had read my post from earlier: Khronos has released the OpenCL 1.0 specification. Apparently ATI and NVIDIA are on board. OpenCL should be in Mac OS X 10.6. Shane.
  19. QUOTE (vugie @ Dec 9 2008, 03:25 PM) In LV 8.2.1 the VI isn't protected..... It may be faster for a single value, but how about an array of U32 values? Shane
  20. NVIDIA CUDA

     Won't Larrabee, Fusion >insert marketing name here< and so on make CUDA a short-lived thing? Aren't we moving towards general-purpose GPUs already, so that a single standardised interface (à la OpenGL) would be the way to go? Maybe Nvidia will have an epiphany and make CUDA an open specification..... :headbang: I remember the early days of 3D acceleration in games, where there were different game binaries for each graphics card. OpenGL was (and may again be) the solution to those problems. Otherwise the idea is fascinating. Levenberg-Marquardt optimisation on my GPU. That would be cool. Shane.
  21. QUOTE (vugie @ Dec 9 2008, 12:38 PM) I set the VI parameters for this to "Subroutine" ages ago. That seems to speed things up a bit. And disabling debugging and so on helps too. You can also cast a U32 to a cluster containing 4 U8s instead of using the sub-VI. It's actually faster, if I remember correctly. To speed things up further, you can cast an array of U32 to an array of clusters of 4 U8s each and then process the data of the U8 cluster array individually. Shane.
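The cast trick is easy to illustrate outside LabVIEW; a minimal Python sketch using struct (function names invented; LabVIEW's Type Cast is big-endian, hence the ">" format characters):

```python
import struct

def u32_to_bytes(value):
    # Equivalent of casting one U32 to a cluster of 4 U8s.
    return struct.unpack(">4B", struct.pack(">I", value))

def u32_array_to_bytes(values):
    # Whole-array version: one cast instead of a per-element sub-VI call,
    # which is where the speed-up in the post comes from.
    n = len(values)
    return struct.unpack(">%dB" % (4 * n), struct.pack(">%dI" % n, *values))

print(u32_to_bytes(0x01020304))   # (1, 2, 3, 4)
```

The point mirrors the LabVIEW one: reinterpreting the buffer in bulk avoids per-element call overhead.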
  22. QUOTE (rolfk @ Oct 25 2008, 08:32 PM) Yup, using splitter bars and setting the scaling behaviour of the individual panes is probably the easiest way to do things. Did this recently and was amazed how much code it saved.... Shane.
  23. QUOTE (Aristos Queue @ Dec 8 2008, 09:04 PM) Ah but here we have a definition problem. How intelligent? Is being able to read and write enough? I've had face-to-face conversations with people and have come away certain I could distinguish those people from an intelligent human being any day of the week... Shane.
  24. QUOTE (God @ Dec 8 2008, 04:33 AM) Maybe I'm being a stickler, but I would have expected the creator of the universe to have better control over the language and the keyboard... Shane.
  25. QUOTE (jed @ Nov 19 2008, 05:19 PM) I don't know about the 416 specifically, but I have used an Edgeport in the past. It worked perfectly. Some USB adapters have problems with one or more of the "Flush buffer" commands for the serial port. This one didn't; it worked just like a built-in serial port. Shane