Everything posted by ShaunR
-
No. Your DAQ definitely won't be enough to drive the motor. Motors are high current devices. You will need a separate power supply and control it via that. I agree with carlover in so much as you really need a dynamic tracking system (and it has been done before). You will also probably want to go backwards and forwards, as well as up and down (since the sun tracks in an arc, rising in the east and setting in the west every day), so a single motor probably won't be enough (you'll need two) and they will need to rotate in both directions. You might find this useful in thinking about what you want to achieve. Solar Tracking Project Although I'm at a loss as to why it took 13 weeks...lol.
-
QUOTE (Mark Yedinak @ Apr 29 2009, 09:53 PM) Read the thread again from 7 posts up.
-
Short answer: "yes". We need to know more too. How are you interfacing to the motor (is it RS232, a PWM card, a digital card, the parallel port, force of will?). Regardless of the interface, you will need a while loop that generates a timed pulse. There are many examples in the Labview examples directory. I would take some time to look at them, as quite often you can modify them to suit your needs.
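In a text language, the timed-pulse while loop amounts to the sketch below (Python purely for illustration; `write_pin` is a placeholder for whatever your interface card's driver provides, and the frequency/duty values are made-up examples, not hardware recommendations):

```python
import time

def pulse_periods(frequency_hz, duty_cycle):
    """Return (high_s, low_s): how long the output stays high and low per period."""
    period = 1.0 / frequency_hz
    return period * duty_cycle, period * (1.0 - duty_cycle)

def drive_motor(write_pin, frequency_hz=50.0, duty_cycle=0.5, n_pulses=10):
    """Generate n_pulses timed pulses on a digital output line.

    write_pin(bool) is a stand-in for your DIO driver call.
    """
    high_s, low_s = pulse_periods(frequency_hz, duty_cycle)
    for _ in range(n_pulses):
        write_pin(True)   # line high
        time.sleep(high_s)
        write_pin(False)  # line low
        time.sleep(low_s)
```

The same structure maps onto a Labview while loop: two digital writes and two waits per iteration.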
-
QUOTE (Mark Yedinak @ Apr 29 2009, 03:50 PM) We were talking about a full Labview development environment. My comment was aimed at this: QUOTE The LV development environment is very useful in debugging on the fly if you have live code. NI even supports it with a licensing option. Since the OP said he deploys the vi's and anyone can change them (they'd have to have Labview on there for that). That is more than $500.
-
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (shoneill @ Apr 29 2009, 02:03 PM) Ah. That's not it then. I'm looking for a way to link events from dynamically loaded vi's, so that the vi can basically "hook" the existing event mechanism. -
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (shoneill @ Apr 29 2009, 08:12 AM) I think that's what I'm missing for me to use events effectively from within an encapsulated architecture. What's the workaround? -
QUOTE (manojba @ Apr 29 2009, 08:00 AM) Hmm. Got me thinking. This would be a good one for a Coding Challenge: who could write the best URL Label XControl? It'd be nice to be able to just plonk one on the FP.
-
QUOTE (jzoller @ Apr 13 2009, 09:40 PM) I'm thinking more about the cost of deployment. Exes and dll's don't require licensing. I'd find it impossible to justify the customer paying thousands for a full package which he/she never uses, just so that they can use my software.
-
Plugin architecture as exe in 7.1.
ShaunR replied to Black Pearl's topic in Application Design & Architecture
QUOTE (candidus @ Apr 29 2009, 09:02 AM) Much better solution. -
Plugin architecture as exe in 7.1.
ShaunR replied to Black Pearl's topic in Application Design & Architecture
The "Specify path on diagram" option is the way forward, but I don't think that exists in 7.1 (it's a long time since I used it). So you would have to use the old method of creating an intermediary DLL which you can call from the "Call Library Function Node" to pass data to and from the other DLLs (2D array in and out always worked best for me). Not elegant, but it works. When you create the plugins, if you make their inputs/outputs a 2D array then your intermediary dll only needs 2 inputs (the dll name and the 2D array) and one output (2D array). -
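To make the 2D-array convention concrete, here is a language-neutral sketch (Python for brevity; the function names are mine, not Labview's). The intermediary DLL sees each plugin's data as a flat buffer plus dimensions, which is why a fixed 2D-array-in/2D-array-out signature keeps the interface down to two inputs and one output:

```python
def flatten_2d(data):
    """Flatten a rectangular 2D list into (rows, cols, flat list) --
    the shape an intermediary DLL call would typically expect."""
    rows = len(data)
    cols = len(data[0]) if rows else 0
    flat = [x for row in data for x in row]
    return rows, cols, flat

def unflatten_2d(rows, cols, flat):
    """Rebuild the 2D array from the DLL's flat output buffer."""
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]
```

Every plugin agreeing on this one shape is what makes the dispatcher's job trivial: it never needs to know what the numbers mean.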
Reading Cell Phone picture messages (MMS)
ShaunR replied to jbrohan's topic in Remote Control, Monitoring and the Internet
QUOTE (jbrohan @ Apr 27 2009, 02:30 PM) If I remember correctly, you can hook IE using ActiveX to get click info. Might solve your positioning problem. I'll see what I can dig up if you're interested. -
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (PaulG. @ Apr 27 2009, 05:44 PM) You can always ignore us (like 90% of the lurkers). Or you could read it during work time when you're paid for it...lol. QUOTE (jdunham @ Apr 27 2009, 04:09 PM) Queues are cool in that they can even out the load if your data is bursty, but queue or no queue, if you're not processing fast enough, you have a problem that queues couldn't fix. Unfortunately my data isn't "bursty". It is multiple continuous streams with control on top. No problems either. QUOTE (jdunham @ Apr 27 2009, 04:09 PM) I would never send data through a queue that wasn't meant for the queue listener. I would just use separate queues to send data to separate loops that were 100% devoted to handling that data. If I were to adopt the same strategy in the current project I would end up with 87 queues. QUOTE (jdunham @ Apr 27 2009, 04:09 PM) OK, I give up trying to convince you. We obviously use queues in very different ways. I find them useful, even indispensable. My team's code doesn't suffer from any of those things you say queues suffer from, and we're doing things that you say are nearly impossible in LabVIEW, and they were pretty easy to code up and have great performance and scalability. Ummm. Are you referring to spawning multiple (200+) threads like in the BT client? I don't think that is possible. Especially if you can only have cores x 4 threads. QUOTE (jdunham @ Apr 27 2009, 04:09 PM) It's fine if you don't want to use more queues, but I don't think you'll manage to dissuade the rest of us. So I guess you're in the Q or die camp (but I suspected that quite a few posts ago) -
-
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (jdunham @ Apr 27 2009, 09:41 AM) Quite the converse, in fact. You would use a queue to "queue" (or buffer) the messages/data/etc (because you can't process them fast enough) in the hope that they may ease up later so you can get through the backlog. QUOTE (jdunham @ Apr 27 2009, 09:41 AM) Why do you need wait functions if there is data available? Because you may be busy doing something else that is time critical and higher priority. QUOTE (jdunham @ Apr 27 2009, 09:41 AM) Why is the receiver inspecting the data? So many whys. I've already explained that one. QUOTE (jdunham @ Apr 27 2009, 09:41 AM) Why would you let a queue consumer get data that wasn't meant for it? That's what I call a bug. How can you not? Since there is data in the queue, how does it know that it's not for it? That's what I call the nature of queues. QUOTE (jdunham @ Apr 27 2009, 09:41 AM) One way I like to think about it is that a queue is a 'dataflow wormhole' between parallel loops. Since you like dataflow as much as I do, maybe that makes it clearer why a queue is so useful. Don't get all Trekky on me now. And I know why a queue is useful, in the same way I know that a car is useful unless you're in a swamp. QUOTE (jdunham @ Apr 27 2009, 09:41 AM) Well, do a Google search on "Global variables are bad". I got 400,000 hits. They are considered dangerous in every language. I have been using more unnamed queues and notifiers lately so that they are not available globally. Do a search on "kill Bush" and you get 12,700,000. Your point is? This is what NI have to say on it: http://zone.ni.com/devzone/cda/tut/p/id/5317 QUOTE (jdunham @ Apr 27 2009, 09:41 AM) About my version of the sample application, I tried to add some asynchronous behavior to show the power of the queues. Did you read all the diagram comments? I thought that would have answered your questions. It would be easy to modify it to match the original more closely. 
I took all the globals out because I think they are a defect, not a feature. For asynchronous behaviour they are already in while loops. If data is lost, then I'm not really sure what you were trying to demonstrate with the example. -
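To make the point being argued concrete: with one queue per consumer, the producer does the routing, so no listener ever inspects a message that isn't for it — at the cost of one queue per listener (hence the 87). A minimal Python sketch (the consumer names are invented for illustration):

```python
import queue

# One queue per consumer: the producer routes by tag, so no
# consumer ever has to inspect (and re-handle) a foreign message.
consumers = {"log": queue.Queue(), "display": queue.Queue()}

def publish(tag, data):
    """Producer-side routing: look up the right queue and enqueue."""
    consumers[tag].put(data)

publish("log", "motor started")
publish("display", [1.0, 2.0])
publish("log", "motor stopped")
```

The alternative (one shared queue carrying tagged messages) moves that dispatch into every consumer loop, which is exactly the "is this for me?" inspection being debated above.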
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (jdunham @ Apr 26 2009, 06:29 PM) As lame as my weekends, eh? We may have converged on a spin-off topic. But you are still quite a way from convincing me that queues are "the magic bullet" and all applications should use this architecture. QUOTE (jdunham @ Apr 26 2009, 06:29 PM) Same thing would work better with a notifier. No polling, no latency. Oooooh. No. Because then the UI would have to wait on the notifier(s). Also, if you go down this path you end up with notifiers everywhere like a plague of mosquitos. A pain in the proverbial to debug. All I have to do to debug is watch the global. There is no need for sequencing, so a notifier is not useful. QUOTE (jdunham @ Apr 26 2009, 06:29 PM) Still confused why you started this thread! Queues are elegant and extremely useful. That's what the fuss is about. Where I think we agree is that if the system is NOT asynchronous and doesn't need event handling, then larding it up with an event handler or a state machine is not good. I started this thread after a comment in another thread that suggested a fairly simple implementation would be better as a queue. I asked what everyone's fascination was with queues on this forum. As a newbie to this site, as I looked at more and more posts it seemed everyone was queue crazy and the remit was...let's start with a queue and make it all fit. So rather than hijack the thread, I started a new one to find out why people thought (and I think even you said this too) queues are "the magic bullet" to application design. All applications have some degree of asynchronicity; that doesn't mean you should design around a queue. It sounds almost as if you are agreeing with me in respect to queues adding complexity. QUOTE (jdunham @ Apr 26 2009, 06:29 PM) Amen, brother. Dataflow is your friend. Easy to read, simple coding. But when you have parallel processes, dataflow doesn't apply. If they need to share data, you can use globals, which have lots of problems, or queues, which don't. 
The only problem with globals arises when you require synchronisation. But that isn't really a problem; it is (a very useful) feature of globals. The fact that it creates copies of data is a drawback (if you were running on a ZX Spectrum...lol). But a queue isn't a replacement for globals, as you point out in your example (you removed the stop button from the user). And even in the first example it was used. QUOTE (jdunham @ Apr 26 2009, 06:29 PM) Well my app has a networked sensor array with dozens of concurrent connections, with data going in both directions. Each TCP or serial connection is handled with a lightweight process (diagram almost fits on one screen), which is cloned and spawned as a dynamic call. Each of these identical handlers feed their bytes to a central queue for parsing, analysis, notification, and logging. The comm handlers register their own queues so the central code knows how to send individual responses and control messages. Sometimes the comms appear and disappear at random (some radios are crappier than others), but LabVIEW handles it smoothly. In short, it works great, and queues make it all happen. The peg's looking kind of round from where I sit. I did say that distributed IO (which is I think what you are describing) is one scenario that would warrant a queue. However, yours is a large app as I seem to remember (how much of that is event management and interprocess comms?), and certainly the BT client wasn't a large app, because Delphi is event driven and therefore very suited to the task. Horses for courses. QUOTE (jdunham @ Apr 26 2009, 06:29 PM) I generally run these handlers with infinite timeout, so there is no timeout case (and no wait function). LabVIEW's internal execution scheduler may have to poll the queue somewhere deep in the bowels of LV, but that's not exposed to me. 
We have dozens of queues waiting in parallel with very low overhead and very low latency when an event is fired, so I suspect it's not polling at all, but is really sleeping. I don't know what you mean about a difference between queues and notifiers. As far as waiting and timing out, they act exactly the same. Queues do not have a wait function. Yes, you can get similar behaviour IF you only ever have 1 message on the queue at any one time, but if your producer(s) is faster than your consumer this is rarely the case. Therefore having an indefinite timeout makes no difference, as your sub process still needs to inspect the message to see if it was meant for it. So it is not sleeping, it is inspecting. You would then need a case for when the message wasn't for it, and you can't just let it run through a tight loop, so a wait of so many milliseconds would be needed. So it's not sleeping every time there is a message on the queue, regardless of who it is for. QUOTE (jdunham @ Apr 26 2009, 06:29 PM) OK that's why I'm still on this thread. I rewrote your version using queues. No globals, no polling (except to simulate acquisition), no latency, asynchronous (event-driven), no possibility of race conditions or data loss, and no crowbar needed -- all thanks to queues. Queues of data, not of messages. And yes, it runs as fast or as slow as you ask it to run. There is a little bit of jitter at 1 or 2 ms, but I think that is caused by using the Timeout case of the event structure, not by the queues and notifiers. Download File: post-1764-1240766707.zip Ummm. It doesn't do what it says on the tin. It uses no globals because you have removed the "Stop" from the user in SubVI1. The original example uses a global also. In the example I posted (ver 2), I set the acquisition time to 0ms and could achieve 3ms of acquisition (mainly due to the 1ms delay I have in my notifier VI). 
All of the VI's show 3ms, and all the vi's show the same loop iteration, thus proving they are all capturing the acquisition "event" with no loss (well, the acquisition VI might show one more, since it gets an extra go after you press stop on the acquisition vi). In yours, however, if I set it to 0ms the acquisition vi runs like the clappers (6229 for the length that I ran it) but the others show significantly less (2654). What is happening to the other 3575 samples of data? -
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (Aristos Queue @ Apr 26 2009, 08:50 PM) Shorter ones first...lol. Even if there IS data in the queue but not for that "thread"? And (just a point to fill my knowledge gap) Are you saying that 2 while loops (or 50 even) will run in their own separate threads? -
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (jdunham @ Apr 25 2009, 08:12 PM) What I actually meant was why do you use notifiers to receive updates when you are using a queue anyway for events. I've just read it back and see that I phrased it badly (the subject before being events); it was in response to the previous paragraph, which finished with QUOTE (jdunham @ Apr 25 2009, 08:12 PM) they use queues to communicate with the various processes they need to effect. Often they receive status updates via notifier. I tend (generally) to use one or the other, as in many cases (as you point out later) their features can be seen to be synonymous. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) I guess a better question is how do you handle asynchronous activity in your programs? Do you poll for every possible status change on every component, or do you set up event handlers? I use an appropriate strategy. Not really answering your question though, I know, but I'm not sure of what you have in mind. I tend to think through the partitioning and use encapsulation to distill interaction between sub components to a bare minimum, and make each asynchronous task as autonomous as possible. Generally it boils down to an error indication, start, stop and pause. But I will give you a very typical example of an everyday scenario that would be very complicated using events/queues/notifiers etc. but that Labview makes very easy and can be done in minutes. Let's assume the system is monolithic and only runs on one machine (a test/inspection machine for example), and say I have 32 digital inputs that indicate alarm conditions, of which only 8 are to be displayed on the user interface. 
I can (and do) arrange with the electrical guys that those 8 are all on the same port (I give them a spreadsheet with the pin numbers of the DIO device and define which sensors/outputs will be attached to what). Now I can have an asynchronously running digital input VI whose only job all day is to read the status of those digital inputs and write them to a global as a 32-bit number. Then I can easily mask them off in the user interface just by polling that global and (with the number to boolean function) show the user which ones they are in a 1D array of booleans, or I can use the number as an index into an array of strings to show an error string. If you have multiple banks (like 128 in banks of 8) you can partition those banks for each hardware subsystem. Then your on-screen alarms become a 2D array of booleans, each row (or column) associated with each piece of hardware. Any other VI can also read that global if and when it wants to (none of that critical section rubbish with Labview). So if some other part of the system needs to know about it, it can. Quick, simple and, as I said, takes a few minutes. Now. This test machine needs to feed a part from a bowl feeder into the test/inspection section. Again. I create a "Bowl Feeder" vi that requires a Start (give me a part), Stop (shutdown), Pause (wait a mo 'cos something's wrong) and error (or no error if everything is ok). I don't need a response that a part has arrived because that will be given by a sensor. So I eventually end up with all these autonomous, asynchronous subsystems (each only requiring a start, stop, and pause). The start is usually from the main sequencing engine (single notifier if everything happens in parallel, or one for each if they are staggered). 
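The masking trick described above reads the same in any language. A Python sketch for illustration (the function name is mine; in Labview this is an AND followed by the Number To Boolean Array primitive):

```python
def alarms_to_booleans(port_value, mask=0xFF):
    """Mask a 32-bit digital-input snapshot down to the 8 UI alarm
    lines and return them as a list of booleans, one per bit."""
    masked = port_value & mask          # keep only the UI-facing lines
    return [bool(masked >> bit & 1) for bit in range(8)]
```

The same integer can also serve as an index into an array of error strings, and other subsystems can read the raw value without caring which bits the UI shows.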
The stop and pause can be a global (similar to that used in my previously posted example), and the subsystem handles its own errors, pauses all the other subs and the sequence engine while it figures out what to do about it, and invokes the shutdown procedure if it can't do anything. Now. Distributed IO is a different kettle of fish! QUOTE (jdunham @ Apr 25 2009, 08:12 PM) Well I found it difficult to manage an event-driven user interface in the same loop as several acquisition/control processes. If some of those processes had to maintain an internal state, it got even harder. If I wanted to reuse those hardware items in other applications, it was often easier to just rewrite the code. With a queue handling the inputs for a given device, I can have a modular program for that device which maintains any necessary state and accepts asynchronous event messages. I have always had the UI in its own loop. I used to "break" the link with globals and dynamic loading (the example I posted would have used globals instead of notifiers and would have been quite messy); now we have other tools like events, notifiers, queues, semaphores and the like. We have the tools to make more elegant solutions. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) I have to admit I got the impression that you rarely use them, and that you don't use all the features of them (that I and others have mentioned in previous posts). You'll have to forgive me, but I just don't see how anyone could use them regularly and then ask why others find them indispensable. I do rarely use them (in the way that they are often used). Not because I don't like them, but because they don't warrant the use in a lot of the systems I design (take the example above). I could use them, but a couple of notifiers and a small global do the job without having to define a messaging scheme for interaction. 
If it were a distributed system or a supervisory system where a lot of information is being exchanged, then yes, it would warrant it, but not just for a start, stop and pause. I'm an advocate of choosing the right architecture for the specification rather than just throwing a queue at it, which it strikes me a lot of people do. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) I am certainly open to other opinions. What architecture do you use for your majority of large applications where queues were not the best solution? Do you handle user input and environmental input (alarms, triggers, other state changes) in the same loop of execution? Do you pass messages between with some other mechanism? Do you run a monster event loop, and run everything in the timeout case, and make sure that every subvi can finish in 50ms or so? I would love to hear more about other architectures you prefer. See above. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) Sounds plenty big. Do state changes on one hardware device have asynchronous effects (triggers) on other devices? How do you handle that? If the user presses the STOP ALL/RESET/CANCEL button(s), how is that handled? In the above example, in the case of an alarm dialogue, it would set the pause flag while it is on-screen (if that's what's required, or it might just log it to a file...depends). Crowbar all the subsystems and invoke the "graceful shutdown.vi" on a reset or stop. And un-set the pause flag for a cancel. Triggers would probably be hardware and catered for by the individual subsystem responsible for it. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) My current project has about 4000 VIs, and probably 2500 are running in the main app, with about 25 parallel static loops and 2 to 100 or more dynamically spawned loops, all of them containing at least one queue or notifier for message passing and synchronization. 
When this system grew to about 1500 VIs, we couldn't maintain the code any more until we refactored everything and used a lot more queues. I think that a much smaller app of only a few dozen VIs could still benefit from using queues if there are multiple asynchronous tasks. That depends on how much info they need to do their job. I prefer autonomous encapsulation of the task as it makes the code a lot more understandable and intuitive (and less documentation). QUOTE (jdunham @ Apr 25 2009, 08:12 PM) A queue-driven event handler is exactly the same, though if your handler is not fast enough, queue messages will stack up. In many of my loops this is not a big deal. In an interrupt I think the call frames nest on a stack in the same situation, and whether or not this can be tolerated is a design decision. I don't see this as much different, and certainly the dataflow nature of labview makes it harder to intervene in executing code at an arbitrary point. That limitation affects any kind of LabVIEW code. I think I see where the difference in views lies. I don't see dataflow as a limitation. The fact that it is a dataflow paradigm means that you don't have to worry about state information. The function (or vi) gets executed when all its inputs have data. It's implicit. In event driven languages (I also program in Delphi and C++, by the way) you have to have a lot of state information to make the application work in an ordered fashion. This is unnecessary in Labview, which is why it is fantastic for test/automation/inspection etc. but sucks as a webserver (although NI would like to think it's great). I recently wrote a bit-torrent application (lots of concurrent connections appearing and disappearing at random, asynchronous pipes etc). Wrote that in Delphi because it would be a nightmare in Labview. Square pegs don't fit in round holes. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) Well I have a couple comments here. For one, I'm not a huge fan of over-using state machines. 
I always prefer dataflow. If I have to do four operations, I'd rather see four subvis in a row than four cases in a state machine loop. But other people really like them. The large app I mentioned above has very few state machines. Ditto. They hide functionality, can be difficult to conceptualise and break dataflow. That said, they do have the advantage that they compact code and enable selective branching. If used sparingly, a definite asset. But often abused. I think we're on the same page there. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) The second comment is that the example is just an illustration. When programming books talk about recursion, they almost always show the factorial operation. But no one ever uses a recursive program to calculate a factorial. It's just a common example because it's easy to get one's head around. So for you to take this example to task is like saying recursion is useless because you could just calculate factorials with a for loop and a shift register (which is the right way to do it). First off, I don't consider the former to be the "wrong way", just "another" way. There are many tools at our disposal and we should choose those that are appropriate. As I think I said in the notes, it was to demonstrate that the use of queues adds complexity and obfuscation. It took me about an hour to write that, debug, benchmark and comment it (with coffees). I expect the same cannot be said for the original (excluding the pictures and write-up, I mean). It may have been an illustration. But there are many out there that would do just such an app that way. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) Your example is a better solution to that problem, but we already established that the author wasn't trying to solve a problem so much as illustrate a scalable architecture. Better? No. They both fulfill the spec. 
I think one is easier to read and understand because it uses Labview's dataflow more, and the other is more complex and difficult to read because it breaks that dataflow and adds complexity to compensate. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) Your rework still uses a notifier which isn't any different (to me) than a queue. Sometimes you want multiple listeners (notifiers) and sometimes you want to guarantee messages won't be skipped if you fall behind (queue). When you don't care about either, then they are equivalent. If your entire thesis is that queues are not as useful as notifiers, then I think you're being silly. No. My thesis is that queues add complexity, are cryptic and (sometimes) require more VI's to be modified throughout the code. And that should be taken into account when designing a system, since other alternatives can yield a more intuitive and easier to read solution and have better encapsulation. QUOTE (jdunham @ Apr 25 2009, 08:12 PM) I made a VI, which isn't too much different than other stuff you've seen but I'll post it anyway. It's an architecture that is working well for me. I would still love to see more about what is working for you in your large applications. Many thanks. One observation. It doesn't "go to sleep"; it polls the status until it has something in the queue. So I'm guessing you have a wait of so many milliseconds in the timeout case. That is the big difference between queues and notifiers. If you add another notifier (wait) to my example before the acquisition, another after the AQ notify in SubVI 2 (notify), and a third before the while loop in the main (also a notify, just to kick off the acquisition in the first place), then you can run the system fully synchronised, flat out, with no loss of data (3ms). You can't do that with a queue. See attached: Download File:post-15232-1240703966.zip And if you read all that, you're a star....lol. -
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
I was perusing the internet this afternoon and came across this: Queued Statemachines It seemed a very complicated solution just to acquire and display a graph. And an excellent example of why you should not use Queues for everything. They obfuscate and complicate. So (for a bit of fun - I'm sad like that) I decided to rewrite it NOT using queues, but instead using more traditional methods. In fact, I thought I'd demonstrate notifiers at the same time. It can be improved, but the purpose was to replicate the queue example and I didn't want to spend more than an hour of my life doing it. The result I'll let you judge for yourself. However, it is a fraction of the size, fewer VI's, less complicated and extremely easy to understand, unlike the queue example. The example from the above website. Download File:post-15232-1240670379.zip Alternative. Download File:post-15232-1240670495.zip -
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (shoneill @ Apr 24 2009, 08:26 AM) Succinctly put! That is why queues are not events. You can even have multiple event structures linked to a single event (that's how I get round the local variable on Latch booleans). What (IMHO) would be a good enhancement to events is this: if you could define an event (say by right clicking on a control or indicator), plonk an event structure anywhere in your code (not just the same VI) and it appears in the list of events to link to. Then you could do things like: when a DAQ digital line toggles and sets an indicator on your low level acquisition vi, an indicator on the screen toggles, or an alarm dialogue appears, or it even kicks off entire processes. It would make notifiers and occurrences obsolete and make rendezvous and semaphores actually useful. It would save me weeks of coding. We would also have an event scheme to rival any of the other event driven languages, with the added bonus that we can also use dataflow. -
I'd like to be able to use "<" (less than) or ">" (greater than).
-
QUOTE (Roger Munford @ Apr 22 2009, 10:18 PM) Hmmmm. Not sure what you're asking here. First of all, a fixed point number is an integer that is scaled by a factor (12.3 can be expressed as the integer 123 scaled by 1/10). If your raw data is only 2 bytes, then your calibration data need only be 2 bytes IF you won't exceed the 2-byte range by adding it (65,535 max). If by adding the calibration you exceed this, you will require more (1 more byte will give you a ceiling of 16,777,215). However, you only need to store 1 calibration number per measurement interface, so if you have 8 analogue inputs, you only need 16 bytes to store the cal data for all inputs. Your results data will still be 2 or 3 bytes per measurement. This is where I'm confused. You say that if you "store the calibrated data", you have 4 bytes per measurement. But the raw data is 16 bits. So it could be that your cal data is 4 bytes and you are doing 4-byte (32 bit) arithmetic, or your cal data is 2 bytes but you are doing 4-byte arithmetic. Alternatively, are you saving the scale and offset for each result? In which case there is no need to. You can just store 1 cal and offset for each channel (2 bytes per channel, so for an 8 ch analogue input that would be 16 bytes) and your results would still be 2 bytes (with the caveat outlined earlier). Another scenario is that you are actually converting to a Single Precision number (4 bytes), which is a waste of time as your data is 16 bits.
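A sketch of the fixed-point arithmetic described above (Python for illustration; the names are mine): the stored value is always an integer, the scale factor is implicit for the whole channel, and adding a calibration value at the same scale is a plain integer add — you only widen the storage type if the sum can exceed the 2-byte ceiling.

```python
SCALE = 10  # one implicit scale factor per channel: stored 123 means 12.3

def to_fixed(value):
    """12.3 -> 123: the integer actually stored (2 bytes if it fits)."""
    return round(value * SCALE)

def from_fixed(stored):
    """123 -> 12.3: applied only at display time."""
    return stored / SCALE

def apply_cal(raw_fixed, cal_fixed):
    """Both operands share the same scale, so a plain integer add works.
    If the sum can exceed 65535, widen the storage to 3 or 4 bytes."""
    result = raw_fixed + cal_fixed
    assert result <= 0xFFFF, "widen the storage type"
    return result
```

So storage is 2 bytes per reading plus one small cal entry per channel, rather than 4 bytes per reading.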
-
QUOTE (normandinf @ Apr 24 2009, 04:06 AM) That is incredible. They don't show the first (worst) bit though....the ironing.
-
Q's - Whats all the fuss about
ShaunR replied to ShaunR's topic in Application Design & Architecture
QUOTE (jdunham @ Apr 23 2009, 03:42 PM) Giving up already? I believe the Event Structure to be the best innovation in labview since.....well...I can't remember when. Why don't they receive via a queue? QUOTE (jdunham @ Apr 23 2009, 03:42 PM) Occasionally I am asked to do maintenance on very old programs from before I started to use queues (some of them pre-date the introduction of queues), and not only do they work less naturally from the end user perspective, but it is a pain to maintain the code since everything is in one big hairy loop or else there are lots of crazy locals and globals causing race conditions. In addition those apps are not as big, because those problems get exponentially worse as the program size grows, so there is kind of a limit to how big the app can get before it is too much work. I don't see why they should work less naturally just because they don't use queues. QUOTE (jdunham @ Apr 23 2009, 03:42 PM) Every time I experience this, I think, "Wow, queues are really the magic bullet to program design that makes large application development feasible in LabVIEW". I'm sure there could be other ways to develop large applications, but queues are very flexible and robust, and have a terrific API (unlike the horrible File I/O library), and have great performance. Obviously you have a different opinion, but I encourage you to try a big application with queues, and I think you won't go back. It's always been feasible (and fairly straightforward). You seem to be under the impression I've never used queues. I have written several queue based applications where that technique was the most appropriate (2 of them with a team of 8 Labview programmers). But I have written far more where it was not. So I guess I have gone back, and forward, and back......etc. 
I'm not sure of your criteria for "big", but I tend to gauge the scale of a project on how much hardware I have to interface to, since that is probably 75% of the programming time on most of my applications. The current one involves 12 motors, 8 Marposs probes, 5 cameras, 192 digital IO's, 2 analogue inputs, 12 analogue outputs and 3 proprietary devices using MVB, CAN and RS485. This part of the project I would consider medium sized, since it is one of 3 machines (which are similar but not identical) and will be part of an automated production line. -
QUOTE (neBulus @ Apr 22 2009, 09:47 PM) And means that you can convert n-dimensional arrays
-
Wire an indicator to the Standard Output and you will see the result from your call to the DOS box. You don't get an error because the SystemExec successfully completed the CMD call. However, the DOS box may not have executed the copy command. If you look in the Windows help, they suggest making a batch file with the command options and calling that. I personally would use a "Call Library Function Node" to call a Windows API to achieve what you are trying to do (Kernel32.dll has a SetFileTime function). Although there may be an easier/more elegant method.
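For comparison, most text languages can set file times directly without shelling out at all. A hedged Python sketch (os.utime covers access and modification times on any platform; the kernel32 SetFileTime route is only needed if you must also change the creation time on Windows):

```python
import os

def set_file_mtime(path, epoch_seconds):
    """Set a file's access and modification times directly --
    no DOS box, no batch file needed."""
    os.utime(path, (epoch_seconds, epoch_seconds))
```

Calling an OS API like this also gives you a real error code if the operation fails, unlike a fire-and-forget CMD call.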