Louis Manfredi

Everything posted by Louis Manfredi

  1. Hi Alphaone: As Yen says, you should be able to clear the breakpoint by clicking on the outer edge of the frame with the breakpoint tool. And usually this works for me... But if I set the breakpoint, then turn off debugging in the "vi properties... execution" panel, the breakpoint stays set (or at least the red rectangle stays there) and cannot be cleared. Turning debugging back on in vi properties lets me turn off the breakpoint again. (Seems like a bug to me... the breakpoint ought to go away automatically when you turn off debugging, but perhaps there's a good reason for it to stay; I haven't looked closely at this.) Anyway, hope I've helped a little. Best Regards, Louis
  2. Hi Kalidas: I use a package called SnagIt. http://www.techsmith.com/ Fairly powerful, quite inexpensive, and I'm happy with it. I'm sure there are other products out there, and some might be better or cheaper, but for my casual use, SnagIt works pretty well. Best Regards, Louis
  3. Hi Skfink: Programs where everything affects everything are very scary :!: When I've written such programs, they rarely work the first time I try them, and tend to get worse the more I work on them :ninja: ... I usually end up chucking the whole thing, and carefully re-defining my objectives in a way that they can be broken down into somewhat independent logical chunks (sub-vis) that interact with each other in fairly straightforward ways. Don't be afraid to bundle logically related wires into clusters so they can be passed between vi's without a huge amount of clutter-- The processing cost of bundling and unbundling is almost nil, orders of magnitude less than the human cost of debugging a multi-screen program. Use globals if you must (preferably LV 2 style), even repeat the same calculation over again if passing the results of a calculation around makes for clutter on the diagram--- Do whatever you have to do to break the program up into modules. Good luck, but take it from an old codger: if the diagram is wider or taller than the screen, you'll need a lot of luck, and if it's wider and taller than the screen, you'll probably need a lot more than just luck :!: Best Regards, :beer: Louis
  4. Hi Badwolf: You're right, it doesn't have to be quite that complicated. One of many simpler ways to do this is shown below. In the example below, I use a global, which is considered bad form in general :thumbdown: but works easily for the case you've shown, where the two vi's are not directly linked by one being the sub-vi of another, or by being sub-vi's of a common calling vi. A somewhat better approach would be to use a "LabView 2 style global"-- see past discussion of same in other threads in this forum. (Even if you choose to use the global as shown in my example, read the LV 2 global thread to understand what risks are involved :!: ) Even better would be to have the vi's linked by one calling the other, or by both being called by a main vi, and then simply wiring the menubar reference out from the vi with the menu and into the vi that manipulates it :thumbup: but perhaps you can't do that because of other logical requirements of how your vi's need to operate. Hope I've been a little help, and best regards, Louis
  5. Hi SC: Check the discussion in the following thread: http://forums.lavausergroup.org/index.php?showtopic=1368&hl= Good luck & Best Regards, Louis
  6. A good question, I'm interested to hear what others think too. To date, my business model is pretty much that I'm a hired gun programmer. I charge the client a fair price for my time, and they've got the right to use whatever code I write for them however they please. Generally speaking what the client wants is an executable to work with some specific piece of one-off custom test equipment. But if they want the source code they can have it too. They might want it either so they can maintain it themselves, or in case I croak :ninja: or do something else equally unbusinesslike. I don't mind talking on the phone with a client to support them modifying or maintaining my code (or their own code, for that matter). Unless the call is related to making a program written on a fixed-price quote work according to its original specification, I don't mind billing them for the time, either. If the client thinks it's more efficient to modify a program themselves (for example if I'd have to ride a plane to get to the hardware to fix it myself) or if they simply want to learn LabView, I'm happy to help. I re-use utility routines not specifically linked to a particular client's application. I tell the client that I'll be using previously developed utility routines for their application, and that I might use code I develop on their dime for other projects in the future (excluding, of course, anything that might expose their own proprietary interests). So far, none of my clients seem to mind this arrangement. Most of my clients are pretty focused on getting one particular task done, and none of my projects have had much potential for going into shrink-wrap. To date, my utility routines are pretty simple-- make a filename that encodes date and time in a way that data files automatically sort in chronological order in a directory listing, or write a sub-vi's location and size to an ini file so that when closed and re-opened the sub-vi shows up in the same place on the screen-- that kind of stuff. Routine stuff with equivalents probably available in a bunch of different OpenG libraries, stuff I'd share freely, even with a non-client. Lately, however, I've developed some code that's pretty powerful and general, might be easily adapted for a variety of different applications, took a while to write, and might even be worth marketing as a toolkit library for others-- For that code, I suppose I'll lock the block diagram and get involved with all those messy source-code-in-escrow or licensing issues, so I'm curious what everyone else in the LabView community does & where the pitfalls are. Best, Louis
  7. Hi B. Same thing happens to me, LV 7.1.1, Windows XP SP2 Dell Latitude M60. Best Luck, Louis
  8. Seems like Flash memory is tricky in general... A client running a non-LabView RTOS in a custom hardware system had many units sold, in the field & running fine for some time. Suddenly new units started hanging. After much pain & troubleshooting, it eventually turned out that the flash card maker had made a minor change to this supposedly fully interchangeable commodity card normally used in cameras. The conditions under which the flash card sent interrupts to copy from buffer to non-volatile memory were subtly modified. So a timing error caused an entry in the error log file (stored in flash), which caused a flash interrupt, which caused a new timing error, which... :headbang: Going to another flash manufacturer made the problem go away. (for a while at least...) Kind of neat gadgets, these flash cards, but like anything else, gotta be cautious... Louis
  9. Wasn't there a time long, long ago when LabView case structures could only have two cases, TRUE and FALSE, or am I remembering an early version of some other language? (I know I once had no choice but to write a program which nested like that, but I can't remember if it was Version 2.x LabView or some early text language.) (Edit: Of course in this example, and for all I know for the dimly remembered example in my own past, one might be just as well off to index an array as to build the nested case structure.) Louis
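     A rough sketch, in Python rather than LabView, of the edit note above: with two booleans you can either nest two-way branches or index into an array. The names and values here are made up purely for illustration.

         def pick_nested(a: bool, b: bool) -> str:
             # nested TRUE/FALSE branches, like nested two-case structures
             if a:
                 return "a and b" if b else "a only"
             return "b only" if b else "neither"

         def pick_indexed(a: bool, b: bool) -> str:
             # index an array instead of nesting the cases
             table = ["neither", "b only", "a only", "a and b"]
             return table[2 * a + b]

         for a in (False, True):
             for b in (False, True):
                 assert pick_nested(a, b) == pick_indexed(a, b)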
  10. Hi Nick: Like Barrie I'm an old fart ... And like him, I started with a microprocessor where the whole program could fit in 4 K (in my case, less than that, perhaps just a couple of yards of paper punch tape.) There was no question of compacting the data as much as possible-- you conditioned the signal with analog electronics to use most of the range of the 8 bit a/d converter, and stored each sample as an 8 bit word. Either that, or you ended up with less than 8 bits resolution. (In my case, the binary data was transferred to paper punch tape, and had to be carried to a mainframe in order to do an FFT...) I agree completely with what Barrie said-- I might add one note from past experiences. I used to have programs which stored data as compact binary. I had conversion utilities which converted the data to printable ASCII CSV files. This expanded the file size by about a factor of 10. Funny thing is, if you subsequently put that CSV file through PKZIP, it ended up about the same size as the original packed binary. The Zip routines are pretty good-- if a data file has 10% information and 90% fluffy formatting, Zip will pack it down pretty near to a factor of 10. So when computers got fast enough to do the binary-to-printable-ASCII conversion on the fly for my applications, I started doing my initial stream-to-disk storage as printable ASCII... which I would subsequently PKZIP for storage. Convenient to send to others too-- no need to send them the clunky conversion program and teach them to use it-- everyone has Zip or something like it. And not too long after that, as storage costs continued to decrease, I realized that the time I was spending zipping data, and unzipping the data to search through it or use it-- fast as that now is-- cost more than the storage. Today I always store my data as printable ASCII. Perhaps backup and archive utilities pack it; I don't mind if they do, as long as they don't waste my time doing it. I might zip it myself if I'm attaching it to an email, but other than that, I leave it in readable form. Concerning your question about searching and indexing the data-- I haven't a lot of experience with database searching, but it seems to me that writing the index file is well worthwhile, given the size of your data set. On the other hand, if the most common search is to find the data file associated with a particular coil, that implies that each coil has a unique serial number or a date/time code. Why not use the S/N or date code for the file name? Then all you need to do is search for the filename in Windows. Like Barrie, I've ranted a little, perhaps even rambled, but I hope my rambling has helped a little. Best Regards, Louis
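     A rough sketch, in Python rather than LabView, of the size comparison described above: the same samples written as packed binary and as printable ASCII, with the ASCII file then zipped. The filenames, sample count, and random data are arbitrary assumptions, and the actual ratios will depend on the data and formatting.

         import os, random, struct, zipfile

         samples = [random.randint(-32768, 32767) for _ in range(100_000)]

         with open("data.bin", "wb") as f:              # packed binary, 2 bytes per sample
             f.write(struct.pack(f"<{len(samples)}h", *samples))

         with open("data.csv", "w") as f:               # printable ASCII, several times larger
             f.write("\n".join(str(s) for s in samples))

         with zipfile.ZipFile("data.zip", "w", zipfile.ZIP_DEFLATED) as z:
             z.write("data.csv")                        # zip squeezes the formatting fluff back out

         for name in ("data.bin", "data.csv", "data.zip"):
             print(name, os.path.getsize(name), "bytes")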
  11. Hi all: Perhaps I'm missing something obvious .... but how come when I replace a two-input OR gate with the compound arithmetic operator, it goes to the Addition mode rather than the OR mode :question: I often make this replacement, and I think in the ten years I've been doing LabView, it has NEVER been my intent to change the OR to addition-- it's always been because I need another input to the OR gate. With all the advancements in LabView over the years, cursors that magically pick the right tool according to context, polymorphic VIs that change to what is needed based on what you hook to their inputs, and all that, you'd think an OR gate would get replaced with an OR-configured arithmetic operator. :ninja: ...or am I embarrassing myself by playing with this language for ten years without learning an obvious trick :question: Best Regards, Louis
  12. Hi Again: Pretty sure that in any particular version you can use Save As... to save a vi in a format compatible with earlier versions. In other words, from 7.1 you can use Save As... to save a vi in the format for version 6.x. But you have to do it from there... When version 6.x was written, no one knew for sure what was going to be in version 7.x, so version 6 can't possibly have a clue as to what's different about a version 7 vi, and so it can't safely open them. Not to sound like a broken record, but your best bet would be to get all systems upgraded to the latest version, if there is any way you can do that. Otherwise you will spend a LOT of time trying to keep things under control with multiple versions of the language. :headbang: Programming is hard enough as it is... Best Luck, Louis
  13. Hi Pandaman: I might be wrong, but I think the automatic radio button thing didn't show up until version 7.0. I think that even in version 6.1 you might be able to pass references to the controls to a sub-vi, so that most of the clutter of the radio button logic ends up in the sub-vi. Let me know if you need an example; I might be able to find one in an old program, given some time to look for it. Perhaps someone else can put their hands on an example without searching... Better still, use this as an excuse to convince your management that they should cough up the bucks to upgrade you to version 7.1. Good Luck, Louis
  14. Hi Mark: Haven't had a chance to download and try the code yet, but it looks like a really good solution to a problem that really needed fixing. Thanks for posting it & Best Regards, Louis
  15. Hi Mqamar: One more thought-- Perhaps the device is sending more bytes than you think it is-- If you are expecting a message back, remember that the device might send you just the actual printable characters you expect, or it might send those followed by <CR><LF>; displaying the escape (non-printable) codes should help with this too. Best luck, Lou
  16. Hi Mqamar: May not be able to answer all your questions, but here is some information that may help: Definitely the baud rate of both devices has to be the same. Baud rate is the speed at which the communications between the two gadgets occurs (in bits per second). If they are not set to the same rate, they can't talk. (This isn't special to LabView-- just a basic truth of serial communications.) Sometimes devices will "autonegotiate" a baud rate. If you hook two systems up and have one try to talk to the other, it will keep trying different baud rates till it finds one that works. Sometimes that autonegotiation of baud rate can result in a bunch of junk collecting in the communications channel-- which might be important, see below. Sending a clear command to the power meter might clear the power meter, but there might be bytes sitting in the hardware input buffer of your serial port, or in a software buffer in Windows, and a command to the meter won't fix that (it might even make it worse, if the power meter acknowledges the clear command with a message.) What I often do, if serial communications doesn't work, is flush the input buffer-- Inside a loop, read all the inbound bytes (throw them away), wait a bit-- 10 ms?-- and then loop; keep doing this until you read zero bytes-- Then you know the inbound buffer is empty. After that you can send out your command and wait for a response, knowing for sure that there isn't any junk sitting ahead of the response in your inbound buffer. Note that autonegotiation of baud rate can result in junk collecting in the serial port buffer, which might be the cause of the buffer overruns you are getting. Since that junk may have been sent at the wrong baud rate, it might be unprintable characters. Perhaps setting the number of bytes to read to zero is a code for reading all the bytes available, and you are getting a bunch of unprintable stuff (which you don't see) followed by the response to your *TST? command, which you do see. If you are displaying the message as a string on your front panel, try turning on escape code display to see all the unprintable stuff. (Right-click on the front panel indicator-- the choice should be on the list.) Not sure if this is what is happening for you, but hope I've at least given you some ideas to try. I doubt that the transmit and receive buffer size in device mangler is the central issue, but I would set both as small as Windows lets you, and leave it that way until you get things working. Good luck, Louis
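     A rough sketch of the flush-before-command idea above, written with Python and the pyserial package rather than LabView; the port name, baud rate, and read sizes are placeholder assumptions, and *TST? is the command from the post.

         import time
         import serial   # pyserial

         port = serial.Serial("COM1", baudrate=9600, timeout=0.1)

         # Flush: read (and discard) whatever is waiting, wait ~10 ms, and repeat
         # until a read returns nothing, so no stale junk sits ahead of the reply.
         while True:
             junk = port.read(256)
             if not junk:
                 break
             time.sleep(0.01)

         port.write(b"*TST?\r\n")       # now send the command
         response = port.read(256)      # and read the reply, <CR><LF> and all
         print(repr(response))          # repr() shows any non-printable bytes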
  17. I agree totally with i2dx on this. It is bad enough to overlap simple native LabView controls, but overlapping ActiveX things seems burdensome to the system at best, and likely to be buggy at worst. Lou
  18. Hi Sarah: You can have multiple plot items within a single 3D graph object-- perhaps rather than overlaying graphs with a single plot item in each, overlay the items within a single graph? The attached image shows a routine I've used to create the references to the multiple items in a single plot, in a similar but different application of my own. It should sort of point you in the direction I'm suggesting. Hope this helps; I can give more details if you need them and if this seems like a good approach to you. Good luck & best Regards, Lou
  19. Hi Folks: I'm doing a little project with DAQmx read analog 1D Waveform, N channels, N Samples. The system under test is a fairly slow process in a noisy environment, so I'm sampling the data at (for example) 600 Hz, reading the data 300 points at a time, and block averaging those 300 points into a 1/2 second averaged frame. Calling the DAQmx read twice a second seems fine, but if I try to call it four times a second, things begin to get behind and the buffer overruns. I still have the problem at a lower sample rate: at 300 Hz sampling, reading the mx buffer 150 points twice a second works fine, but reading 75 points four times a second gets behind. I could sample the data in larger chunks and break them into block averages separately, but I'm also polling a digital scale through a serial port, and I wanted to keep the program simple by using the DAQmx read to pace the overall sampling loop, and thus keep the scale more or less synchronized with the a/d data. I don't recall this kind of overhead issue with the old pre-mx AI read functions. (Which I'd rather not use, because I'm relying on the mx for signal scaling, gain setting, and all that other stuff it does so nicely.) Any suggestions for things to try to make the DAQmx read faster? Thanks, & Best Regards, Louis
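     A rough numpy sketch (not DAQmx) of the block averaging described above: each read of 300 samples per channel collapses to one averaged point, so 600 Hz sampling gives a 2 Hz averaged output. The channel count and the simulated data are assumptions.

         import numpy as np

         SAMPLE_RATE = 600     # Hz
         BLOCK = 300           # samples per read -> one averaged frame per 0.5 s
         N_CHANNELS = 4        # assumed channel count

         def block_average(block: np.ndarray) -> np.ndarray:
             """block has shape (N_CHANNELS, BLOCK); return one averaged value per channel."""
             return block.mean(axis=1)

         # Simulated stand-in for one DAQmx read of N channels x BLOCK samples:
         raw = np.random.normal(size=(N_CHANNELS, BLOCK))
         print(block_average(raw))     # one 0.5 s averaged frame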
  20. Hi SolarBear: The key point is that a sub-vi doesn't, through its output terminals, return anything until it completes execution, in the same way that the FOR loop in the example Mballa posted doesn't provide any data to the outside until after it completes all 20 iterations. There are a bunch of ways around this problem. The simplest, perhaps, is to have your sub-vi that collects data collect only one sample, and add it to the display one sample at a time (like data is put into Mballa's "Wave Chart During" indicator.) That is not as efficient, and timing might not be as accurate, as collecting data many samples at a time, but if your sample rate is slow, it is the most straightforward approach. If life needs to be more complicated, you might collect a few samples each time you call the sub-vi and add them to the display, or post the data to a Queue and read it in a separate display loop as it becomes available, or a variety of other more complicated tricks. NI has nice example vi's for most of their hardware; your best approach might be to check those out, and start with one that you can modify to fit your needs. :thumbup: Good luck & best Regards, Louis
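     A rough sketch of the "post the data to a Queue and read it in a separate display loop" idea above, using Python threads and a queue rather than LabView loops; the sample values, loop count, and timing are made up.

         import queue, random, threading, time

         data_q: queue.Queue = queue.Queue()

         def acquire():                          # stands in for the acquisition loop
             for _ in range(20):
                 data_q.put(random.random())     # pretend this is one new sample
                 time.sleep(0.05)
             data_q.put(None)                    # sentinel: acquisition finished

         def display():                          # stands in for the display loop
             while True:
                 sample = data_q.get()
                 if sample is None:
                     break
                 print(f"new point: {sample:.3f}")   # update the "chart" as data arrives

         threading.Thread(target=acquire).start()
         display()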
  21. Hi Colin: It is easy enough to open a file under program control, without having the dialog pop up for the user to select a filename. Replace the "File Open" dialog in the block diagram with something like the attached snippet. The attached vi, used in the above snippet, creates a name string that is unique for each file and encodes the date and time, so the files sort in chronological order when you display the directory sorted by name. But you can use any scheme you want to generate a unique name string for each file as it is created. Download File:post-1144-1114782727.vi Hope this helps a little. Best Regards, Louis
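     A rough sketch, in Python, of a filename scheme like the one described above: encode the date and time so an alphabetical directory listing is also a chronological one. The prefix and extension are arbitrary assumptions.

         from datetime import datetime

         def timestamped_name(prefix: str = "data", ext: str = ".csv") -> str:
             # YYYYMMDD_HHMMSS sorts alphabetically in the same order as chronologically
             return f"{prefix}_{datetime.now().strftime('%Y%m%d_%H%M%S')}{ext}"

         print(timestamped_name())    # e.g. data_20050429_142233.csv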
  22. Hi Azazel: I'm not sure that I can provide a direct answer to your question-- But I count something more than fifteen math functions (addition, multiplication, random number generation & arctan) that get executed 4,096,000 different times each. So (for simplicity) assuming each function corresponds to at least one floating point operation, it adds up to something more than 50 million floating point operations, which executed in around 9/10 of a second on my machine (1.39 GHz)... so I sort of suspect that the vi is running as efficiently as one could expect, and there isn't a memory management technique that would make it much better.... Not sure if you can reformulate the problem to get rid of the Arctans, but in the old days each one would have taken many, many clock cycles & I'm not sure of the degree to which modern processors compress that, so if you can figure out how to rid yourself of them, things might get better.... But this is just a guess on my part & I'll be curious to hear what others think. Good luck & best Regards, Louis
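     A quick back-of-envelope restatement of the estimate above, using only the numbers quoted in the post:

         ops_per_pass = 15        # at least fifteen math functions per data point
         points = 4_096_000       # each function executed this many times
         runtime_s = 0.9          # observed run time on a 1.39 GHz machine

         total_ops = ops_per_pass * points            # roughly 61 million operations
         print(f"{total_ops / 1e6:.0f} M ops in {runtime_s} s "
               f"-> {total_ops / runtime_s / 1e6:.0f} M ops/s")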
  23. Hi SC: LabView certification has helped me a good deal-- Even though I had 10 years experience with LabView before I took the certification exam, it was all in one industry, and I'd spent a lot of time doing non-programming things, so the certification gave me credibility that brought some consulting work my way-- certainly more than enough work to justify the cost and effort of getting certified. If you are ready to take the exam (be sure you go through the whole list of things to know posted on the NI site), I'd say go for it. If not, it is certainly worth studying for-- In fact, I learned about a number of features I'd never used and didn't expect to ever need, like DataSocket servers and TCP/IP servers, and, surprise :!: the very next potential client needed someone who knew that stuff & I was able to say "Sure, I have experience with that." Some quick review and that client has become one of my steadies. I wouldn't hold up completing your master's or the job search to get the certification, however-- It's a time-consuming & perhaps costly thing to prepare for (even with my experience, I ended up taking all but the basic NI course to prepare for the test-- though, more painfully, I'm sure I could have studied it all on my own.) If you are bright & young & have a nice shiny new MS degree, someone will probably hire you anyway, and may well pay for the cost of the exam-- if you're really lucky they might even let you study on the clock! Keep in mind that programmers are fairly common in the job market-- most of my work comes from people who know that I know about windmills, or strain gages, or test planning-- only a little of it comes to me strictly because I'm a certified LabView developer. If you search Monster for LabView, you'll note a lot of work in specific industries-- for example, right now in biomed-- automated lab testing stuff & the like-- if that interests you, some minor courses broadening your capabilities beyond being a simple grunt programmer might be of more value than the LV certification. In any case, good luck & I hope my rather rambling answer has been at least a little help. Best Regards, Louis
  24. Hi Robert: I'm always willing to use as many loops as I need to get the job done, but very reluctant to use more than I need-- They've all got to be started and stopped when the program ends, and an error in one has to be handled gracefully in all, so there's additional work in parallel loops which should be avoided unless there is a reason. So, no arbitrary number is too many... often I get away with one, especially with a state-machine architecture. But right now, I'm working with a fairly straightforward program that uses three, although it could have been written with two. (The client added a file storage requirement to a previously written program-- easier to add a storage loop than to integrate storage with the other loops, though if I had been starting from scratch I would probably have kept it to two loops.) Often, including in the above program, I have optional display windows, and these will each add a loop of their own to the number of loops in the main vi. In your case, however, I suspect many of the loops are unnecessary: It seems to me that the time loop and the mouse scroll loop could be integrated into the GUI loop, and the "main" loop could be split between the GUI loop and the test loop as appropriate-- so I suspect you only need two loops. Certainly, for example, if the GUI loop updates every 20-50 ms, I wouldn't expect it to be worth separating the time update from the GUI simply to keep the time from being updated unnecessarily often-- simply too much programming effort just to avoid the unnecessary update frequency. But perhaps there are details in your program not shown in the outline that justify the added loops. Just my H.O., Best regards, Louis