Everything posted by MoldySpaghetti

  1. Look at the numeric and graphical plot palettes. If you're in a hurry, this might not go well. I wrote a program to display a polar plot recently, but there was no good way to unify the method, so I didn't end up with anything reusable. I did look at the built-in polar plots; they didn't meet my requirements at all, in fact what I needed was something entirely different. Usually, when an abstract problem is well known, LV will get it in the next release. If it isn't in vi.lib, chances are you'll need to code your own solution, i.e., there's no toolkit to solve it easily. (For a rough idea of the kind of plumbing involved, see the sketch below.)
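     Just for comparison, here's a minimal polar-plot sketch in a text language (Python/matplotlib). The data is made up and this is not the LV approach, just the same general idea:

        # Minimal polar-plot sketch (Python/matplotlib); the data here is
        # made up, purely for illustration.
        import numpy as np
        import matplotlib.pyplot as plt

        theta = np.linspace(0, 2 * np.pi, 360)   # angle values in radians
        r = 1 + 0.5 * np.sin(4 * theta)          # arbitrary radius pattern

        ax = plt.subplot(projection="polar")
        ax.plot(theta, r)
        plt.show()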
  2. Thanks for the replies, all. Rather than replying to every point, I'll recount what I've taken to heart as the biggest panaceas for my mental gout.

     Distractions: I greatly enjoyed reading the accounts of this. I don't even like going into the office if I can help it, because I hear other people talking on the phone, or talking about burritos, or playing miniature golf ... all of which are quite okay, but when I'm busting my head to grasp a larger concept, it kills me. I had taken to renting two rooms and using one as an office, developing from there, but that has its own problems.

     Hack & check, and not commenting: I think a good architecture is worth way more than commenting. Too many times I've seen ridiculous comments saying something like "This saves the data to a file" inside a case that is labeled Save Data, yet has huge coupling problems. I'm not suggesting you do such things; it's more a nod toward G in general. I've rarely found text comments to be helpful. I am guilty of making modules that "manage data and save them to a database", although I haven't done that for years. Oh, you monolithic VIs, you....

     Maximizing screens: Maybe this isn't as big a deal as I made it. Crelf, I don't have references; I'm more of a gonzo journalist, I just say things and try to support them, and beyond that it's unsupported. I took what you said to heart, though. I have felt better using maximized screens, and I think I'll stick with them for now.

     Save often: I have lost way more time to LV crashes than to doing something stupid, at least as long as I've had SCC. Maybe I should soften my stance on this as well.

     I guess I was expecting more vehement replies suggesting how idiotic some of my habits were. With this, I'm going to do something else for a bit, and possibly return and study some OOP via LV for a while.

     Daklu, you may be amused to know that I was utterly demoralized a few weeks ago when I saw a LV user group presentation that showed some concepts in LV OOP. He (Elijah?) showed how to solve going from point A to point B around obstacles: what's the shortest path? Sounds simple, right? Well, he showed some classes that were linked-list, graph, etc. My initial thought: Wow, he wrote all of that code to solve THAT? Really? That set of modules looks awfully well written to be done in a short time frame. Am I really that stupid or lazy that I don't do the same thing?? God, I am totally unworthy. What I learned later: all of the infrastructure he used was part of an ongoing project to create better LV classes. (Stripped of the class infrastructure, the core of that demo is a textbook graph search; see the sketch below.) -Ben
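     A minimal sketch of that shortest-path demo in Python (my reconstruction, not Elijah's code; the grid, obstacles, and coordinates are made up):

        # Breadth-first search over a grid: 0 = free cell, 1 = obstacle.
        # Returns the length of the shortest path, or None if unreachable.
        from collections import deque

        def shortest_path(grid, start, goal):
            rows, cols = len(grid), len(grid[0])
            queue = deque([(start, 0)])
            seen = {start}
            while queue:
                (r, c), dist = queue.popleft()
                if (r, c) == goal:
                    return dist
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and (nr, nc) not in seen):
                        seen.add((nr, nc))
                        queue.append(((nr, nc), dist + 1))
            return None

        # Example: a 3x3 grid with a wall in the middle column.
        grid = [[0, 1, 0],
                [0, 1, 0],
                [0, 0, 0]]
        print(shortest_path(grid, (0, 0), (0, 2)))  # -> 6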
  3. Hi Thomas, I don't know quite what you mean, and I'm guessing others won't either. Some of the smartest people I've ever met, clearly more intelligent than myself, didn't have a strong command of English, so don't let that deter you. I'm even imagining a strong Brazilian accent as I type this, describing 'mootexes'. Your best bet would be to post a complete example of what you're talking about. In fact, I've found that when I have a problem I can't solve, by the time I've finally produced an example I could post to the web, I've figured it out. That's really the best way to get rid of language ambiguities ... just recode it and post it. You might even help someone else out in the process! -Ben
  4. These replies remind me of something as well. I absolutely loathe having the loading of settings from a file silently fail to update every relevant data structure. It's quite difficult to provide a full revision system for settings files that can intelligently choose proper settings, let alone detect a version number. My approach is usually just to have the system complain as loudly and obnoxiously as possible if it fails to load any parameter at all. If the customer is loading an old type of settings file, just moan like a cat that wants to come inside when it starts to rain. This might sound picky, but one can spend an hour or more tracking down unexpected behavior caused by some parameter not being loaded and falling back to a default that isn't what you expected ... and it becomes that much more brutal when you've assumed the value was correct because it is, in fact, a valid entry. There's nothing quite like wasting your time trying to figure out a bug in your software that doesn't actually exist, but only looks like one because the proper settings weren't loaded. Less pedantically: I like to check all the assertion booleans coming out of the INI file readers (or equivalent) and raise a nice juicy complaint if any parameter fails to load (roughly the pattern in the sketch below). And, even though I know this is bad architectural form, this is the one time I throw error dialogs from a low-level VI, because I hate the alternative so much. (Sorry to the original poster for the digression, but it's an interesting discussion.)
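     In Python terms, the same "complain loudly" pattern looks something like this (just a sketch; the section and key names are made up for illustration):

        # Load an INI-style settings file and fail loudly if any expected
        # parameter is missing, instead of silently falling back to defaults.
        import configparser

        REQUIRED = {
            "acquisition": ["sample_rate", "channel_count"],
            "logging": ["output_dir"],
        }

        def load_settings(path):
            cfg = configparser.ConfigParser()
            if not cfg.read(path):
                raise RuntimeError(f"Could not read settings file: {path}")
            missing = [f"[{sec}] {key}"
                       for sec, keys in REQUIRED.items()
                       for key in keys
                       if not cfg.has_option(sec, key)]
            if missing:
                # The nice juicy complaint: name every parameter that failed.
                raise RuntimeError("Settings failed to load: " + ", ".join(missing))
            return cfg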
  5. What array size would be guaranteed to house all of the data in its largest form? If you know the upper bound and it's not huge, that's almost certainly your best bet. How versatile you want this to be in the future determines how much work you should put into the concept. Any method of flattening here is probably going to be roughly equal, in that it's pretty easy to do and will produce somewhat readable text; but I suspect that reading it back in and getting a consistent, correct result will be dicey once the requirements change. It sometimes comes down to personal preference, but in my case I've found that building a library of VIs for converting a complex LV data structure to and from a flattened, well-formed CSV file saves more time in the end. This should actually be linked into the recently posted question about interview questions one could use to probe an applicant. There's not really a single right answer. PS: I'm wondering at a philosophical level about this statement: [One problem with this is that there is no provision in LV to save a 4D array to] Do other languages offer something in this regard? Off the top of my head, Python seems like the most likely candidate (see the sketch below). It's interesting because it strikes directly into the land of architectural choices, assumptions, sacrifices, and possibly wrestling matches when there's BBQ and alcohol involved.
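     Since I brought up Python: here's a minimal sketch of what NumPy offers for the 4D-array case (illustrative only; the file names and shape are made up):

        import numpy as np

        data = np.random.rand(3, 4, 5, 6)  # a 4D array

        # Binary round-trip: shape and dtype are preserved automatically.
        np.save("data.npy", data)
        restored = np.load("data.npy")
        assert restored.shape == (3, 4, 5, 6)

        # Text/CSV round-trip: flatten to 2D, record the shape out-of-band
        # (here in the header comment), and reshape on the way back in.
        np.savetxt("data.csv", data.reshape(-1, data.shape[-1]),
                   delimiter=",", header=str(data.shape))
        flat = np.loadtxt("data.csv", delimiter=",")
        restored2 = flat.reshape(data.shape)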
  6. I feel your pain; I had a somewhat brutal education process with DSC a few years ago. It's an amazing tool, but it can be difficult to learn. You might be using an old version, since you bring up BVE. I don't know for sure what BVE is, but there's a chance it has something to do with BridgeVIEW, which was often referred to as BV; DSC is a descendant of BridgeVIEW, and if you're using an old version you might see artifacts of it. You might find the project easier to deal with if you upgrade to the latest version. Shared variables made DSC far, far easier to use, and the latest version of LV lets you search for all instances of a shared variable, which is something I did not have when acquiring a huge DSC project with many floating shared variables. It might sound trivial, but being able to see everywhere a global is written is killer on a huge legacy app. If you are only going to modify it slightly, you need to weigh the various costs of an upgrade. If, on the other hand, you think you'll spend more than a couple of weeks of full-time programming on this, spending the time and money to upgrade to the latest LV & DSC will probably be worth it. You'll most likely break the program and need to recode a few things, but the day or two it takes to do that will be worth mountains of saved debugging time.
  7. I once did something similar: 40 minutes with a projector to convince a few C/C++ programmers that LV might be the way to go. It was in the form of challenges. Each was very short and was written from scratch during the presentation. I'd stay away from trying to show just how LV works, if in fact you're trying to show the utility of LV rather than how to use it. Unless I'm actually being an instructor, I'm unapologetic about flying through tools and screens while they watch. I asked each question and allowed a minute or so of conversation before showing a dramatically short LV solution. Go ahead and let the participants bring up concepts like 3rd-party graphics libraries, linking, compiling, design patterns to decouple the UI from data models, etc. "Oh that's easy, all you have to do is import library X & Y, link them together, change method Z, recompile, then write a hello world with some binary shift functions." Topics:
     1. How would you show the bit representation of a 32-bit unsigned integer as it increases? How long would it take you? (Use a delayed loop and just convert the loop counter to a boolean array.) Sounds almost trivial, but this is not easy to do in C with a graphical representation. 30 seconds to build; this will get their attention at least. (A rough text-language equivalent is sketched below.)
     2. I want to configure this Tek scope to trigger on a rising 0.5 volts, get the data back, and store it in a CSV file. How long would it take you? How would you attack this problem? (Start with a new project, use Tools->Instrumentation->NI Instrument Driver Finder, get the driver. Create a VI, then drop in initialize, configure, acquire, and display on a graph. Show this, then show a single conversion and save to a CSV file.) [This isn't exactly what I did, but it's probably an even better motivator.]
     3. I want to create a limit graph that users can annotate. How would you do this? How long would it take, and what would be your plan of attack? (Single VI. This one takes a little longer. Use a waveform generation VI for sine wave data and plug it into a MinMax picture plot. Change the amplitude control for the waveform generator to a slider, and watch the plot change dynamically. Then, if you're feeling bold, create a little paint program on top of that, just tracking the mouse and left-click and producing points on the graph. If you're not solid in LV, you might want to either toss out the latter idea or practice it beforehand.)
     4. Then show a couple of prebuilt examples, to show that it can handle larger projects. I showed the robot arm and the bouncing cube, and displayed the VI hierarchy. The latter shows that mathematical expressions can still be entered via text, kind of a middle ground for textaholics. They also show how an incredibly terse program can deliver rather complex and accurate results.
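     For comparison, here's roughly what challenge 1 looks like in a text language (Python; illustrative only, the actual demo was a LV loop wired to a boolean-array indicator):

        # Display the bit pattern of an incrementing 32-bit unsigned counter.
        # Limited to 256 iterations here so the sketch terminates.
        import time

        for i in range(256):
            bits = format(i, "032b")      # 32-bit binary representation
            print(f"\r{bits}", end="", flush=True)
            time.sleep(0.1)               # the "delayed loop" part
        print()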
  8. There's more than what is listed, but I'm going to name the ones that pop into my brain at first thought. This might be useful simply as a way of being accountable for them ... if they're acknowledged, I might be forced to make more of an effort to do things the right way.
     A list of my bad LabVIEW habits that it would behoove me to fix:
     1. Maximizing screens
     2. Compulsive saving (esp. Ctrl+Shift+S)
     3. Not using the autotool
     4. When encountering a bug that I can't figure out, I sometimes resort to hack & check style, i.e., change something, run it, repeat 5000 times.
     5. Not taking the time to build my own toolbox, i.e., cleaning up and generalizing modules that worked well and putting them into the palette.
     Extra notes:
     1. This is going to be difficult for me to change. I wish I could spend some time watching some experts develop, because when not maximizing I find myself spending too much time putting the mouse to the scroll bars to get where I want in the VI. The best developers all seem to use non-maximized screens, so I'm thinking there must be something I'm missing. I develop more slowly when using non-maximized screens, but I can see the rather large incremental benefit of not maximizing; I'm just not doing something correctly. I'm currently not maximizing in any case where it isn't clearly warranted, but it continues to be painful, and I don't see a light at the end of the tunnel.
     2. My first impulse when I do something that I fancy is brilliant is to save everything (don't lose that thought!!!). I certainly don't need to tell YOU folks how erroneous this is. I'm a fairly compulsive SCC checker-inner as well, largely because of this; I use SCC to revert changes that I didn't like.
     3. I've been suffering through using it for the past few weeks, and I'm still not completely taken with it. I think this was in a Coen movie ... "I'm a tab man, damnit!" I'm very close to just switching back to not using the autotool at all. I don't feel like I'm losing time by hitting tab and spacebar many times to do what I want. My main problem with the autotool is getting the cursor/tool for selection; sometimes it's only a couple of pixels wide, and the selection point is in a spot I wasn't expecting.
     4. This relates to the fact that sometimes the best thing I can do when I have a real brain-stomper, either a bug or just a difficult algorithm, is to get away from the computer and take a walk, or perhaps just sit down with a scratch pad and pen (is it strange for an engineer to dislike pencils?).
     5. These are modules that aren't candidates for OpenG, but are at least clean and reusable enough that I could save time on new projects.
     General:
     - Most of these are actually not particular to LV, but I use it far more than any other language.
     - I wonder whether particular bad habits are more or less common in people who are self-taught in coding versus those who had more university training, or simply worked with superior SW architects?
     - I also realize that reams of books and posts have been written on these concepts, but sometimes just writing it out yourself is a better route than reading tech blogs in pursuit of improving your own habits. In fact, the latter can be an addictive experience, and can lead to days where you started out with a task in mind and ended up "waking" at 11 am and realizing, jeez, I haven't done anything!
With that recursive irony in mind, an article that I actually dogfooded was Paul Graham's article on removing distractions, particularly cutting your dev machine off from the internet: http://www.paulgraham.com/distraction.html ~~~~~~~~~~~~ Any bad habits you'd like to acknowledge?