Justin Goeres

Everything posted by Justin Goeres

  1. QUOTE (angel_22 @ May 4 2009, 05:04 AM) You can get it from NI: http://sine.ni.com/nips/cds/view/p/lang/en/nid/1010
  2. I submitted a picture of myself to Dork Yearbook and they posted it. Anyone have pictures of themselves that attest to their geek cred?
  3. QUOTE (Aristos Queue @ Apr 12 2009, 12:34 PM) Now that you've brought it up, it would be pretty cool to be able to select an area of a diagram and say "highlight execution only among these nodes, but run the rest of the code without highlighting."
  4. QUOTE (Black Pearl @ Apr 12 2009, 05:08 AM) What security threat, specifically, is it that you're trying to prevent? If you're trying to secure the network traffic between your SVN server and clients, your clients can talk to SVN via https. You can set it up to use password authentication (which your clients can cache so they don't have to enter the password on every update or commit), or SSL keys. If you're worried about client security, like a laptop being stolen while it's got checked-out code on it, then keep the code in an encrypted area on the local drive (TrueCrypt, as you mentioned, would do this). If you're worried about the SVN repository itself being secure, it depends a lot on what type of machine you're hosting the repository on. That having been said, the repository is just a database that's made up of lots of files, so I would think that whatever standard security measures you normally use on the host OS (permissions, good passwords, turning off unnecessary services, etc.) would be steps in the right direction. Since you mentioned that you haven't worked with the SVN process yet, I'll also point out that the folder structure of your repository is defined inside the repository. The host computer knows nothing about what's in there. As far as the host is concerned, the repository is just a big collection of semi-random-looking data. QUOTE Does passwort protected VIs work with SVN (using Tags?)? Password-protected VIs aren't related to (or affected by) SVN at all. The password protection is just a property of the VI file itself that only LabVIEW cares about. SVN sees it as just another file. That having been said, I can't imagine ever using password protection on VIs that are under active development within a team. QUOTE How secure are password protected VIs anyhow? 
In general, password-protected VIs are not secure by any reasonably rigorous definition of security; if you really need to protect your code, you will want to do more than just password-protect it.
  5. QUOTE (crelf @ Apr 10 2009, 07:24 AM) Yep. Upon rereading my post, you're right. It was unnecessary.
  6. QUOTE (bsvingen @ Apr 9 2009, 11:37 PM) But there's the rub: "code that can potentially lower this reliability." You seem to believe that the mere existence of any undocumented features, or experimental features that are left in the software (or the runtime that it executes in) but not exposed to the user, automatically constitutes a lower-reliability system. I say unit test the code, and design the system so that no individual piece puts too much trust in any other piece.
  7. QUOTE (normandinf @ Apr 9 2009, 07:06 PM) Actually, if you stopped the loop before the 4th iteration the result wouldn't be the running average. It would be a weird weighted average of the first measurement (the one outside the loop) with the measurements from each time the loop executed before you stopped it. That having been said, if I were editing the CLAD I'd rephrase choice B as "The average of all measurements taken since the loop started running will be displayed." That's a clearer description of what it's trying to say. EDIT: Doh! Christian posted over me while I was typing!
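The "weird weighted average" described above is easy to demonstrate. Here is a minimal Python sketch of my reading of that kind of diagram (a value seeded before the loop, averaged pairwise with each new measurement on every iteration) -- it is a reconstruction for illustration, not the actual CLAD code:

```python
# A shift-register-style pattern: the average is seeded with one
# measurement taken before the loop, then each iteration averages the
# previous result with the newest reading.

def pairwise_average(initial, readings):
    avg = initial
    for r in readings:
        avg = (avg + r) / 2.0  # older values lose half their weight each pass
    return avg

# After three iterations the initial measurement still carries weight 1/8,
# so the result is an exponentially weighted average, not the plain mean.
print(pairwise_average(10.0, [20.0, 20.0, 20.0]))  # 18.75 (plain mean is 17.5)
```

Stopping the loop early therefore gives a result that depends on how many iterations ran, which is why neither answer choice quite describes it.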
  8. QUOTE (Darren @ Apr 9 2009, 07:20 AM) History shows that I choose between "octothorpe" and "pound sign" somewhat randomly, but with a weighting factor that skews toward "octothorpe" the more I drink.
  9. QUOTE (bsvingen @ Apr 7 2009, 11:12 PM) Please produce these so-called statistics. When the brakes fail on my car (or the electronic systems that control them), I still pull the mechanical emergency brake lever. QUOTE The other reason is that digital control enables the processes to run much more efficient because they are pushed far beyond what previously would be called a safe, or even operational state. Take for instance the fuel injection in any ordinary modern car (common rail TDI). If the controller shuts down, the engine stops in the same manner as if the crank broke, but this is an unproblematic event (for anyone except the driver). A much bigger problem occurs if the controller does something wrong, like too much fuel, uneven fuel distribution, too much boost etc due to a software error. This would be the same as if the crank suddenly would change the relative angles of the pistons (not that it is possible, but it would produce similar problems). You act as if (1) there are no mechanical or electrical interlocks in place anywhere in the system to prevent the software from actually getting what it wants, and (2) there aren't independent subsystems on the car that impose other software-based interlocks on each other. I assure you there are. QUOTE All larger modern ships are controlled "by wire" with very sophisticated, almost AI-like navigational and maneuvering systems using GPS, radar, ultrasonic sensors, inertia sensors etc, most of them don't even have steering wheels, and those that do have it for show. A software error due to some hidden and undocumented experimental feature for the reason of "testing" something new, is completely out of the question. Airbus air liners use fly by wire, and has done so for decades already. It uses a system with several computers working in parallel, and there is no manual or mechanical backup whatsoever. Again, I dispute your claim that there is "no manual or mechanical backup whatsoever" in those systems.
I will absolutely guarantee that there are thousands and thousands of electromechanical interlocks throughout those systems that exist solely to prevent the software from getting what it wants if it tries to do something stupid. That's not to say that something couldn't be overlooked and that a software error couldn't cause a catastrophe, but in fact those designs address that concern, too, by using multiple redundant levels of interlocks. Your continuing claim that there's nothing in place to anticipate the possibility of a software malfunction is patently false. QUOTE The point is, computerized control is as much a part of a system as the crank shaft is a part of an engine. This, we agree on. QUOTE The same reliability and quality is expected of the software in those controllers as is expected of the alloy content in the pistons, or the structural integrity of the ship's hull, or the main beams of the wings. Yes, but the fact that we demand reliability from the control software doesn't mean that we've stopped using mechanical and electrical safety systems. QUOTE Undocumented and unreleaved features in the software is as expected and assumed as you would expect to have part of the ship being secretely made of some unknown experimental paper foil because one of the engineers had a "bright" idea. That's what unit testing is for. For the record, I also agree with crelf that you're confusing undocumented features with experimental features. Look, I'm not saying that every developer in the world should go plunging willy-nilly into every undocumented (or experimental!) nook and cranny of their development tools. However, if what we're talking about is the ethics of undocumented or experimental features in LabVIEW, then the ethical risk of using those features is borne by you, the developer. Also borne by you is the ethical risk of shipping software that doesn't do what it's supposed to. 
So if you're not willing to assume the risks associated with hidden/undocumented/experimental features in a given situation, then you definitely shouldn't use them. And in any case, you should thoroughly test your code to prove that it works correctly. But you can't say that NI is a fundamentally unethical company just because these features exist. Unless, like I said, you intend to impugn every single software development group in the entire world.
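The interlock argument above can be made concrete with a toy sketch. This is purely illustrative -- the function names and the limit value are invented, not drawn from any real automotive or avionics system -- but it shows the shape of the idea: an independent safety layer clamps whatever the control software requests, so a software fault cannot push an actuator past a safe limit.

```python
# Invented example: an independent interlock layer between the control
# software and the actuator. The limit lives outside the controller, so a
# controller bug cannot override it.

MAX_SAFE_FUEL_RATE = 50.0  # assumed hardware limit, arbitrary units

def interlock(requested_rate):
    """Independent safety layer: clamp the software's request to safe bounds."""
    return min(max(requested_rate, 0.0), MAX_SAFE_FUEL_RATE)

def buggy_controller():
    # A software error requests absurdly too much fuel.
    return 1e6

print(interlock(buggy_controller()))  # 50.0 -- the interlock vetoes the request
```

Real systems layer many such checks (mechanical, electrical, and software) so that no single failure, including a software one, gets what it wants.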
  10. QUOTE (bsvingen @ Apr 6 2009, 12:56 AM) But that has nothing to do with the issue of whether Google Chrome has undocumented features. It has to do with the fact that Google Chrome, from a technical standpoint, doesn't have the capabilities to accomplish the task at hand. (Unless one of those "undocumented features" is "continuous monitoring and control of thousands of sensors and actuators, with multiple levels of redundancy and guaranteed real-time access.") QUOTE software", nobody cares about documentation of the compiler as long as the software works as intended. I write "software" and I care about documentation of the compiler a lot. QUOTE If you work as a consultant in engineering, you have no liability what so ever for the results you produce, and can use whatever software you choose. But if you produce and sell controllers for power plants, biochemical factories, car engines etc, you can be held responsible, at least in part if the controller faults due to any error (software or hardware). If it turns out that the error happened because of unrevealed/undocumented features, and you knew that the compiler had tons of them without saying anything, you are most likely in deep sh*t. I don't think that's relevant. In any significant hardware system with serious safety risks like you've mentioned, the hardware must always fail to a safe state. A well-designed system will do this without the software being involved. So the idea of whether the software has "unrevealed/undocumented features" is moot, because a good hardware design specifically assumes that's the case and works around it. QUOTE Another point is that now we know that NI is shipping software with lots of undocumented and unrevealed features consisting of random snippets of code from arbitrary software engineers, most likely with little or no experience in any real life industry at all. I think that NI would strongly disagree with that (although I can't speak for them specifically). 
The features aren't "random," the software engineers most surely aren't "arbitrary," and you have no idea about their industry experience. QUOTE In fact, it is so much of it that it would be too expensive to either remove it or document it. I understand that this practice allows a faster development paste, but this is purely a pragmatic reasoning. I mean, open source software allways has a "stable" branch that is fully "documented" (open) and does not contain experimental code. Any changes done there is to fix bugs. The experimental code is reserved for "nightly builds" and betas, but at least it is documented. What NI is doing is to include the experimental code in the stable branch with no documentation what so ever because the source is closed. I would prefer the open source way. IMO that is much better ethics. In terms of shipping a product, I usually prefer pragmatism over ideology, so I guess it doesn't bother me. To each his own.
  11. QUOTE (bsvingen @ Apr 5 2009, 08:30 AM) You're equivocating. Earlier in the thread you said "The only ethical correct thing to do is to document everything, also unrevealed and untested features. All undocumented features should be removed." Now you're saying that you can't (won't/shouldn't) use software with "tons of undocumented features." Which is it? None, or just less than "tons?" If "none" is what you really meant, I would suggest you put down the mouse, shut off the computer, and go find a new profession that doesn't involve computers in any way. Your operating system, your web browser, your word processor, and any reasonably useful development tool you can think of have hundreds, if not thousands, of undocumented features. To say that this somehow implies that the entire software development world is fundamentally unethical strikes me as a rather novel idea. Either that or you're one of the most talented trolls ever to grace this forum.
  12. QUOTE (PaulG. @ Mar 19 2009, 11:22 AM) NASA already built their issue-tracking system on Bugzilla: http://news.cnet.com/8301-13772_3-10097880-52.html
  13. QUOTE (Cat @ Mar 19 2009, 04:27 AM) Yeah, that's actually a pretty good explanation. So you're already like 50% of the way there. A key point (for me) is to remember that software is always an abstract representation of a system that you could (in principle) build in real life. And in real life, we interact with literal objects (like a scissors) that have literal properties (pointy, left-handed, jammed) and literal methods (cut). So it makes sense to impose rules on our software abstractions that sort of mimic the objects we deal with in real life, because it's easier for us to visualize them that way. To carry my metaphor way too far, if you were "cutting paper" in software, there's nothing stopping the paper from somehow knowing that the scissors is left-handed. That's essentially because computers are magic. But it's probably dangerous to let the paper know that, because the paper really shouldn't know whether a left-handed or right-handed person is cutting it. Indeed, what if you later write "Scissors 2.0 -- ambidextrous!"? Now all your paper will be broken, because it makes assumptions about the scissors. All it has to know is "I am being cut." Just like in real life. Again, there's nothing prima facie in the world of computers that prevents paper & scissors from knowing way more about each other than they really should. So smart humans have come up with programming paradigms that allow us to enforce restrictions on ourselves, to help us create better abstractions. OOP is the dominant way of doing that. So there. That's encapsulation and information hiding. Now it's someone else's turn to explain inheritance. And yes, at first OOP does feel like you're adding code on top of what you're already doing. In fact, it's possible that you're already doing good OOP-like things in your code to begin with. But using the native OOP takes some of the management work out of your code and puts it in the LabVIEW compiler.
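The scissors/paper metaphor above maps neatly onto a toy class example. Python here, since LabVIEW diagrams don't paste into text, and the class names are mine: the Paper only sees a minimal "cut" interface, so it cannot depend on whether the scissors are left- or right-handed.

```python
# Encapsulation sketch: the scissors' handedness is a hidden detail that
# the paper never sees. All the paper knows is "I am being cut."

class Scissors:
    def __init__(self, left_handed=False):
        self._left_handed = left_handed  # internal detail, hidden from Paper

    def cut(self, paper):
        paper.be_cut()  # Paper learns only that a cut happened

class Paper:
    def __init__(self):
        self.pieces = 1

    def be_cut(self):
        self.pieces += 1

paper = Paper()
Scissors(left_handed=True).cut(paper)
print(paper.pieces)  # 2 -- same result regardless of handedness
```

Because Paper never touches `_left_handed`, shipping "Scissors 2.0 -- ambidextrous!" can't break any paper.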
  14. QUOTE (mesmith @ Mar 19 2009, 04:53 AM) That's a key point. Just to play the contrarian, do you really need to compress the data at all? If you don't compress the data, how big will your files be after six months or a year? Disk space is practically free. You can get a 1TB external drive for $100. Unless you're storing more than that amount of data (or need to transport it on a thumb drive or something), just buy one of those. If your data is uncompressed it's also easier to work with. You don't have to go through the gymnastics of going through a zip layer every time you want to look at it.
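For a sense of the trade-off described above, here's a small sketch using Python's standard gzip module: compression shrinks repetitive logged data considerably, but every read then has to go through the extra decompression layer. The sample data is invented.

```python
import gzip
import io

# Repetitive, log-style data compresses very well...
data = b"timestamp,channel,reading\n" * 1000

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(data)
compressed = buf.getvalue()

print(len(data), len(compressed))  # compressed is far smaller...

# ...but every access to the data now requires this extra step:
assert gzip.decompress(compressed) == data
```

Whether that extra step is worth it depends on how much data accumulates versus how often you need to open and inspect it.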
  15. The reason you can't get a block diagram reference in a built EXE is because VIs in built EXEs have their block diagrams removed during the build process. There's literally no block diagram to get a reference to.
  16. QUOTE (bsvingen @ Feb 25 2009, 04:17 PM) Dataflow code (i.e. what you write in LabVIEW) is fundamentally the expression of relationships between pieces of data. That's one of the key differences between dataflow and imperative programming. And those relationships (in this case, the wires) determine the execution order of elements of your block diagram. If the error wires, or any other wires for that matter, were hidden, you would be hiding key information about your program's execution from yourself (not to mention anyone else unfortunate enough to see it!). Just for starters, you'd be unable to tell if a particular new wire you create would form a cycle (circular data dependency). And if it did, you'd be equally unable to tell why your diagram was suddenly broken. At runtime, the error wire also contains critical information about the outcome of whatever operations preceded it. That information is used to make all kinds of decisions, as other people pointed out. In the process of making these decisions, the error wire can branch and merge in complicated ways that affect the execution flow of the program. Masking out that kind of information would break the entire relationship between looking at your block diagram and understanding how its code executes. Hiding the error wires would break the whole dataflow paradigm. It would be like hiding all the individual "if" statements in C code, but not the code inside the { } blocks that follow them, or something. I would suggest that if you really are having trouble with error wires polluting your block diagrams, you've got too much going on in your diagrams and should use more subVIs.
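The cycle point above can be made concrete. Wires in a dataflow diagram are data dependencies, and execution order is a topological ordering of the node graph; a hidden wire could silently create a cycle. A minimal Python sketch (node names invented; every node must appear as a key in the adjacency list):

```python
# Depth-first cycle check over an adjacency list: a "gray" node seen again
# on the current path means a circular data dependency (a broken diagram).

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    state = {n: WHITE for n in graph}

    def visit(n):
        state[n] = GRAY  # on the current path
        for m in graph[n]:
            if state[m] == GRAY or (state[m] == WHITE and visit(m)):
                return True
        state[n] = BLACK  # fully explored, no cycle through here
        return False

    return any(state[n] == WHITE and visit(n) for n in graph)

acyclic = {"read": ["scale"], "scale": ["write"], "write": []}
print(has_cycle(acyclic))                   # False -- a valid diagram
print(has_cycle({"a": ["b"], "b": ["a"]}))  # True -- a circular dependency
```

If the wires were hidden, you would lose exactly the information this check depends on.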
  18. QUOTE (Aristos Queue @ Feb 25 2009, 08:15 AM) The worst part of this idea, in my opinion, is that it would've made a really great April Fool's joke. And now you've used it all up.
  19. My recollection is that it means the default value of that control is of a class other than the class type of the control. E.g. your control is of type Multimeter.lvclass but the actual default value of the front panel terminal is of something like Keithley.lvclass, which is a child of Multimeter.lvclass.
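A hypothetical Python analogue of the situation described above (the class names mirror the post; Python classes stand in for .lvclass files): a value whose declared type is the parent class, but whose actual default is an instance of a child class.

```python
# The "control type" is the parent class, but the default value stored in
# it is an instance of a child class -- legal, because a child is a valid
# instance of its parent type.

class Multimeter:            # stands in for Multimeter.lvclass
    pass

class Keithley(Multimeter):  # stands in for Keithley.lvclass, a child class
    pass

default_value = Keithley()

print(isinstance(default_value, Multimeter))  # True -- it satisfies the parent type
print(type(default_value).__name__)           # Keithley -- but it's really the child
```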
  20. I've never done the specific thing you're talking about, but if it's possible, it will probably involve venturing into the directory containing the VIs that LabVIEW uses to (among other things) create accessor and override methods. If you poke around in there you will find .vit templates for read & write accessor methods that you could probably replace with your own versions if you're into that. What you're talking about, though, is replacing the template for override VIs. I don't see a .vit anywhere for those, so it's apparently not as straightforward as the accessor methods. That having been said, there is a VI called CLSUIP_CreateOverride.vi that is probably implicated in the process. It's all very black-box because everything is password protected, but if you felt really, really brave you might be able to work out how to bend things to your will. And if you figure it out, please tell the rest of us.
  21. QUOTE (crelf @ Jan 12 2009, 07:03 PM) It can be, but it's pretty much a write-once-per-project item (which you can make a fairly decent template for). Also, it can be part of a one-click build process that lets anybody on your team build the app (and documents the build process, assuming your code is commented). QUOTE but I'd prefer it if the LabVIEW installer builder didn't go around deleting files it doesn't own. "Life is never easy for those who dream."
  22. This has been a problem for me, too, for a long time. I eventually just gave up and stopped including installer directories in SVN. If I need to commit something, I commit a zip file of the installer. However, this topic reminded me that a similar problem exists when using Keynote (or Pages) on a Mac with SVN-controlled documents. Namely, Keynote (or Pages) blows away all the .svn folders when it saves a file (this is not technically a bug, for reasons that are beyond the scope of this board, let alone this thread). A while back, Omar pointed me to a script that restores the missing .svn stuff, which has solved that problem for me. It's an OS X (/Linux/Unix) shell script, but it could be pretty easily replicated in a Windows batch file. I don't recall exactly how it works, but the sequence of operations is essentially this: (1) Find the target directory's location in SVN. (2) Export just the SVN metadata from that branch of the tree to a temp location (I'm guessing at this -- it definitely doesn't take long enough to be doing a full export from scratch). (3) Copy all the SVN metadata you just exported from its temp location into the target directory; this recreates the missing .svn folders. (4) Delete the temp files. When it's done, voila! The .svn folders have magically reappeared and all is right with the world. In the case of Keynote/Pages, I just run the script after I save and before I commit. It's a minor hassle, but one I can live with.
  23. My favorite part is "The aardvark asked for a dagger." Also, make sure to watch it twice so you can read the crawl at the bottom of the screen.
  24. I've seen errors like this in a few different situations: (1) When I was using a cheap USB->RS232 adapter, and either the adapter hardware was no good or the drivers were no good. The solution was to try a different (and usually more expensive) adapter. (2) When I was running a USB->RS232 adapter at a higher baud rate than it actually supported, and didn't realize it. The solution was to change the speed of the port. (3) When the CPU my program was running on was really heavily loaded (this is probably not your problem; I mean really heavily loaded, like 100% CPU usage for 30 minutes straight). I suppose it could also be a problem with your sensor device, but that's unlikely.