Everything posted by Daklu
-
Understanding the scientific method and being a scientist are two very different things. When they say 31k "scientists," the impression is that these are PhD's, researchers, etc., who have some level of expertise in the area being discussed, not a bunch of reasonably intelligent people whose knowledge is based on second- and third-hand information gathered off the internet. Having an opinion requires no qualification whatsoever, other than perhaps a pulse. But the question isn't whether I'm qualified to have an opinion, it's whether I'm qualified to sign the petition. The petition specifically says, "There is no convincing scientific evidence..." I cannot, in good conscience, sign that petition because I don't have the expertise, background, or understanding of the body of climate research to make that claim. In my opinion, people who have signed the petition without a full understanding of the science and research aren't much different than the scientists at the CRU. Both are using disingenuous methods to pursue political goals.

So to answer your question, I get to say whether or not I'm qualified to sign the petition. By the standards I set for myself, I'm not. If you feel you have done the research and have enough knowledge to sign the petition, I am happy for you. I wish I had the time to delve into the subject enough to satisfy myself. My comments were not intended to be a slight against you or anyone else.
-
I've been working on developing a state diagram to document what our application has morphed into.* The main loop uses a QSM architecture with ~35 states. Some of the states queue up several states to execute in sequence. (Woo hoo! A queued sequence machine! ) Some of the states don't queue up any states after executing, relying instead on a previous state to drop enough states on the queue. (This is the kind of thing I was thinking of when I referred to a "procedure machine" here.) Some states queue up a new state under certain conditions, but drop to idle under other conditions. Trying to map it all out is dizzying.

My question is about those states that do not queue up a new state. As I understand it, a state diagram shouldn't have a single state listed more than once. How do you go about diagramming a situation where the previous state could be one of many possible states and the following state depends completely on what was already put in the queue?

I have the distinct impression that what we ended up with may use a state machine architecture, but it's not really a state machine. For one thing, I assume that for it to be a state machine you have to be able to model it. Then again, I still don't grok the Moore vs. Mealy state machine models or how they apply to software, so maybe what we have is a Mealy machine and it is possible to model it clearly...

*Yes, I know the diagram is supposed to come before the code. Managing this headless chicken of a project was out of my hands. The best I can do at this point is try to contain the blood.
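For what it's worth, here's a rough sketch of the dispatch loop I'm describing, written in Python only because I can't paste G here. The state names and placeholder functions are invented for illustration, not lifted from our code.

import collections

def configure(): pass          # placeholder for the real state's work
def acquire(): return True     # placeholder; pretend it reports success/failure
def report(): pass             # placeholder

def run_qsm():
    # Minimal queue-driven dispatch: each case may enqueue zero or more
    # follow-on states; when the queue is empty the machine falls back to Idle.
    queue = collections.deque(["Init"])
    while True:
        state = queue.popleft() if queue else "Idle"
        if state == "Init":
            # one state queuing a whole sequence -- the "queued sequence machine" part
            queue.extend(["Configure", "Acquire", "Report"])
        elif state == "Configure":
            configure()   # queues nothing; relies on whatever Init already put on the queue
        elif state == "Acquire":
            if not acquire():
                queue.appendleft("Acquire")   # under some conditions queue another state,
                                              # under others queue nothing
        elif state == "Report":
            report()
        elif state == "Idle":
            break         # or block here waiting for an external event

run_qsm()

The diagramming problem shows up in the "Configure" case: on paper its successor is "whatever happens to be next in the queue," which a classic state diagram has no good way to express.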
-
No worries... I'd get it to you one way or another....
-
I read through that link a bit and calling them all "scientists" is... uhh... disingenuous. Apparently all you need is a formal educational degree "at the level of Bachelor of Science or higher in appropriate scientific fields." 2.6k of the signers are mechanical engineers. I'm a mechanical engineer and I certainly don't feel qualified to sign the petition. 500 aerospace engineers. 7.2k general engineering degrees. 2.1k electrical engineers. 400 metallurgists!? Huh? Entomology? Animal science? Medicine? I believe some people who signed the petition reviewed the available data and are qualified, but there's no way I believe all 31k of them are.
-
I mostly use Star UML. I've also used Visio but to be honest it is a terrible UML tool.
-
Cool trick with NI-Library! That's definitely kudo-worthy! It never occurred to me to copy the template over to the layers either. If you had posted that in a separate message I could've given you two kudos!
-
The purpose of the OOP Design Challenge is to learn about different design approaches to solving a particular problem. By sharing our ideas and discussing the pros and cons of different solutions, hopefully we all become better informed of the consequences (both good and bad) of different approaches and more confident that the decision we make is correct for our situation. It's important to remember there is no single "right" answer. If the code meets the minimum requirements it is "right" by definition. Different solutions will provide flexibility in different areas of the software. The value is in discussing the tradeoffs associated with each design.

This is not intended to be for advanced programmers only. Programmers of all skill levels are encouraged to contribute ideas and ask questions. You don't need to have a complete solution worked out. If you have the beginning of an idea but you're not sure it will work, post it! That's what this is for. Although I have titled this "OOP Design Challenge," it is open to any kind of solution you think makes sense. It can be based on OOP, libraries, action engines, xcontrols, .Net/ActiveX components, or any other scheme you dream up.

------------------------

So... challenge #1 generated about as much interest as a McDonald's at a PETA convention. My fault. The problem was too big, too complex, and not described well enough for any meaningful discussion. Let's try a different problem I recently encountered. This one should be much less complicated; hopefully it will generate some ideas.

Scenario: We have written a test sequencer for an automated RF test station. All tests the sequencer can run derive from an abstract parent test case with Init, Execute, and Abort methods. The station can be configured to test up to 4 devices sequentially by using a switchbox to route the RF signals from each DUT to the appropriate test instruments. The (simplified) sequencer program flow is:

FOR EACH deviceUnderTest
    IF deviceUnderTest.isEnabled = True THEN
        FOR EACH testCase
            IF testCase.isEnabled = True THEN testCase.Execute(commonParameters)
            // commonParameters can be modified to include additional data as needed
        NEXT testCase
    END IF
NEXT deviceUnderTest

One source of measurement error is that the RF signal losses vary according to test position. We have an external calibration process calculate the signal loss along each applicable endpoint-to-endpoint path and record it in a calibration file. The table for each unique endpoint combination is in the form of frequency-signal loss key-value pairs. In other words, the DUT1-Instrument1 path will have a table of frequency-signal loss pairs, the DUT2-Instrument1 path will have a different table of frequency-signal loss pairs, etc.

Problem: The test cases need to pull values from the appropriate table in the calibration file to apply corrections to the instruments and measurements. What's your strategy for adding this functionality into the sequencer? The current DUT number is passed to the test cases as an enum in the commonParameters cluster. DUT number does not change during a test case. The test cases "know" what instruments they need to connect to; however, a test case may use more than one instrument and need information from more than one table. We can influence the data format and type of file the calibration process creates as long as it remains human readable, so feel free to extend your design into that space as well.

---------------------------

I'll post my implementation in a couple days.
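To make the calibration-table description concrete (this isn't a solution to the challenge, just an illustration of the data shape in a text language; the names, the (DUT, instrument) keying, and the linear interpolation are my own assumptions):

import bisect

class PathLossTable:
    # Frequency -> loss (dB) table for one DUT-to-instrument path,
    # linearly interpolated between calibrated points.
    def __init__(self, points):                 # points: [(freq_hz, loss_db), ...]
        pts = sorted(points)
        self.freqs = [f for f, _ in pts]
        self.losses = [l for _, l in pts]

    def loss_at(self, freq_hz):
        i = bisect.bisect_left(self.freqs, freq_hz)
        if i == 0:
            return self.losses[0]
        if i >= len(self.freqs):
            return self.losses[-1]
        f0, f1 = self.freqs[i - 1], self.freqs[i]
        l0, l1 = self.losses[i - 1], self.losses[i]
        return l0 + (l1 - l0) * (freq_hz - f0) / (f1 - f0)

class CalibrationData:
    # One PathLossTable per (DUT position, instrument) path, e.g. ("DUT1", "Instrument1").
    def __init__(self, tables):
        self.tables = tables

    def loss(self, dut, instrument, freq_hz):
        return self.tables[(dut, instrument)].loss_at(freq_hz)

# A test case's Execute could then pull its correction from commonParameters, e.g.:
# correction = commonParameters.calibration.loss(commonParameters.dut, "Instrument1", 2.4e9)

How that CalibrationData-like thing gets into the test cases (through the commonParameters cluster, a global, a singleton, dependency injection, etc.) is exactly the design question the challenge is asking.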
-
Good video. I hate going through and changing the class colors. Your templates will certainly save me time. I've struggled with using the templates as they don't always behave the way I intuitively expect, especially when adding text. I've seen some discussion about it elsewhere but I don't remember what it said or where I saw it. As a completely unrelated aside, I didn't know you are Aussie. For some reason I had it in my head you're in North Carolina, though as I look around LAVA now I don't know why I thought that.
-
Mark, do you create new log file singletons for each application or do you have a general one you reuse?
-
Labview anti-pattern: Action Engines
Daklu replied to Daklu's topic in Application Design & Architecture
Lots of good comments! Too many for me to quote and reply to, so I'll respond in general...

First, I'm not sure I made it clear in my original post: I believe there is a clear difference between functional globals and action engines. Functional globals are okay as far as I'm concerned. It's the action engine part that I believe is a problem. Also, there are two different questions here: 1. Should AE's be exposed as part of an api, and 2. Is it okay to use AE's inside a lvlib? My original intent was to address exposing AE's as part of the public api, though I do address both questions below. Predictably, my answer to both questions is no. (With the possible exception of wrapping AE's in libraries as a way to transition programmers from public AE's to classes.)

Two things are required for something to be considered an anti-pattern:
1. Some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
2. A refactored solution exists that is clearly documented, proven in actual practice, and repeatable.

From my point of view, AE's have not always been anti-patterns, since at one time they were the only way to create the necessary behavior. It's the recent development of classes that relegates AE's to the anti-pattern category.

To address the idea of embedding AE's in libraries and exposing public methods: you can get many, but not all, of the benefits of classes by doing that. My question is, if you're going to go to all that effort, why didn't you just make it a class in the first place? That's what it's for. The two main differences between libraries and classes are inheritance and state information. Classes are designed and intended to maintain state information. When I'm using a class I expect it to maintain appropriate state information. Libraries do not have an inherent ability to maintain state information. When I'm using one I expect it not to maintain state information; I expect every call to be independent of every other call. By maintaining state information in a library you are creating a construct that behaves counter-intuitively to common programming paradigms, plus you forever lose the advantage of inheritance.

You can easily create a class in a way that allows you to "wirelessly" pass data around if that's important to you, and although personally I find myself moving away from those implementations, it's available if you want it. If you implement a class using a private action engine, that's your business. As a class user I don't care how you implement the behavior, I only care that the behavior remains consistent. (Though I do think that by using an AE internally you are making the class harder to maintain.)

Getting back to publicly available AE's, there are other reasons why they are in general a bad idea. One of the principles of OOP is the "Open-Closed Principle." This principle says that code should be "open to extension, but closed to change," meaning it's okay to extend a class by adding new data and methods, but you should avoid changing code and data that already exists. Any time you change pre-existing code you are creating an opportunity to introduce bugs in code that has already been debugged and validated. Now you have to debug and validate that same code again. In the best case scenario you have a complete set of unit tests that you can run to verify correct behavior. In the worst case you unwittingly break important code someplace else and don't discover it until sometime later, when correcting the mistake is much harder.

You are severely limited in your ability to add functionality to an AE without violating the intent of the open-closed principle. The only circumstance in which you can is if your function selector is a string, integer, or some other native data type AND you either have an open con pane connector to accept the new data or the new data matches the format of an already existing con pane connector. Do you use a typedef'd enum to select the AE function or a super cluster to encapsulate all the data inputs? Out of luck. Those types are part of the api the AE exposes. Changing those types is just as unpredictable as, say, changing an I32 to a string.
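As a loose analogy (AE's are graphical, so this Python sketch only illustrates the open-closed point; all the names are made up):

# Action-engine flavor: one routine, a selector, shared internal state.
# Adding a "new action" means editing -- and re-validating -- this existing code.
_count = {"value": 0}

def counter_engine(action, value=None):
    if action == "increment":
        _count["value"] += 1
    elif action == "reset":
        _count["value"] = 0
    elif action == "read":
        return _count["value"]
    # Want an "add by value" action? The selector, the con pane, and this body all change.

# Class flavor: the existing, already-validated methods stay untouched;
# new behavior arrives as a new method or subclass (open to extension, closed to change).
class Counter:
    def __init__(self):
        self._value = 0
    def increment(self):
        self._value += 1
    def reset(self):
        self._value = 0
    def read(self):
        return self._value

class StepCounter(Counter):
    def add(self, amount):      # extension without touching Counter at all
        self._value += amount
-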
Labview anti-pattern: Action Engines
Daklu replied to Daklu's topic in Application Design & Architecture
This is an excellent idea... I had not thought of that. I believe OOP in and of itself is not hard. Most people understand the concepts pretty quickly. What's hard is figuring out how to design it correctly. As I review the OO mistakes I've made over the past several years, what it almost always boils down to is that my api was wrong. (Usually by trying to make a class do more than it should.) OOP forces you to make api decisions that could have long term consequences. It's making good api decisions that is hard, not anything inherent in OOP.

By wrapping all your AE's in their own libraries and only exposing certain methods (this is the 'encapsulation' part of OOP) you get practice developing api's without the stress/stigma of using OOP. You can present it to your coworkers simply as "bringing a little more structure and organization to our applications." Hopefully over time the thinking will shift from, "I need to do <something> and this AE has that data, so I'll just add another action to it," to "I need to do <something> and this AE has that data, but does this new action make sense in the context of its api?"

I wouldn't advise anyone to start learning OOP by designing a large reuse library. You certainly can (I did) and you'll learn a lot, but don't expect to be successful (I wasn't). I wouldn't even advise anyone to start learning OOP by designing classes they expect to reuse in another application. Try it on a single application. Forget about inheritance; focus on encapsulation. If you are in an environment where you have multiple developers working on the same application, the most immediate payoff is more structured code that is easier to understand and easier to predict how it will react to changes.

Don't get me wrong, inheritance is important and useful. Sometimes it can make an otherwise difficult change a piece of cake. I've seen programmers implement some really cool things using inheritance. For those who are venturing down the OOP path on their own, I think they'll get more bang for their buck by concentrating on encapsulation and good api development. On the other hand, if you have an experienced OOP developer handy who can help guide the design, jump on the bus and enjoy the ride.

That's very cool. I hadn't seen it. I know FG's are very fast, so if raw speed is a concern that might be a better route to go than holding the data in the class cluster. I do wonder what the performance change would be if the AE (but not the FG) were refactored into a class.
-
Recently there have been a few discussions about action engines as they relate to specific applications (here and here), but the topic is broad enough that I felt it deserves its own thread. Ben, you left out part of the definition... AE = Action Engine >>> loosely a Functional Global Variable, and a commonly used Labview anti-pattern.

<---Disclaimer---> Ben, these comments are not directed at you. I know you're a CLA and already understand much of what I say below. I'm simply using your comment as a springboard to jump on my soapbox. For those that like to use AE's, please don't take this as a personal attack; I'm referring to the AE as a programming construct and not passing judgement on anyone who uses them. Also, this is very much an opinion and is based on my own observations. My Labview experience is somewhat narrow, centering around single-computer desktop applications, so there may be situations where an AE is the best solution. I don't claim to know all... </---Disclaimer--->

Simple functional globals have a place. Any time a functional global crosses the line into an action engine I'm looking to replace it with a class. Using an AE may solve the immediate problem sooner, but it also imposes more constraints on unknown, future modifications. You are painting yourself into a corner, and the longer you stick with it the harder it is to get out.

To expand a bit on what I said about layering api's here: when you create an action engine you are creating a chunk of data with associated actions that apply only to that chunk of data. Those actions define the api your AE supports. Now suppose the AE grows to the point where you want to shift some internal behavior off to a sub vi. Like it or not, that sub vi has just become part of the public api. You cannot change that sub vi without considering how it will affect code in countless other places. "But," you say, "I know that sub vi isn't supposed to be used anywhere but in the AE!" Easy to say now. What will you do when, in order to fill a change request, you have the choice between a quick fix by using the sub vi someplace you didn't intend 'just this once,' or a more time-consuming fix by changing the behavior of the AE itself? Furthermore, when somebody else works on that code, how do you convey to them the difference between, and enforce the correct usage of, the public api versus the private api?

Every VI we write is essentially a mini api. Every action a public api exposes places constraints on how you can change that api in the future. AQ recently talked about the problems associated with publicly accessible VIs in vi.lib. Those VIs are part of the public api, even though they were not intended to be used by the public, simply because they can be used anywhere. Trying to add functionality to the intended public api without changing the behavior of the unintended public api is extremely difficult, and sometimes impossible.

Good api design involves exposing, or making available to the public, only what is necessary and no more. Applications that don't use classes or libraries are programmed with, in effect, one giant public api. Any vi can (and probably will) be used anywhere, which often results in a very complex vi hierarchy and interactions that are difficult to disentangle. We try to manage that complexity by organizing our projects into subfolders and using naming conventions. Unfortunately naming conventions and disk hierarchies cannot enforce the intended usage. For that we need classes and libraries. </soapbox>
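P.S. For anyone who prefers text to hand-waving, a tiny sketch of the "helper sub vi becomes public api" problem. Python is used purely for illustration and the names are made up; Python privacy is only a convention, where LabVIEW libraries and classes actually enforce access scope.

# With free-standing VIs, the "internal" helper is just as reachable as the
# intended entry point, so sooner or later somebody calls it directly.
def format_reading(raw):            # meant to be internal
    return round(raw * 0.001, 3)

def log_reading(raw, log):          # the intended public entry point
    log.append(format_reading(raw))

# Inside a class (or a library with access scope), the helper is private:
# the public surface is only what the author chose to expose, and the
# helper can be changed or deleted without breaking anyone else's code.
class ReadingLog:
    def __init__(self):
        self.entries = []
    def log(self, raw):             # public api
        self.entries.append(self._format(raw))
    def _format(self, raw):         # private helper, free to change
        return round(raw * 0.001, 3)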
-
[--- Moved comments to a different thread ---]
-
Grats! I might have to pick your brain a bit... He has to say that... people at work read the forums...
-
I presume you are considering an AnInst class with a name property, and each object instance will have a name that refers to a different instrument? Is your dll thread safe? What happens if one object attempts to make a dll call while another object has a call pending? Many dll's get upset when asked to do more than one thing at a time. If yours is not thread safe, you'll probably need to figure out a way to throttle your calls. (DLL threading is a little off the edge of my experience, so I may be wrong.) I have a few different ideas of how to go about doing that... I'd have to prototype them to see how well they work.

At its core, you are correct. (Though I wouldn't bother with setting a maximum length.) Whether any additional functionality is required depends on your setup. How do you want to handle situations where a reaction object is waiting for a free instrument? Does the reaction object execution hang? Does the manager put the request in a queue and return control to the reaction object so it can continue to monitor itself?

Fair enough, though personally I would still create a by-ref driver object first and drop that in a singleton wrapper, simply to separate the core driver functionality from the multi-threaded singleton functionality. In my own code I tend to favor highly layered designs. I read an api design book recently that I thought had excellent advice: "Make the common things easy and the difficult things possible." Appropriately layering apis actually makes them easier to develop, easier to test, and easier for users to understand. The hardest part of it is convincing yourself that VIs don't have to do lots of stuff inside them to be valid--it's okay to sometimes create VIs that are nothing more than a wrapper for another VI.

Lucky dog. Our (internal) customers still aren't sure what they want, and we're 96% finished.

No problem at all. If I don't spout off about my ideas then I can't learn from others correcting me.
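P.S. One simple way to "throttle" calls into a dll that isn't thread safe is to funnel every call through a single shared lock, so only one call is ever in flight. A bare-bones sketch (Python for illustration only; the vendor call and all names are hypothetical):

import threading

def _vendor_dll_query(instrument_name, command):
    # Stand-in for the actual DLL call -- not a real API.
    raise NotImplementedError("replace with the vendor call")

class AnInstConnection:
    # Serializes access to a DLL that is not thread safe: only one call
    # may be in flight at a time, no matter how many objects exist.
    _call_lock = threading.Lock()          # shared across every instance

    def __init__(self, name):
        self.name = name                   # which physical instrument this object refers to

    def query(self, command):
        with AnInstConnection._call_lock:  # a second caller blocks until the first returns
            return _vendor_dll_query(self.name, command)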
-
This is a pure guess on my part... I suspect the increased autocorrelation time you are seeing in the first configuration also includes time needed to copy the waveform data. What happens if you cut the waveform size in half? Does the time difference also get cut in half?
-
There is no universal "best" way that I'm aware of. There are ways to provide more flexibility to accommodate changes at the expense of writing a little more code and adding more abstraction layers. What's "best" depends entirely on the requirements for your application. (Of course, there are bad implementations that require more code and provide less flexibility, but we'll skip those. ) The answer partly depends on the framework of your existing code and how much refactoring you and management can tolerate.

Given a blank sheet of paper and knowing nothing about your actual physical system, my first inclination is to have each reaction object running in its own thread, either by using parallel loops or perhaps by implementing them as active objects. Then I'd implement an AnalysisInstrumentManager class that can hold n AnalysisInstrument objects, one for each actual instrument in the system. The AnInstMgr class would also run in its own thread. When a reaction needs access to an analysis instrument, it requests one from the AnInstMgr object. If one is available, AnInstMgr sends the AnInst object that refers to the free instrument to the requesting reaction object, which uses it and returns it to AnInstMgr when it's no longer needed. This design gives you a lot of flexibility in the number of analysis instruments you have in your system, frees you from having to create logic to decide which AnInst object to use, and avoids the danger inherent in using singletons.

An unspoken assumption I've made in the above paragraph is that the driver for the analysis instruments supports multiple driver instances for multiple devices. For example, if you are communicating with the instruments via string commands over independent serial ports, no problem. If you have to go through the vendor's dll to talk to the device, this design might not work. Most, if not all, dll's I've worked with handle connections to multiple devices internally within a single instance of the dll. If that's the case I believe you'd need to change the design around a bit, though this is an area I don't have much experience in.

I mentioned private globals and private functional globals in my previous post. Either of those would work if you don't want to connect the class methods to a wire carrying the actual object. (Note that unless you ditch dynamic dispatching you'll still have to wire a class constant to the method.) However, in general I don't think it's a good idea to make your lowest level instrument driver class a singleton. Keep it a regular by-val class. Who knows how you'll want to use that driver class in the future? It's easy to wrap a by-val object in another class to give it singleton behavior when you absolutely must have it, but I don't know of any way to get by-val behavior from a singleton object. I would probably also approach this component from the standpoint of a regular by-val object that is distributed by an object manager class to a requesting client object.

Here's a thought that finally coalesced as I was sitting here thinking about your problem: when using a singleton (or action engine) for your instrument driver class you have to provide lock/unlock methods for those times when a reaction object has to execute multiple steps without interruption. There's always the chance that someone forgot to (or didn't know they were supposed to) lock the resource. This kind of bug could easily go undiscovered for a long time. By distributing the by-val instrument object to the client, as opposed to having the client make calls into a singleton class, you've guaranteed by design that no other reaction object will be able to use that instrument until the first one is finished with it. That design decision just eliminated an entire class of potential bugs. My sense is that this is a pretty significant advantage, but I'll have to ponder it for a while before I add it to my list of best pretty good practices.

"What if..." is an excellent question and is not asked nearly enough! To expand on that thought: what happens if one of them breaks and has to be sent in for repair? Will your software still work with a single instrument? What happens if the instrument is irreparable and you replace it with a different model? What happens when the powers that be decide that 8 concurrent chemical reactions and two liquid handling systems are needed? You probably don't need to build all that flexibility into the system, but you should have a reasonably good idea of how much effort it would take if those kinds of requests come down the pipe, and communicate the design limitations to your customers.
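P.S. The request/use/return idea above, reduced to its bones (Python for illustration only; the names and the blocking behavior are my assumptions about how you would want it to act):

import queue

class AnalysisInstrumentManager:
    # Hands out instrument objects one at a time. A checked-out instrument
    # simply isn't in the pool, so no other reaction object can touch it.
    def __init__(self, instruments):
        self._pool = queue.Queue()
        for inst in instruments:
            self._pool.put(inst)

    def request(self, timeout=None):
        return self._pool.get(timeout=timeout)   # blocks until an instrument is free

    def release(self, inst):
        self._pool.put(inst)

# A reaction object's side of the conversation:
# inst = manager.request(timeout=30)
# try:
#     run_measurement(inst)           # exclusive use guaranteed by design
# finally:
#     manager.release(inst)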
-
Ahh, I missed the part about using it across a network. I have no idea if it is possible to implement a singleton across PCs, as I've never done any networking code in Labview. That said, I suspect that it is possible using the proxy pattern. The idea behind the proxy pattern is that every computer (except one) has a local object they think is the "real" object they are communicating with. However, the real object is actually located on a different computer on the network. The local object is the "proxy" for the real object, accepting all inputs and forwarding them to the real object, and taking the results from the real object and giving them back to the client. All the ugly networking code is hidden inside the proxy instead of the application on the local computer. Applying a singleton to the proxy pattern (I think) is simply a matter of making the real object a singleton. (See below.) All the calls to the proxy will be routed to the real object, which is the central maintainer of the class data. Note this is just an idea... I have not tried to implement anything like this, nor have I spent time investigating the implementation details in Labview. Any input from LOOPies who have experience with OO network code in Labview is appreciated.

Let's ignore the singleton part of the requirement for a moment and focus on the issue of concurrently executing class methods. There's nothing in LV classes, singleton or otherwise, that prevents two methods in the same class from being executed concurrently. The best you can do as an inherent part of class behavior is pause the execution of one method until the other one finishes. Semaphores are the most obvious way of doing this. Create a semaphore in your class' Init method and store it as private data. When a class method is called, the first thing it does is obtain the semaphore; it then releases the semaphore as its last action before exiting. Now, although multiple methods may be entered simultaneously, the guts of the methods are executed sequentially.

Now to the question of singletons. What exactly is a singleton? It is a class in which all the data is shared between all of the class' instantiated objects. In other words, the class data is globally available to any class method (though that does not mean the class data is globally available to any vi in the application). How can we accomplish that? The two most obvious ways are to maintain the class data in a private global (as suggested by AQ recently) or a private functional global. The implementations differ slightly but I don't see a clear advantage to either one. I prefer the global simply because implementing a global is easier than implementing a FG. Note, however, that either solution will require you to implement semaphores if you have methods that do read-modify-write operations as opposed to simple set/get operations. There are probably performance considerations to take into account if you need to squeeze every last ounce of speed out of your system, but in my environment, with pre-emptive operating systems and small data sets, that's not a concern.

I don't think a DVR is the right solution to this problem. Step back for a moment and consider the different types of class data. There is:

By Value Data - Branching a wire and dropping a new object on the bd always create new instances of the data.
By Reference Data - Branching a wire refers to the same data, but dropping a new object creates a new instance of the data.
Global Data - Branching a wire and dropping a new object always refer to the same data.

A class can have any combination of By Value, By Ref, and Global data depending on the needs of that class and its intended behavior. By Val data is easy; it's what everyone is used to. Global data is pretty easy too. (See above.) By Ref data has traditionally been implemented by creating an unnamed queue during the class Init method, putting the desired data on the queue, and storing the queue reference as part of the class cluster. The 2009 DVR provides another way to implement By Ref data in classes.

Using a DVR in the way you describe creates behavior similar to a singleton, as long as all the wires are connected correctly. As soon as some unsuspecting developer drops a class constant with the expectation that it is a singleton, the hilarity begins. IMO, if you need your class to be a singleton, make your class a singleton. It will avoid problems down the road.

Now that I've contributed heavily to the level of hot air in the room, the question posed by AQ is valid and important. Why do you think you need a singleton? If it's mainly a matter of convenience, I'd think long about it. I believe there are valid reasons to implement singletons, but I don't think those reasons are commonly encountered.

---------------------

As always, the opinions expressed here are solely my own and subject to change at any time.

[EDIT - Having reread your original post, I have some thoughts (who would have thought!?) but no time to post them right now.]
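P.S. In case the "shared class data guarded by a semaphore" idea is easier to see in text form, a minimal sketch (Python purely for illustration; in LV the shared data would live in a private global or FG and the lock would be an LV semaphore stored in the class' private data):

import threading

class SharedLog:
    # "Singleton-style" class data: every instance sees the same entries.
    # The lock plays the role of the semaphore for read-modify-write methods.
    _lock = threading.Lock()    # class-level, shared by all instances
    _entries = []               # class-level, the shared class data

    def append(self, msg):
        with SharedLog._lock:                  # obtain the "semaphore"...
            SharedLog._entries.append(msg)     # ...do the read-modify-write...
                                               # ...release on exit

    def read_all(self):
        with SharedLog._lock:
            return list(SharedLog._entries)

# a = SharedLog(); b = SharedLog()
# a.append("hello"); b.read_all()   # -> ["hello"]; both objects refer to the same data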
-
Addressing this comment purely from a philosophical standpoint, you bear much responsibility if the software you write (intentionally or unintentionally) biases the data one way or the other--more so if you lock it up and don't let anyone else review it.
-
Thanks for the information Jim and Bob. (JimBob? ) This is something I did not know and represents another way to protect citizens from overbearing governments. It bothers me that the legal establishment tries to hide this from jurors. I believe the "nullification" part of the term refers to nullifying the (unfair) law the person has violated, not nullifying the jury. I can see why a prosecutor would try to exclude a potential juror who is aware of jury nullification, but I'd be surprised if the prosecutor would ask the question during jury selection. Why bring up an issue that most people don't know about? They might start asking questions... I don't see how someone could be held in contempt for acquitting in those cases though.
-
What a conundrum... the desire to learn about LV Architectures weighs heavily against my hesitation of trusting Google Everywhere. Decisions... decisions...
-
I agree 15 kg is a pretty heavy payload. Our robot is fairly beefy and IIRC has a 5 kg payload. One question to ask the vendors is whether they can support a heavier payload if you use slower speeds with less acceleration. I'm a little fuzzy on this... perhaps someone else can chime in. I believe the motor driver circuits are current limited. If the current exceeds a certain amount the robot halts with a fault. In the motor, the amount of current is proportional to the torque the motor provides. Since torque = moment of inertia * angular acceleration, and the moment of inertia grows with the payload mass, it follows that reducing the speed and acceleration of the robot will increase the payload it can carry. These robots can move very fast. If speed is not a primary concern for your application you may have a lot of room to trade that speed for payload.

Also worth noting is that arm-type robot vendors do not typically quote accuracy specifications, since the error varies depending on the position of all the joints. They do quote repeatability though, and these robots are very good at going to a position you've already defined the joint positions for. What does this mean in real terms? If you define a known position in the robot software and then tell it to move 40 cm in *this* direction, how close you get to 40 cm is undefined. (Though still very good. I'd swag worst-case accuracy at < 1 mm, though I have not specifically investigated it.) However, if you manually jog the robot into the desired position 40 cm away, visually confirm it is in the correct location, and save *that* position data in your software as the second point, it will hit it dead on every time.
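For the curious, the rough back-of-the-envelope version of that tradeoff, treating the payload as a point mass m at radius r from the joint (the symbols are illustrative, not from any vendor spec):

\tau_{\max} \;\ge\; I\,\alpha, \qquad I \;\approx\; I_{\mathrm{arm}} + m\,r^{2}
\quad\Longrightarrow\quad
\alpha_{\max} \;\approx\; \frac{\tau_{\max}}{I_{\mathrm{arm}} + m\,r^{2}}

So for a fixed torque (current) limit, a heavier payload lowers the peak acceleration you can command, and conversely, commanding lower acceleration keeps a heavier payload inside the limit.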
-
Hope you don't mind the late response... been a little pressed for time lately.

The government restricts our right to own and operate certain kinds of arms, but not arms in general. The government does not, and should not IMO, restrict a citizen's general right to own firearms or force citizens to "prequalify" (via certification) before we are allowed to exercise that right. Oddly enough, the early intent was to allow citizens to own weapons suitable for military service. Now firearms that merely look like military weapons are prime targets for the gun-ban crowd.

Grammatically speaking there are several ways to interpret this. To paraphrase the arguments:

1. The first phrase (a well regulated militia...) is the necessary precondition for the second phrase (the right to keep and bear arms...). In other words, the first phrase defines all the valid justifications for the right. When the first phrase is no longer applicable the right becomes invalid. This is a very cause-and-effect interpretation and I believe the one Bob espouses.

2. The first phrase describes one, but not all, of the necessary preconditions for the right. Invalidating the first phrase does not invalidate the other (unnamed) justifications for the right, thus it remains intact. However, if it were possible to discover all the necessary preconditions and invalidate them, the right would become invalid.

3. The first phrase does not describe a necessary precondition at all. It merely gives an example of why the right is included. If it were possible to invalidate all the possible examples of why the right is included, the right itself still remains intact. Revoking the right would require an amendment.

My biases show through... interpretation 3 makes the most sense to me. Why? One reason is that, contrary to popular opinion, the second amendment doesn't actually confer citizens with any rights whatsoever. It prevents the government from infringing on a right that has already been presupposed. However, all of the interpretations have supporters and valid explanations for why their interpretation is the correct one. If people way smarter than me can't agree on the correct interpretation, I'm fairly certain I don't understand all the nuances involved.

I'm not a historian or constitutional scholar, but I don't believe this is correct. The revolutionary war ended in 1783. The bill of rights was finalized in 1789 and ratified in 1791. Fighting the redcoats wasn't the issue. The debate at the time was over the division of power between the federal government and state governments. Some felt there was a need for a national army; others worried about ceding too much power to the federal government. If you look at the bill of rights as a whole, it addresses specific activities an oppressive government might take in an attempt to control its citizens. It, as a whole, acts as a way to limit the power of the federal government and protect the citizens from oppression. It's not about protecting the nation (which at the time was more of a loose collection of independent states) from foreign invaders.

Not true. I couldn't find any Supreme Court cases which even address the question of whether a militia for the common defense is a necessary precondition for the right until the recent DC v. Heller case. (In which the Supreme Court rejected interpretation #1 above.) Nearly all of the cases addressed the issue of whether the second amendment applies only to the federal government or if the states and individuals could violate another person's second amendment rights. Early cases held that the second amendment restrictions apply only to the federal government and not to states or individuals. (The first amendment was also limited to the federal government; however, the other amendments apparently were deemed to apply to the states as well as the federal government. Odd.) Modern (post-1890) interpretations of incorporation imply all the amendments apply to the states as well as the federal government. The question of incorporating the second amendment has not been specifically addressed yet.

Not a convincing one. If the second amendment is indeed obsolete and no longer needed, the correct way to address it is by constitutional amendment, not by creating FUD. (I'm looking at the gun-ban crowd and the Brady Campaign, not at you.)

I fully agree with you here. Perhaps where we disagree is the point at which my right to own firearms infringes on your right to life, liberty, and the pursuit of happiness. I believe that my right to own, operate, and even carry a firearm does not, in and of itself, infringe on your right to happiness. How I use my firearms may infringe on your rights. If I'm in my backyard at 6 am shooting at crows with a shotgun, that infringes on your rights. If I'm using a deer rifle to pick squirrels off my back fence without regard to the mall on the other side of the fence, that infringes on your rights. If I get angry and shoot you, that infringes on your rights. All of those things are already illegal and carry appropriate penalties. Implementing laws to prevent the mere possibility of infringing on another's rights is a very dangerous game--one that is likely to backfire. (Incidentally, this is my main issue with the left in general. They seem to espouse legislation to insulate people from having anything "bad" happen to them. They seem to believe that through the proper legislation they can create an idyllic society where everyone is happy all the time. This is, I believe, a huge philosophical fallacy.)