
Grampa_of_Oliva_n_Eden

Members

  • Posts: 2,767
  • Joined
  • Last visited
  • Days Won: 17

Posts posted by Grampa_of_Oliva_n_Eden

  1. Stepping into the Wayback Machine...

    Years ago, on the Dark-side forum, before LAVA was molten...

    Jim Kring and all of the LAVAites used to answer questions on the Dark-side. Back then each of us had the ability to start new threads not just for Q&A but to discuss. Most people never used that function, but there was one instance where we were anticipating the new "CLA" certification. In that thread we were discussing what WE thought an architect should know. Jim posted a wonderful outline that covered LV and the environment, which I printed out and posted on my cubicle wall to serve as a study outline. It was not long before that thread was deleted. I read it as "we don't want YOU deciding what makes a CLA, thank you, and please ignore the man behind the curtain."

    What the CLA certification became was what NI thought an architect should know. How did they answer that question? Well, they looked at what they taught in their classes and based the testing on those courses.

    If you think NI teaches you everything you need to know about LV, that reasoning can make sense. I don't think they do, so I am of a different opinion.

    So I want to say that Daklu is talking at his level (well above mine), that he may indeed be the "type" of a CLOOPA, and I want to ask for others' opinions related to this idea.

    Craig Larman, in his book "Applying UML and Patterns", stated that an OOP architect has an understanding of 50 or more patterns and can apply them as warranted (paraphrased, of course).

    Q:

    What are the qualifications that distinguish a Certified LabVIEW OOP Architect?

    Ben

    PS: NI can't pull this thread. :rolleyes:

  2. I have never been to NI Week, so I have to ask some potentially silly questions. What exactly is the conference section of NI Week? Is it worth it for someone who doesn't know much about LabVIEW? Here are the registration choices I am looking at:

    Full Conference: Includes three-day conference and exhibition admission, meals, exhibition receptions, and evening events.

    Expo Plus Pass: Includes access to keynotes and exhibition hall plus all meals, receptions, and evening events.

    Exhibition Pass: Includes access to keynotes and exhibition hall only.

    The Full Conference pass is about $900 more than the Expo Plus Pass, so I need to think carefully about whether I should get access to the conference. I'm guessing it is a lot of info sessions. I can attend all three days. Would I need all three days if I were to get the Expo or Exhibition pass (i.e., are there enough keynotes and exhibition booths to keep me from being bored)? Are the receptions and evening events worth the extra $300? Networking is great, but as far as I know at the moment, I would be casually networking.

    If it helps, my goals are to learn more about LabVIEW from the perspective of someone with a couple months of experience with it, as well as to learn how LabVIEW goes from prototype to production level. I will not need to personally develop anything that goes into production, though I will need LabVIEW for prototyping and testing. Any advice on choosing the right pass would be greatly welcomed!

    *Just saw that there's a Sessions Only pass on the preliminary NI Week program (includes sessions, keynotes, etc., but no conference). So now I am further confused about what the actual conference is!

    You will probably be interested in the various sessions and hands-on stuff. You can only get those with the Full Conference pass.

    Lacking that, you will sit through a lot of sales pitches, eat well, party well, and wander around the convention center.

    Ben

  3. Antoine,

    Thanks for the VI. My problem is, I don't want to create an array of the indicators I am using on the front panel. I want my VI to search for the indicator automatically and output its value.

    AFUN1 (input) ---> {SubVI contains a front panel element called AFUN1} ---> (output) value of AFUN1

    -Sharon

    Hi Sharon,

    Could you please stop and tell us the big picture of what you are working on?

    For a thread to go this far without anyone getting close is a sign that the questioner is asking for something that none of us has imagined.

    But now that I brought up imagination... I have this crazy image of you trying to use the controls as if they were a look-up table or similar. If that is really what you want, then you may want to read through this Nugget I posted on the Dark-Side, since it gets you started using control references to navigate into clusters on front panels, etc.

    http://forums.ni.com/t5/LabVIEW/Nugget-Using-control-references/m-p/570756

    Otherwise, add me to the list of those that don't know where to start.

    Ben
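    For anyone who thinks in text languages: the "look-up table" idea above amounts to keeping a name-to-value map instead of searching the front panel for a control by its label. A minimal Python sketch of that idea (the names and values are hypothetical; LabVIEW itself is graphical, so this is only an analogy):

        # A name-to-value look-up table: the text-language analogue of
        # searching a front panel for a control labeled "AFUN1".
        values = {"AFUN1": 3.14, "AFUN2": 2.71}

        def read_named_value(name: str) -> float:
            """Return the value stored under 'name', the way a control-ref
            search would return a control's value by label."""
            return values[name]

        print(read_named_value("AFUN1"))  # prints 3.14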

  4. So it seems the summary is to use a higher version if you can, but 8.2.1 is better than 8.2.

    Also, here are the versions that relate to your original questions:

    - dynamic dispatch recursion (as in reentrant dynamic VIs - LabVIEW 8.5)

    - properties (as in property nodes - LabVIEW 2010)

    - friends and community scope (LabVIEW 2009)

    I believe the error reporting was less than ideal. More recent versions make it easier to figure out what is broken.

    Was that the version where the parent class would break if one of the children was broken?

    Ben

  5. From my boss...

    "

    Once upon a time, in a kingdom not far from here, a king summoned two

    of his advisors for a test. He showed them both a shiny metal box

    with two slots in the top, a control knob, and a lever. "What do

    you think this is?"

    One advisor, an Electrical Engineer, answered first. "It is a

    toaster," he said. The king asked, "How would you design an embedded

    computer for it?" The advisor: "Using a four-bit microcontroller, I

    would write a simple program that reads the darkness knob and

    quantifies its position to one of 16 shades of darkness, from snow

    white to coal black. The program would use that darkness level as

    the index to a 16-element table of initial timer values. Then it would

    turn on the heating elements and start the timer with the initial

    value selected from the table. At the end of the time delay, it

    would turn off the heat and pop up the toast. Come back next week, and

    I'll show you a working prototype."

    The second advisor, a software developer, immediately recognized the

    danger of such short-sighted thinking. He said, "Toasters don't

    just turn bread into toast, they are also used to warm frozen waffles.

    What you see before you is really a breakfast food cooker. As the

    subjects of your kingdom become more sophisticated, they will demand

    more capabilities. They will need a breakfast food cooker that can

    also cook sausage, fry bacon, and make scrambled eggs. A toaster

    that only makes toast will soon be obsolete. If we don't look to the

    future, we will have to completely redesign the toaster in just a

    few years."

    "With this in mind, we can formulate a more intelligent solution to

    the problem. First, create a class of breakfast foods.

    Specialize this class into subclasses: grains, pork, and poultry.

    The specialization process should be repeated with grains divided into

    toast, muffins, pancakes, and waffles; pork divided into sausage, links,

    and bacon; and poultry divided into scrambled eggs, hard- boiled eggs,

    poached eggs, fried eggs, and various omelette classes."

    "The ham and cheese omelette class is worth special attention because

    it must inherit characteristics from the pork, dairy, and poultry

    classes. Thus, we see that the problem cannot be properly solved

    without multiple inheritance. At run time, the program must create

    the proper object and send a message to the object that says, 'Cook

    yourself.' The semantics of this message depend, of course, on the

    kind of object, so they have a different meaning to a piece of toast

    than to scrambled eggs."

    "Reviewing the process so far, we see that the analysis phase has

    revealed that the primary requirement is to cook any kind of

    breakfast food. In the design phase, we have discovered some derived

    requirements. Specifically, we need an object-oriented language

    with multiple inheritance. Of course, users don't want the eggs to get

    cold while the bacon is frying, so concurrent processing is

    required, too."

    "We must not forget the user interface. The lever that lowers the

    food lacks versatility, and the darkness knob is confusing. Users

    won't buy the product unless it has a user-friendly, graphical

    interface. When the breakfast cooker is plugged in, users should

    see a cowboy boot on the screen. Users click on it, and the message

    'Booting UNIX v.8.3' appears on the screen. (UNIX 8.3 should be out

    by the time the product gets to the market.) Users can pull down a

    menu and click on the foods they want to cook."

    "Having made the wise decision of specifying the software first in

    the design phase, all that remains is to pick an adequate hardware

    platform for the implementation phase. An Intel Pentium with 48MB

    of memory, a 1.2GB hard disk, and a SVGA monitor should be sufficient.

    If you select a multitasking, object oriented language that supports

    multiple inheritance and has a built-in GUI, writing the program

    will be a snap."

    The king wisely had the software developer beheaded, and they all

    lived happily ever after.

    "

    Which appears to be a mod of this version.

    I have to wonder if things would have turned out differently if the software developer had not insisted on multiple inheritance.

    Ben

    Yes, that is typically what they say, and as a general rule of thumb they are right. Programmers not trained in software engineering or familiar with OO thinking often create tightly coupled applications. What they should say is, "unnecessary coupling should be avoided." Too much coupling makes it hard to change the software. What they should also say is, "unnecessary decoupling should be avoided." Too much decoupling makes the code harder to follow and may impact performance.

    How much decoupling is "enough"? If you can easily make the changes you want to make, it is enough.

    Thank you.

    I'll quote a buddy's grandmother:

    "All things in moderation, including moderation."

    Ben

  7. I don't know where you are going with this good vs bad thing. Decoupled code is not necessarily good and strongly coupled code is not necessarily bad. No one is trying to make those kind of judgments here. What we are trying to do is define what coupling is and what the benefits and pitfalls are.

    Most of the discussion has been surrounding what decoupling means. In that area, there has been some disagreement but I think we have come to a reasonable conclusion.

    For me, this has been helpful because I can now look at my designs and ask myself: "will decoupling offer any benefits to this project?" and "what steps will I need to take to decouple the UI from the rest of the code".

    Right now I have a strongly coupled design that I would like to change to a client-server architecture in the future, decoupling the UI from the functional code. From this discussion, I know that the best way to approach that is to define an API for the two to communicate that can be passed over a network. Now the fun part starts where I need to figure out how to best achieve that with the tools that LabVIEW offers.

    -John

    I don't know where I am going myself.

    From what I have read on OOP, coupling is something to be avoided (I'll let the experts say why). So if coupling should be avoided, then decoupling appears to be a goal that I should work toward in my designs.

    But a "sanity check" alarm goes off when I try to fit the ideas of complete decoupling and performance into my head at the same time, since there seems to be a contradiction.

    Maybe it's just me being paranoid (again).

    Please don't take offence at my reply. I am just thinking out loud and trying to learn as I go.

    Ben

  8. Hi there,

    I want to use the LabVIEW 8.6 3D Parametric Graph VI to paint two surfaces, but I am having difficulty with the plot ID input. How do I use the plot ID? I can't find detailed information about it.

    thank you!

    maggie

    Go to the NI Discussion forums and search on the phrase "Ben 3D" and you will find many threads about using the 3D graph, with many examples of using the parametric plots.

    Take care,

    Ben

  9. I think this sums it up for me. Especially the bit about a defined interface. That is what I was going for. I would only add that the interface should be able to cross any boundary (interprocess, inter-application, internal network or internet) as dictated by the needs of the user.

    So what about those applications that require more throughput to the GUI than can be maintained using your favorite flavor of network interface?

    I often have to smooth out the data path from the data source to the GUI in order to keep up with the data. Including the overhead needed to realize the connection without direct connects would impact performance to the point where the hardware could not keep up. So if all "good" designs have a decoupled UI, and a decoupled UI has to have cross-platform capabilities, then I suppose I will have to settle for specializing in developing bad code.

    I will not add that to my brag-list but it does keep the family fed.

    And if we run with that twisted idea, then sometime in the future we should expect both the CPUs and the networks to improve their performance, in which case those cross-platform apps would become possible. So then we have code that was previously perfect but unacceptable change into perfect and acceptable... Which really screws with my head, since what is good vs. bad should not change with time, but that is probably the old fogey in me speaking.

    :beer_mug:

    Ben

  10. Amen. :star: You've greatly simplified what I was trying to say. (Wish I had a full-time copy editor to review my posts before I submit them...)

    Extending that thought a bit: since "decoupled" has no concrete, objective definition, it follows that any assertion that a particular component is "coupled" or "decoupled" is based on an arbitrary set of requirements. Unless those requirements are identified and agreed upon, it's unlikely a group of people will reach a consensus.

    Soooo.....

    So this entire thread has no possible conclusion and is just an example of us being herded by a Cat?

    :rolleyes:

    Ben

  11. When you start getting really desperate... if you can, put 8.6 on the old computer, recompile, and see if your app still runs. If it does, the problem might be the new computer.

    Cat called my attention to a subtlety I missed.

    Since the client is now running LV 8.6 and the target is running 8.2, and the changes were added between those two versions... I suspect the target will need to be updated.

    Check that!

    My associates tell me they are serving 8.2 on cFP to 2009 apps on Windows.

    Not sure what to say now.

    Ben

  12. Okay, there's one I'm not familiar with. What's WAG?

    Just to make sure I understand this ('cause I've *cough* occasionally been known to be wrong)... If I have TopLevel.vi which calls SubA.vi which calls SubB.vi, and I put a request deallocation in SubB.vi, then TopLevel.vi has to go idle before the deallocation occurs?

    Tim

    Wild Arse Guess

    Yes, it has to go idle.

    If B was not part of A's hierarchy and was loaded using Open VI Reference, then run using the Invoke Node's Run method... then when it completed, the deallocate would have an effect.

    Ben

  13. Lots of experience. For some reason I keep trying to use it, even though it rarely does anything. As Ben says, the whole calling chain has to go away before it works.

    You're using LV 2010, right? It should be already Large Address Aware, but if you're using 8.6 take a look here.

    Yes, contiguous memory is the problem with large data sets, particularly when you're trying to plot them.

    Before I pull in a large amount of data, I read the available system memory, divide it in half, and then decimate my data to fit in that space. Yes, it's a WAG that even that will be enough memory (not to mention really frustrating to have lots of memory out there and not be able to use it), but it's been working pretty well so far.

    Sounds like the old DOS days, when we had to unload the first half of the program before we could run the second.

    Ben
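    In text-language terms, Cat's decimate-to-fit approach might look roughly like this Python sketch (the psutil package and the exact halving are assumptions based on the description above, not Cat's actual code):

        import psutil  # assumed available, for reading free system memory

        def decimation_factor(n_samples: int, bytes_per_sample: int = 8) -> int:
            """Pick how many samples to skip so the data set fits in
            half of the currently available memory (the WAG above)."""
            budget = psutil.virtual_memory().available // 2
            needed = n_samples * bytes_per_sample
            return max(1, -(-needed // budget))  # ceiling division

        data = list(range(1_000_000))
        decimated = data[::decimation_factor(len(data))]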

  14. I am using VI Server to access controls on a VI running on a remote computer. This interaction is initiated with a call to the Open Application Reference VI, and it returns error 66. Independent of what we do to the VI Server Configuration and Access List (Tools -> Options), the result is the same. I can ping the remote computer. I can open the port (3363) with a telnet command from the command window (telnet 192.168.10.15 3363).

    All this started when I ported the main LabVIEW application (LV82) to a new computer with LV86. The main LV82 application on the older computer can talk to the remote computer (running LV82) just fine.

    We have read some of the info in the NI Forums on server.tcp.acl in the LabVIEW INI file and have changed and played with it every way possible, but cannot resolve the problem. (http://forums.ni.com...2-1/td-p/580394)

    We can run the LabVIEW VI Server example on the remote computer to talk to the main computer, but can't get things to work the other way around.

    After verifying that the target machine's ACL and other VI Server settings are correct in the INI file used on the target, check the port used for TCP and verify the firewall has that port open.

    That is all that comes to mind off hand.

    Ben
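    For the connectivity half of that checklist, the telnet test from the original post can be scripted. A minimal Python sketch (the host and port come from the post above; the function name is made up):

        import socket

        def vi_server_port_open(host: str, port: int = 3363, timeout: float = 3.0) -> bool:
            """Same check as 'telnet 192.168.10.15 3363': returns True if
            a TCP connection to the VI Server port can be opened."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print(vi_server_port_open("192.168.10.15"))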

  15. I shall be interested in how this gets answered. I've put it in code where I knew there was a large amount of memory usage, but I've not seen performance improvements from how I've used it. A contractor we had in put it in every VI he created, but that doesn't seem right either.

    I've had Windows XP terminate a program (happens when a program uses >= 2 GB) before LabVIEW 8.0 could report that it was unable to gather more memory.

    Tim

    I believe that in old versions the deallocate would try to deallocate immediately. In recent versions, the VI hierarchy has to go idle. That is why I mentioned the dynamic loading case, since those VIs go idle when they finish.

    I have busted through the 2Gig limit.

    There is a switch for XP (the /3GB switch in boot.ini) to make the OS aware of the extra memory.

    That message is saying there was not a large enough block of contiguous memory when requested. I got around that by using a bunch of small buffers instead. One possible construct would be an array of queue refs with only a single element in each queue. As long as the OS can find a slot big enough for each queue element, it should work.

    Ben
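    The "bunch of small buffers" trick translates directly to text languages. A minimal Python sketch of trading one large contiguous allocation for many small ones (the chunk size is an assumption):

        CHUNK = 1_000_000  # bytes per buffer; an assumed size, tune to taste

        def allocate_chunked(total_bytes: int) -> list:
            """Allocate total_bytes as many CHUNK-sized buffers rather than
            one contiguous block, analogous to the array of single-element
            queues described above; the OS only needs CHUNK-sized slots."""
            chunks = [bytearray(CHUNK) for _ in range(total_bytes // CHUNK)]
            if total_bytes % CHUNK:
                chunks.append(bytearray(total_bytes % CHUNK))
            return chunks

        buffers = allocate_chunked(50 * CHUNK)  # 50 MB in 1 MB pieces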

  16. Do any of you have experience with the Request Deallocation function? I'm wondering what proper use is.

    One of my beta testers for an application I'm writing occasionally reports out of memory errors. When the application instance reports this, it has usually climbed up to about 800-900 MB of memory.

    I guess I have a few questions with regards to LabVIEW and memory as I've never created any LabVIEW applications that manage large data spaces. Is there a hard limit on what LabVIEW 32-bit is capable of? Am I looking at a 3 GB or so ceiling? I'd never expect to have more than maybe 500 MB of data resident in memory at one time for extreme cases, but the application will be managing gigabytes worth of data space which is cached to disk and can be called up on demand for display purposes.

    I suspect that what I'm really running into is running out of contiguous memory required for allocations, as there is ample memory available on the system. Does requesting deallocation help keep the memory space unfragmented, or is that a function of the OS over which LabVIEW essentially has no control?

    -m

    As I understand it...

    It can be used for those dynamically loaded, number-crunching, memory-pig plug-ins. When you load one of those monsters, it can allocate a lot of memory, and as long as it is resident the memory stays allocated. By using the "Request Deallocation" at the tail end of the plug-in's run, it can try to give back what was allocated while it ran.

    There were other uses in the old days, but gradually they have been downgraded (it doesn't work as well as it used to).

    I think there was also a "Lazydeallocation" switch, but I don't remember if that was something added or taken away.

    Ben

  17. "Thought" isn't necessarily part of the process. It *can* be, but it doesn't *have* to be.

    Incidentally, I discovered there's a name for an idea that is very similar to what I'm describing, though I think I take a slightly different approach to what the article describes. It's called motivational hedonism (MH). Also, I don't think MH can be used to predict an individual's behavior given an arbitrary set of circumstances. In principle it could, but I can't even get a firm handle on my own pleasure/pain equation, much less quantify anyone else's. I suspect MH is better applied to analyzing why an action was taken after it has been done. Finally, I believe MH is correct in that it is consistent and can be used to explain any action, but I don't think it is necessarily the only correct explanation. It's just one way of viewing the reasons why an action is taken.

    On a purely intellectual level, I could follow that line of reasoning right up until I think of my father and my brother. I would have to toss a lot of memories to get MH to fit into my brain without conflict.

    I am firming up a couple of ideas from reading and thinking, and those ideas are not far off from some of the stuff I read coming from Plato.

    It seems we accept that there is a difference between what is perceived to be a hero and what actually is a hero. Looking at the points in MH, there are two things we can look for: the motivation of the act and the results of the act. Since few of us can "search hearts", we have no true insight into a person's motivation (some of us may not recognize what our own motivations actually are) and are limited to evaluating observable actions and results.

    I don't have a point to make, so I'll go find some real work to do now.

    Ben

  18. (Lots of thinking out loud here... read this as ideas I'm floating, not a viewpoint I'm asserting is correct.)

    ...

    Nobody acts unless or until they subjectively evaluate the risk to be low enough. Taking action is a physical manifestation that--for that person--the reward outweighs the risk. The reward isn't necessarily a monetary reward or public recognition, though it may be a factor for some people. These are pleasure increasing rewards. Reducing negative feelings such as guilt--pain reduction--is also a reward. When you remove all the extraneous stuff, the reward is simply a higher score on one's personal pleasure/pain scale. Higher than what? Higher than what it was before? No, higher than the pleasure/pain score of any of the other choices available.

    ...

    I am thinking out loud myself, and thank you for listening in.

    Disclaimer:

    I do not consider myself a hero.

    I have to question the statement that nobody acts. I was sitting in a park eating lunch (babe watching) when I witnessed a purse snatching. About six blocks later I found myself asking the question, "Hmmm, I wonder if this is a good idea?" Others caught him and tackled him to the ground, and I just went back to work.

    I realize that we can argue either side of my motivation, but I tell the story now because it was an example where time did not permit being thoughtful: action was taken and only later considered.

    Mega-dittos on the Proverbs quote!

    Ben

  19. From that point of view, yes, they are both selfish acts. I don't know if removing the anticipated future guilt is enough of a motivation, though, or even a consideration. I don't believe I would feel guilty for not acting on behalf of strangers; I have my own family (read: selfish interests) to look out for. I would certainly act if the risk was low enough (but that either makes me practical or cowardly, certainly not heroic).

    One premise in Atlas Shrugged is that selfishness, although damned by society, is necessary for achievement.

    I recently finished reading Atlas Shrugged and I have to admit that it played a factor in my asking this question and has me questioning myself (am I doing a Hank Rearden, or should I John Galt?).

    I'll set the firefighter example to the side, since there are too many "but what ifs" that go with those stories, and turn the attention back to LV.

    Are those who contribute to the "Greater Good" of the LV community by developing and sharing CR code heroes?

    Similarly, those who post LV-Wikis, etc.?

    Does the answer change if the code or articles are later found to be faulty or lacking?

    Still doing some soul searching, and greatly appreciate your comments and thoughts.

    Ben

  20. @Yair, I actually thought about that thread when I was writing my post. :)

    I looked up the thread but didn't see any references to not wiring the input terminal. That thread focuses on where the terminal should be placed on the block diagram. Any other ideas where it might be?

    I like to sprinkle my code with debug messages to provide an ordered list of the events that occurred during execution. It takes a little more time to set up, but it's saved me loads of time figuring out what went wrong, especially when I've got many parallel threads interacting with each other.

    I can't find that reference either, so please forget what I said until I can prove it.

    Re: messages

    I have re-use code that lets me quickly compose messages that clearly detail how my code was being misused, what they need to do to fix it, and why they should not be knocking on my cubicle and complaining. The only time that has failed me is when a developer failed to include my code in the build. :frusty:

    Ben


  21. With my limited understanding of the compiler, here is the explanation.

    When you wire both terminals (inside the subVI), the data is unchanged. If I don't wire the terminals in the SubVI, it actually changes whenever an error comes in.

    If I don't wire the SubVI in my benchmark VI, a new buffer must be allocated each iteration.

    Anyhow, I'm also more interested in the functionality aspects of error handling in this discussion.

    Felix

    I think Greg McKaskle's explanation from the "Clear as Mud" thread on the Dark-side explains the hit for not wiring the input, under the section where he talks about the default value having to be supplied when not wired.

    I generally look at the code for obvious errors that could occur through misuse by others (or myself), at what the follow-up effect of the code not working would be, and at how difficult it would be to diagnose an error based on the error cluster info. For code that touches a lot of stuff for the first time, I will use nested error clusters so that I can clearly distinguish a file I/O error from a DAQ error that could result from a bad config (a file error) or from the hardware being shut off.

    In the early days of UNIX there was no error recovery or logging built into the OS. The philosophy was "we'll fix the hardware, then start the OS." That left a bad mark on me, so now I "drop bread crumbs" in my code so that I can nail issues if they come up.

    But not all of my code is wrapped in error clusters: number crunching, bit banging, etc.

    I appreciate the report about the performance being about the same between wiring the error through vs. not.

    Even if there were a performance hit that could be measured, I'd still use error clusters for all but the most demanding performance situations. Anybody can drive a car 100 MPH, but to do it safely is another story.

    I once posted here about the "extra inputs" on the icon connector actually incurring a performance hit that could be measured under the right conditions. Even after learning that fact, I still include extra connectors on the icon to make my life easier, even if there may never be a need.

    Take care,

    Ben
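    For readers who haven't lived in LabVIEW: the error-cluster convention being benchmarked above is a pass-through pattern. A hedged Python analogy (the function and its "work" are hypothetical):

        def sub_vi(data, error_in=None):
            """Mimics a subVI with error-in/error-out wired straight through:
            if an error arrives, do no work and pass the error along."""
            if error_in is not None:
                return data, error_in  # pass the incoming error through
            try:
                return data * 2, None  # the "real work" (a placeholder)
            except Exception as e:
                return data, e  # report a new error downstream

        result, err = sub_vi(21)  # -> (42, None)
        result, err = sub_vi(21, error_in=ValueError("upstream failed"))  # error passed through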

  22. Results absolutely matter when applying the "hero" label. Suppose that while driving the burning tanker away from the gas station it exploded next to a school bus full of children? It's doubtful that person would be considered a hero, regardless of his good intentions. At the very least, the results of the action must be no worse than the expected results had the action not been taken.

    Intent/motive is crucial when evaluating a person's character, but I don't think it plays a significant role in determining heroism.

    Before I take that away as a fixed point... should I understand that heroism is not a part of a "person's character"?

    Ben

  23. A related observation about popular media, heroes, and engineers...

    In disaster movies like Mad Max and many others, the protagonist often has to rescue someone from the bad guys. The people they have to rescue are often the heroine or the mad scientist (engineer). So rather than being the hero, they are saved by the hero.

    Ben

  24. "Brave Man drives burning Petrol Tanker away from Neighbourhood" or "Crazy Man drives burning Petrol Tanker spewing burning gas through Neighbourhood"?

    Is heroism determined by

    the results of taking the action,

    or

    the intent when deciding to take the action

    or

    something else?

    Ben
