Posts posted by Barrie

  1. So how is this book related to LabVIEW? Oh, I guess it's not...

    I'm trying to connect this to Jim Kring's "An unfiltered stream of data flow consciousness" but I'm not having much success. :P

  2. Nice Job!

    I haven't seen this one in ages.

    The beauty of this puzzle is that it requires no guessing or permuting; it is 100% logic.

    Now, can anyone write a program to solve it (pure G, of course) without brute-force permutation?

    :blink:

  3. On another note here...

    How many of you are really finding the Navigation window to be a help in your own work? I'm not talking about refactoring someone else's work, but in initial development or maintenance of your own (or your team's) code?

    I've used it once to see what it was, and that's it!

    I have a pretty strict rule regarding diagram size. Going outside the screen size is a pretty solid indication that I'm either taking the wrong approach or I'm being lazy.

    Thankfully, I have never encountered nightmares like Jack Hamilton has.

    (I still think he's making them up.) :laugh:

    Cheers!

    Barrie

  4. Generally, in terms of optics and such, Laser Focus World is an excellent source for manufacturers and suppliers.

    www.laserfocusworld.com

    Their Laser Focus Buyers Guide has just about everything you can imagine.

    Good luck

    Barrie

  5. I think one of NI's biggest mistakes with LV was to allow diagrams bigger than the screen.

    Think how different code would be, if scroll bars on diagrams had never been invented!

    -WDC

    Indeed!

    Sounds like a good wish-list item: if not a hard limit, then at least a warning, perhaps in the VI Analyzer.

    How about a diagram size limit that is password protected and you can't get the password until you pass your ACLD? :D

  6. This question intrigued me. I use the PCT quite a bit, and every function except one just passes data and then executes. The only one that returns data is Get Text Rect, and it is password-protected.

    You can flatten the picture datatype to a string and parse it but as far as I can tell, it just mirrors the last command. This was not an exhaustive test.

    It looks to me like the PCT is actually a wrapper around a DLL. The schema is very similar. If you can schmooze someone at NI to give you a full list of the opcodes and the parameters passed, there might be a "get pen position" function.

    If I learn more, I'll reply further. I'd like to hear if you make any progress.

    Cheers!

    Barrie

  7. 2) You read too much and assume too much into a single line. My statement, quoted by you, applies equally to all products, but your assumption (your preconception and bias?) carries it into your own world. LabVIEW could work elegantly and LabWindows could be a stuffed pig, or vice versa.

    .........

    However, we also have a short turnaround cycle for quite a few products, and I wonder whether LabVIEW could fit that nicely or not. With my understanding of our organization, I have to make a compromise somewhere to have a single platform, so yes, we do use a hammer for everything, provided 80+% of our job involves nails. The rest of those jobs get flattened a bit, but that's the penalty we have to accept.

    (quoting post #5774)

    Thanks for the clarification. As for reading too much or assuming too much, I must say your original conclusion seemed pretty definitive to me.

    .......

    Potentially, LabVIEW could fit, but that depends as much on the culture of your organization as it does on your skill sets. LabVIEW is an excellent tool for rapid prototyping. That is not to say the final product is all chewing gum and duct tape; as the product has become more sophisticated, it has become very worthy of large applications. It also enables you to produce a large amount of eye candy, which leads people to assume that the app is 90% completed even though the underlying code may be a long way off. I mention this because some management types see what they want to see, no matter how many times you underscore the total work involved, and it can bite you badly. :headbang:

    As a development environment, the graphical approach works very well for me. I find it very intuitive, but I knew how to read a schematic long before I knew how to code, or even run a computer. Similarly, debugging is also very intuitive, but that may be due to my experience with a 'scope and a multimeter. Curiously, I have also found that if a VI draws well, it will probably run well. If my VI devolves into a rat's nest, that's a huge warning that I haven't fully thought through my approach and need to backtrack before I get myself in too deep. I intend to run a poll here regarding people's backgrounds, i.e. hardware/software, but I am waiting for the new version of Invision.

    Some people transition well between graphical and text programming, others definitely do not. Some programmers dismiss LabVIEW as "cute". I have seen otherwise intelligent people tell me that "real" programmers only use C, but that's their problem.

    Besides the obvious interface differences, LabVIEW uses the dataflow paradigm, which is inherently multithreaded. Hyperthreading and multicore/multiprocessor machines extend this capability naturally. Priorities, scheduling and threads are easily implemented and modified. In other words, it's more than just a pretty face. The subVI encourages modularity, structure and reusability, and yet graphically shows that function as in-line code for the big picture. The subVI intrinsically provides all the interface for testing the code stand-alone, without the need to add, then delete or comment out, test code and printf statements. Compiling and linking are totally transparent, but this may now be the case for text-based environments as well. All that said, undisciplined or inexperienced wireworkers can produce spaghetti code in the truest sense of the word, as has been discussed on this forum. :) Like any other tool, you need to know how to use it to reap the benefits.

    As you said very well, it all comes down to compromises and fits.

    I hope this helps,

    Barrie

  8. It can also be extremely difficult to upgrade in the future if someone makes a project that's their first project and then leaves it be for several years. The problem is that people usually do that with not enough time and not enough experience, and their coding style and architecture are horrible. Just because LabVIEW is graphical doesn't mean it's self-documenting in the slightest (in my experience, that's the furthest thing from the truth). So be careful if you're giving this to someone as their first project and you or they decide to use LabVIEW. Make sure they get trained--send them to the NI training if necessary, which is really good, both basic and intermediate. Make sure their coding style is going to be readable and usable by someone else with LabVIEW experience. Make sure they're going to use a flexible architecture which will allow for future upgrades, and, more or less, make sure there is someone else to audit their programming.

    (quoting post #5761)

    While all of the above can be true and does provide some good advice, it is irrelevant at best and misleading at worst. As I understand it, this thread addresses LabVIEW vs. LabWindows, and the above statements apply to every software project ever attempted, regardless of the language or methodology.

    While I would not be foolish enough to suggest that LabVIEW is self-documenting, there are inherent procedures and structures that are an intrinsic part of the language. This cannot be said for "C". With "C" you have more of an opportunity for a free-for-all, because "C" is not much more than an assembler with delusions of grandeur. The quality of the code is almost entirely dependent on the skill, experience, organization and discipline of the programmer. It also forces the programmer to address issues that, frankly, I don't believe belong in a language in the 21st century.

    Now, before the flame wars start, assembler and low level languages are fine or even ideal, for many applications. I have probably spent as many hours coding assembler as I have LabVIEW. That said, the difference in productivity is astronomical.

    Comments in this thread imply that the choice between the two products is whether to "C" or not to "C", but I see it as a decision of How Much "C" you wish to use. There is nothing to stop you from using "C" inside LabVIEW, and you also have access to many high level functions that allow you to put together a rich application in minimal time. I should also mention that the final execution speeds are comparable.

    The summation, "The only difference is one elegantly works and the other is like a stuffed pig," made me smile because it is a gross oversimplification that shows no real understanding of the differences, but it does afford you the opportunity to justify your own preconceptions and biases. As I stated above, to conclude that something that relies on "C" is inherently elegant is nothing short of ludicrous. After all, the quality and productivity of the final code is the goal, isn't it?

    LabVIEW, LabWindows, "C", Java et al are all just tools that could be in your toolbox. I try to keep a good selection for different jobs, but if all you have is a hammer, everything starts to look like a nail.

  9. This file should be viewable with Notepad, but it never actually opens. It's too large for Notepad!

    What about saving to binary files? I've thought of this but never implemented it. Any thoughts?

    I'm sorry for the length of my post. 

    Thanks a lot

    Nick The Greek

    (quoting post #5486)

    Don't apologize for the length; most posts contain too little information. :)

    Human readable (ASCII) vs. Machine readable (Binary):

    I started with microprocessors, where an entire program could reside comfortably in 4K. I spent hours squeezing every byte out of the program and packed my data as tightly as it could be. For the time, the effort was appropriate. (Yes, I'm an old fart) :D

    I now manage an airborne data acquisition system that records a lot of data, and I record everything as generic CSV files (tab, space, comma, whatever). I think that the benefit of human-readable data far outweighs any space saving achieved by using a binary format.

    My thinking is as follows:

    - Most, if not all, LV applications are very low-quantity installations; therefore the CPU hardware cost is small relative to the dev. cost.

    - I can buy a 250 GB SATA drive for less than 2 hours of my time, and that price will continue to drop. It won't be long before consumers have terabyte+ machines to store bad home videos and make sure they don't miss their favorite irrelevant soap operas.

    - That recorded data can be read by a human, Excel, LV, and just about every other data-related program. I only have to write the code once and I don't have to worry about other apps interpreting my binary format. (See the sketch after this list.)

    - If my requirements are "on the edge" of what is available when I start the project, the requirements will be more than manageable before the product is finished.

    - I'm all for elegance and efficiency, but within what parameters? Even if there was a possibility that an app I was developing would become a high volume item, I would still use LabVIEW first, as it is the best rapid prototyping system that I have found so far. I would establish the proof-of-concept first and then, if and when the numbers become real, I would then look at optimization.
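
    To make that trade-off concrete, here is a quick sketch (in Python rather than G, with made-up sample data) that writes the same records as CSV and as packed binary, then compares file sizes:

        # Write the same (time, value) records as human-readable CSV and as
        # packed binary, then compare the resulting file sizes.
        import csv, os, struct

        samples = [(t, t * 0.5) for t in range(1000)]   # made-up data

        with open("data.csv", "w", newline="") as f:
            csv.writer(f).writerows(samples)            # readable by humans, Excel, LV...

        with open("data.bin", "wb") as f:
            for t, v in samples:
                f.write(struct.pack("<id", t, v))       # 12 bytes/record, opaque without docs

        print(os.path.getsize("data.csv"), os.path.getsize("data.bin"))

    The binary file is smaller, but only the CSV opens everywhere.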

    Does it bug me that a stand-alone system now has more storage than 500 multi-user systems had 30 years ago? Yes, a little, but I'm getting over it.

    I hope my rant helps a little.

    Regards,

    Barrie

  10. Why in the world would anyone use a regular OR or AND gate? The "trick" you missed might be to just never use them... that's my trick anyway. The easily configurable compound arithmetic is always easier to use, copy and modify.

    (quoting post #5492)

    I think I have to side with Louis here. I still use OR and AND where appropriate. As a graphical language, I want everything to be as obvious as possible for that time down the road when you have to re-examine and modify code. The compound arithmetic is not as visually intuitive.

    I do agree that the system should be smarter: replacing a regular gate should include inheriting the characteristics of the original gate.

    :2cents:

    Barrie

  11. Jim,

    With LabVIEW 7.1, the lvanlys.dll is not 'self contained' as it was previously. It has dependencies on other DLLs (Math Kernel Libraries) that are installed specifically for the CPU type being used (P3, P4, etc.). So you really need to install the LabVIEW Run-Time Engine, not only to get the other DLLs but to get the right ones.

    Kennon

    (quoting post #5460)

    Hmmm...

    Unless I'm missing something, that would imply that the RTE and/or the .exe must need some machine-specific information, during the build, about what sort of machine you want to deploy the app to. If you wrap up the .exe and the RTE together, how does the app builder know what the destination machine is?

    Further, if one makes a single built app for distribution to a number of different machines, that would imply that some of the installs will not work, or not work as well as they could.

    Can anyone clarify this further?

    Regards,

    Barrie

  12. Thanks, Barrie.

    No, I'm not changing the colour depth.

    Yes, I need to change the zoom factor to fit the picture in the box.

    I attach the VI.

    The problem is that GIF files are not supported, and the picture resize doesn't work well.

    Thanks in advance.

    (quoting post #5408)

    I think the problem is with the resizing. Resizing a digital image is not a trivial problem and some methods are more sophisticated than others.

    Even equally sophisticated methods produce different results depending on the image content. For example, highly geometric images (such as a VI block diagram) can end up looking terrible, depending on how the image is resampled.

    The LabVIEW zoom (or resizing) function provides no control over what resampling method is used, so you are stuck with what you have.

    I noticed in your VI that the zoom factor you calculate is just about guaranteed to be non-integer. For example, even though you may be able to fit an image on the screen at 100%, the VI will likely resize it to something like 99.2% or 101.5%.

    Resizing an image to a "nice" number (such as 1, 2, 4, 0.5 or 0.25) can improve the situation significantly, again depending on the resampling algorithm. It's a question of trial and error with different values, and is dependent on the image content.

    Some things you might try:

    - Make a test VI where you can easily play with the zoom factor using a slider and observe the results. This will give you a feel for how the resampling affects the image at different factors.

    Then, in your VI:

    - After you calculate the ideal zoom factor, coerce the factor to something less granular. For example, coerce 105.2% down to 100% and coerce 49.1% up to 50%. (See the sketch after this list.)

    - If you must be able to see 100% of the image, change the size of the picture control to match the zoom factor, not the other way around.

    - If picture quality is very important, you may be forced to use a third-party program to resample the image, but that can be messy and time-consuming.
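
    That coercion step is easy to sketch; here it is in Python rather than G (the set of "nice" factors is just an assumption you would tune by trial and error):

        # Snap a computed zoom factor to the nearest "nice" resampling factor.
        NICE_FACTORS = [0.25, 0.5, 1.0, 2.0, 4.0]   # assumed set; tune as needed

        def coerce_zoom(ideal):
            """Return the nice factor closest to the ideal zoom."""
            return min(NICE_FACTORS, key=lambda f: abs(f - ideal))

        print(coerce_zoom(1.052))  # -> 1.0  (105.2% coerced down to 100%)
        print(coerce_zoom(0.491))  # -> 0.5  (49.1% coerced up to 50%)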

    I suspect that LabVIEW will be improving its graphics capabilities in the not-so-distant future, but for now you have to work with what you have. :(

    Hope this helps,

    Barrie

  13. I use the picture control quite a bit and I haven't noticed any problems.

    Could you be more specific when you say the picture loses quality?

    Some thoughts are:

    -Somehow you are changing colour depth

    or

    -You have a zoom factor that is not 1 (100%)

    It might help if you include a screen shot which shows the picture in both Word and LabVIEW, and perhaps attach the VI itself.

    Good luck.

    Barrie

  14. Alright... I'm starting a post to vet what becomes common wisdom once you've been using LabVIEW forever, but which newbies are sometimes unaware of. The thought behind this post was that I just discovered something "new" the other day that I have never seen mentioned anywhere and had never stumbled upon before. I wound up asking myself, "How come I never knew this?"

    So having found myself in the newbie category once again, hopefully people will share their favorite insights here and maybe this thread will become a quick and dirty list of tips, shortcuts and techniques that can help the newcomer quickly become a masterful LabVIEW programmer!  :thumbup:

    What are your favorites?  I'll start with these:

    - My personal favorite is probably Ctrl+Double-Clicking to go straight to the block diagram

    - My second favorite technique is the LV2 functional global

    - The "new" thing I stumbled across the other day was double clicking the font in the icon-editor.  This brings up a dialog to set the font you want if you don't like the default.

    (quoting post #5297)

    I'll chip in! In the same category, for the icon editor:

    Double-clicking the "Selection Box" selects the entire icon. (+del key =A fast way to delete the Default Icon)

    Double-clicking the "Rectangle Box" puts a rectangle around the icon perimeter.

    Double-clicking the "Filled Rectangle Box" puts a rectangle around the icon perimeter and fills the inside.

    BTW, it wasn't until some months with 7.0 that I discovered the font trick myself, and only after I fought with the itty-bitty default font that came with 7.0 :oops:

    Barrie

  15. In LabVIEW 6.1, I've been trying to enter an exponential function into a string and cannot figure out which character to use to denote the exponent (I've already tried the ^ key, but to no avail). I know LabVIEW has a method for handling exponents, but the program that I wrote calls for a function in a string.

    (quoting post #5254)

    Just enter it the same way LabVIEW displays it in a numeric.

    If I understand what you're asking, it's easily demonstrated by wiring a string control to the Fract/Exp String To Number function and looking at the output.

    1.234e2 (or 1.234E2) results in a numeric value of 123.4.
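
    As an aside, this is plain scientific notation, so the same string parses directly in most text languages as well (Python shown, purely for comparison):

        print(float("1.234e2"))  # 123.4
        print(float("1.234E2"))  # 123.4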

    Barrie

  16. I'm even later to the party, but I want to thank Jason for taking the time not just to write about the "bug" but to provide some very good information about the background and philosophy that goes on. :thumbup:

    I can think of a software company or two that could learn from this kind of responsiveness.

    Thanks!

    Barrie

  17. Here's one that may be a bug, or may be just something to watch out for.

    When updating properties of a table, it takes longer to process those updates depending on what part of the table is showing on the front panel.

    Looking at the attached example, it just populates a table with 1023 rows and 25 columns, sets the row headers, then changes the background color of one of the columns. Setting either the foreground or background color causes the problem. Setting a different property (like Cell Size) does not seem to cause any problems.

    Before setting the color, I turn on "Defer Panel Updates", then set the color and turn off the deferring. Doing this alone causes the color change to take about 7 seconds on my machine if you have the end of the table displayed (scrolled all the way down). Also note that if you are displaying the middle of the table, it takes about 3.5 seconds to update. It seems somewhat proportional to the displayed position. It also has the same effect if you scroll horizontally.

    If you have the start of the table displayed, it takes about 300ms.

    If, in addition to deferring the panel updates, I set the Index value to 0,0, it takes the same 300 ms.

    It has this same problem all the way back to version 6.

    The zip file has 4 versions of the file for LabVIEW 7.1, 7, 6.1 and 6.

    Ed

    (quoting post #4588)

    This is more of a comment than an insightful reply, but if my memory serves me, cell-based operations can be sped up significantly by the following:

    - Set the Visible attribute of the table to false

    - Make all your changes

    - Set the Visible attribute of the table to true

    This may not apply to later versions, but I do remember that this particular object is very graphically dependent. I don't think it is a bug per se, but the code is probably in need of optimization.
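
    For anyone who wants the idea outside LabVIEW, the same hide-batch-show pattern looks like this in, say, PyQt5 (an analogy only, not the LabVIEW mechanism):

        # Suspend repaints, batch all the cell updates, then re-enable
        # painting so the table widget redraws only once.
        import sys
        from PyQt5.QtWidgets import QApplication, QTableWidget, QTableWidgetItem
        from PyQt5.QtGui import QBrush, QColor

        app = QApplication(sys.argv)
        table = QTableWidget(1023, 25)      # same dimensions as Ed's example

        table.setUpdatesEnabled(False)      # roughly: Visible = false / defer updates
        for row in range(table.rowCount()):
            item = QTableWidgetItem("")
            item.setBackground(QBrush(QColor("yellow")))
            table.setItem(row, 3, item)     # color one whole column
        table.setUpdatesEnabled(True)       # one repaint instead of 1023

        table.show()
        sys.exit(app.exec_())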

    Sigh.... Poor NI, :( so much code, so little time. ;)

    Cheers to all,

    Barrie
