
psychomanu

Members
  • Posts: 27
  • Joined
  • Last visited: Never

psychomanu's Achievements: Newbie (1/14)
Reputation: 0

  1. Hello, I would like to see a probe with a timestamp, showing WHEN data passes instead of the data itself, for easy evaluation of the time spent between two points in the code. Or does anyone know an easier way than adding a tick count before and after and calculating the difference? Manu
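For readers outside LabVIEW, here is a minimal C sketch of the tick-count-before-and-after pattern the post describes, using Win32's GetTickCount (the analogue of LabVIEW's Tick Count (ms) function); the timed section is a placeholder:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD t0 = GetTickCount();   /* tick count before the code under test */

    Sleep(250);                  /* stand-in for the code being timed */

    DWORD t1 = GetTickCount();   /* tick count after; difference = elapsed ms */
    printf("elapsed: %lu ms\n", (unsigned long)(t1 - t0));
    return 0;
}
```

Note that GetTickCount has roughly 10-16 ms resolution; for finer timing QueryPerformanceCounter is the usual choice.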
  2. QUOTE (Michael Aivaliotis @ Apr 3 2009, 05:11 PM)
     The design I included was just to illustrate the principle. I'm aware of how to use error terminals to control dataflow. I disagree that stacked sequences are by definition bad practice. There are situations where I prefer a stacked sequence over a bunch of daisy-chained subVIs, especially when dealing with code that I prefer not to put in subVIs, such as property nodes of front-panel items.
     QUOTE (psychomanu @ Apr 3 2009, 11:54 AM)
     Now you need sequence locals for each step, and if you need to insert a step later on you need to rewire things. I use local variables to propagate the error cluster, but that's 'bad practice'.
     QUOTE (Michael Aivaliotis @ Apr 3 2009, 05:11 PM)
     Exactly. This is why you don't use sequence structures.
     With my suggestion this argument against stacked sequences no longer applies, making them perfectly good practice, just like any other stacked structure such as a case structure or an event structure. Anyway, it was not my intention to start an endless discussion; I guess this comes down to personal taste. Thanks for all the reactions.
  3. You're absolutely right. But the very reason a stacked sequence exists is that a flat one sometimes takes too much space, and that is exactly where a shift register would come in handy. Thanks anyway, Manu
  4. I would love to see a shift register on a sequence structure, to propagate an error cluster for example. Now you need sequence locals for each step, and if you need to insert a step later on you have to rewire things. I use local variables to propagate the error cluster, but that's 'bad practice'. Not so with this type of structure (the picture was made in Paint so as not to confuse anyone; it does not exist in LabVIEW as far as I know). A textual sketch of the idea follows below. Looking forward to reactions, Manu
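As a textual analogue of the proposal: threading an error cluster through sequential steps amounts to an error value passed from step to step, with later steps skipped once an error occurs. A minimal C sketch under that assumption (the step functions are hypothetical):

```c
#include <stdio.h>

typedef int err_t;   /* 0 = no error; stands in for LabVIEW's error cluster */

/* Each step receives the incoming error and passes it on, doing nothing
   if an error is already present (standard LabVIEW error-in/error-out style). */
static err_t step(err_t err_in, const char *name, err_t result)
{
    if (err_in != 0)
        return err_in;           /* propagate without executing */
    printf("running %s\n", name);
    return result;
}

int main(void)
{
    err_t err = 0;               /* the "shift register" carrying the error */
    err = step(err, "init",    0);
    err = step(err, "acquire", 1);   /* this step fails */
    err = step(err, "analyse", 0);   /* skipped because err != 0 */
    printf("final error: %d\n", err);
    return 0;
}
```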
  5. Hi again, I posted the same question on the NI discussion forum ( http://forums.ni.com/ni/board/message?boar...ssage.id=136822 ) and they acknowledge that this is indeed wrong behaviour. So be careful when reading values from intensity graphs of large arrays! Manu
  6. Hi, We use intensity plots to display relatively large images (2K x 2K) that are quite noisy in some parts and less so in others. The problem is that the apparent average intensity in the noisy regions is too high. It is a display problem, because it disappears when you zoom in. Just have a look at the included picture, which displays noisy data (random values between 0 and 1). As long as the number of data pixels is low relative to the number of pixels on the screen, there is no problem (left image). When displaying larger arrays you would expect to see roughly the average value (0.5); instead the graph prefers to display the brighter values. This is wrong behaviour IMHO. When an array has to be downsampled for display, it should happen in an unbiased way, not preferring higher or lower values. The IMAQ intensity plot behaves correctly, as does every image processing package I tested. Any comments or solutions? Thanks, Manu
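A sketch of why max-style decimation brightens noisy regions: if each screen pixel shows the maximum of an n-by-n block of uniform noise instead of its mean, the displayed value is biased well above 0.5. The decimation rule the graph actually uses is not documented here; max is an assumption that reproduces the symptom. A minimal C illustration:

```c
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 8   /* assume each screen pixel covers an 8x8 block of data */

int main(void)
{
    double sum_mean = 0.0, sum_max = 0.0;
    int blocks = 10000;

    for (int b = 0; b < blocks; b++) {
        double mx = 0.0, mean = 0.0;
        for (int i = 0; i < BLOCK * BLOCK; i++) {
            double v = (double)rand() / RAND_MAX;   /* noise in [0, 1] */
            mean += v;
            if (v > mx) mx = v;
        }
        sum_mean += mean / (BLOCK * BLOCK);
        sum_max  += mx;
    }
    /* mean decimation stays near 0.5; max decimation approaches 1.0 */
    printf("mean-decimated average: %.3f\n", sum_mean / blocks);
    printf("max-decimated  average: %.3f\n", sum_max / blocks);
    return 0;
}
```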
  7. Thanks for this elaborate answer. We realise we are at the edge, but we had hoped to be able to use at least 2 GB. With the /3GB flag in boot.ini you can tell Windows to allow a single application to grab up to 3 GB (we use a third-party program that actually does this), but with LabVIEW we still get stuck around 1 GB. I guess we'll have to work in chunks of about 700 MB. That said, there is a function described in the Code Interface Reference Manual, called DSMaxMem (and AZMaxMem), that is supposed to return "the largest contiguous block of memory allocatable". However, it always returns zero (we made no datatype error or similar mistake). Do you happen to know why, or perhaps another way of finding the largest possible memory block (without resorting to virtual memory, which is far too slow for what we are doing)? Thanks again, Manu.
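Since DSMaxMem returns zero here, one workaround is to probe from C: binary-search the largest VirtualAlloc reservation that succeeds. A minimal sketch, assuming a 32-bit Windows process (the figure will vary with address-space fragmentation):

```c
#include <windows.h>
#include <stdio.h>

/* probe the largest contiguous block that can currently be reserved */
static SIZE_T largest_contiguous(void)
{
    SIZE_T lo = 0, hi = (SIZE_T)1 << 31;        /* search up to 2 GB */
    while (hi - lo > 64 * 1024) {               /* 64 KB allocation granularity */
        SIZE_T mid = lo + (hi - lo) / 2;
        void *p = VirtualAlloc(NULL, mid, MEM_RESERVE, PAGE_NOACCESS);
        if (p) {
            VirtualFree(p, 0, MEM_RELEASE);     /* release the trial block */
            lo = mid;                           /* mid bytes fit; try larger */
        } else {
            hi = mid;                           /* mid bytes failed; try smaller */
        }
    }
    return lo;
}

int main(void)
{
    printf("largest contiguous block: %lu MB\n",
           (unsigned long)(largest_contiguous() >> 20));
    return 0;
}
```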
  8. Hi, we have an application that needs an extremely large array of singles in memory (array sizes up to 1.6 GB, preferably more; the size varies). With LabVIEW we can fill memory only up to 1 GB (approx. 750 MB array, rest = OS) even though 4 GB of RAM is present. With C we can go up to the Windows-defined limit of 2 GB (approx. 1.8 GB array, rest = OS), so RAM fragmentation is not the cause of the 1 GB limit. Question 1: does anyone know why the 1 GB limit exists in LabVIEW? Question 2: does anyone know a way around it? Question 3: could we do the allocation in C and then let LabVIEW access that memory space? Thanks in advance, Manu.
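On question 3, a common pattern is a small DLL that owns the allocation and exposes alloc/read/free entry points, called from LabVIEW through a Call Library Function Node with the pointer passed around as a pointer-sized integer. A minimal sketch (function names are illustrative, and whether this escapes the 1 GB ceiling is an assumption to be tested):

```c
#include <stdlib.h>
#include <string.h>

/* allocate a buffer of 'count' singles; returns NULL on failure */
__declspec(dllexport) float *buf_alloc(size_t count)
{
    return (float *)malloc(count * sizeof(float));
}

/* copy a chunk out of the big buffer into a LabVIEW-owned array */
__declspec(dllexport) void buf_read(const float *buf, size_t offset,
                                    float *dst, size_t count)
{
    memcpy(dst, buf + offset, count * sizeof(float));
}

__declspec(dllexport) void buf_free(float *buf)
{
    free(buf);
}
```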
  9. Hi, I tried it and it works, but only if .NET is installed. Not everyone has it, so I would rather have a way that works on all common configurations. Is there no API call to kernel32.dll or something similar?
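There is in fact a plain kernel32 call: GetSystemTimes (available from Windows XP SP1) returns idle, kernel and user times, and CPU load over an interval is one minus the idle share of the total. A minimal C sketch of the calculation (the same call should be reachable from LabVIEW via a Call Library Function Node):

```c
#include <windows.h>
#include <stdio.h>

/* combine a FILETIME's two 32-bit halves into one 64-bit tick count */
static unsigned long long ft64(FILETIME ft)
{
    return ((unsigned long long)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

int main(void)
{
    FILETIME idle0, kern0, user0, idle1, kern1, user1;

    GetSystemTimes(&idle0, &kern0, &user0);
    Sleep(1000);                                /* measurement interval */
    GetSystemTimes(&idle1, &kern1, &user1);

    unsigned long long idle  = ft64(idle1) - ft64(idle0);
    /* kernel time already includes idle time, so total = kernel + user */
    unsigned long long total = (ft64(kern1) - ft64(kern0))
                             + (ft64(user1) - ft64(user0));

    printf("CPU load: %.1f %%\n",
           total ? 100.0 * (total - idle) / total : 0.0);
    return 0;
}
```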
  10. Hi all, does anyone know how to get the CPU load without using ActiveX components? There must be some API call for this, but I didn't find it. Thanks.
  11. Hi all, We would like to read TIFF-format image files, but without using the Vision routines, because they require expensive licensing when we distribute our software. Does anyone know where to find such a VI? Thanks.
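One license-friendly route is the open-source libtiff library, wrapped in a DLL and called from LabVIEW. A minimal C sketch of reading scanlines with libtiff (the file path is illustrative, and an 8-bit grayscale file is assumed for simplicity):

```c
#include <tiffio.h>   /* libtiff */
#include <stdio.h>

int main(void)
{
    TIFF *tif = TIFFOpen("image.tif", "r");   /* path is illustrative */
    if (!tif) return 1;

    uint32 width = 0, height = 0;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);

    /* read one scanline at a time into a buffer sized by libtiff */
    tdata_t buf = _TIFFmalloc(TIFFScanlineSize(tif));
    for (uint32 row = 0; row < height; row++)
        TIFFReadScanline(tif, buf, row, 0);

    _TIFFfree(buf);
    TIFFClose(tif);
    printf("read %lu x %lu image\n",
           (unsigned long)width, (unsigned long)height);
    return 0;
}
```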
  12. Hi, I had tried this before, but it requires knowledge of the IP address, which I don't have. But I now realise that I can use 127.0.0.1 as the IP to find the local MAC address, so thanks a lot, Manu
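An alternative that needs no IP address at all is iphlpapi's GetAdaptersInfo, which enumerates the local adapters together with their MAC addresses. A minimal C sketch:

```c
#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>
/* link with iphlpapi.lib */

int main(void)
{
    ULONG size = 0;
    GetAdaptersInfo(NULL, &size);               /* query required buffer size */
    IP_ADAPTER_INFO *info = (IP_ADAPTER_INFO *)malloc(size);

    if (GetAdaptersInfo(info, &size) == NO_ERROR) {
        for (IP_ADAPTER_INFO *a = info; a; a = a->Next) {
            printf("%s: ", a->Description);
            for (UINT i = 0; i < a->AddressLength; i++)
                printf("%02X%s", a->Address[i],
                       i + 1 < a->AddressLength ? "-" : "\n");
        }
    }
    free(info);
    return 0;
}
```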