Everything posted by gb119

  1. QUOTE (Michael_Aivaliotis @ Sep 12 2008, 01:42 AM) The cleanup tool is kind of fun - but I'd really like it if it could be persuaded to work on sub-diagrams. I can't count the number of times that I've written a state machine and, while the basic structure is clean and tidy, there's one case that is a total mess. As things stand, the current clean-up tool will clean up my mess and then (to my eyes at least) de-optimize my state machine. For example, if I'm passing data through the state machine with shift registers, then I've normally set the y position of those shift registers for a reason - I think (without extensive testing) that the cleanup wizard is rather fond of moving and re-ordering them. Short of having the ability to selectively 'lock' BD objects, I'm not sure how one would encompass that sort of style decision in a style schema. I do really like the idea of a code-cleanup tool that could 'learn' what a user thought was good style - I've seen text-programming code prettifiers that inferred correct style from samples - it would be really neat if the code cleanup tool came with several 'style definition' VIs that you could edit into your idea of good style...
  2. QUOTE (Aristos Queue @ Aug 1 2008, 10:20 PM) Though I see that it doesn't seem to handle Variant Attributes yet - which is a shame, as until the To More Specific Class bug gets fixed, variant maps are one of the more efficient maps and it would have been nice to have had an easy way to save them to disk or squirt them over the network in a human-parseable fashion... oh well, next time perhaps ?
  3. QUOTE (wkins @ Aug 11 2008, 01:05 PM) In terms of clarity of coding and most efficient use of memory, transferring data via the normal connector pane and wires is the best way to go. If you have many items of data that are related, consider using a cluster to hold them all together on a single wire and a single input/output terminal. If programming in LV>=8.2 then using a LabVIEW class rather than a cluster has advantages in terms of encapsulating the data and potentially improving maintainability. Passing data between sub-VIs without sending it through the parent diagram is not great style, but you should look at notifiers and queues if you really want to do this. QUOTE (wkins @ Aug 11 2008, 01:05 PM) 2. What are the best most efficient methods of transferring data between separate (parallel) while loops on the same block diagram? Queues are the way to go here. Again, if you are passing large amounts of related data, then define a cluster (look at Type Def controls), create a queue with that type as the element outside both loops and then pass the queue reference to each loop. Queue data in one loop and read it in the other. Have a look at the examples of producer-consumer design patterns to get the idea (there's a rough text-language sketch of the pattern below).
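    As a rough text-language analogue of the queue-based producer-consumer pattern just described, here is a minimal Python sketch; the Sample record stands in for the type-defined cluster and its field names are made up for illustration.

    # Two "parallel loops" sharing a queue: one thread enqueues Sample
    # records, the other dequeues and processes them.
    import queue
    import threading
    from dataclasses import dataclass

    @dataclass
    class Sample:                 # the "cluster" of related data
        timestamp: float
        reading: float

    q = queue.Queue()             # carries Sample records, plus None as a stop sentinel

    def producer():
        for i in range(10):
            q.put(Sample(timestamp=float(i), reading=i * 0.5))
        q.put(None)               # sentinel: tells the consumer loop to stop

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            print(f"t={item.timestamp}: {item.reading}")

    threading.Thread(target=producer).start()
    consumer()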
  4. The University of Cambridge one is almost entirely pointless at 32x20px (what do you expect of an institution that predates the printing press, never mind computers?)
  5. QUOTE (ktvz @ Jul 29 2008, 03:28 PM) Yes, but what the original poster wanted (and I have no idea why, other than to show that LabVIEW can't create 'proper' console apps) was to open a console window and type something like "cat input.txt | foo.exe > bar.dat", where foo.exe was a compiled LabVIEW program. So it's easy to see how to hide foo.exe's GUI, and there are lots of ways to get that GUI to open a window that is a console window or looks like a console window and spit out text, or even to open a console window and read in text that the user types in. In your examples, if you don't create the console window, what happens when you call GetStdHandle ? If it works as above then that's very nice for answering mavericks who want to use LabVIEW to create a text-mode application (a minimal sketch of the behaviour being asked for is below).
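    Purely for illustration, this is the sort of behaviour that "cat input.txt | foo.exe > bar.dat" expects of foo.exe, sketched as a minimal Python filter; it says nothing about how a LabVIEW build or GetStdHandle would actually achieve it.

    # Minimal console "filter": read lines from stdin, transform, write to
    # stdout, so it can sit in a pipeline such as
    #   cat input.txt | python filter.py > bar.dat
    import sys

    for line in sys.stdin:
        sys.stdout.write(line.upper())   # trivial example transformation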
  6. Ouch ! The VariantType utility functions in vi.lib include a VI that will give you the class filename of the class 'on the wire'. Does that do some horrible memory copy as well, and if not, could it be used to test for the 'true' class of the wire (if the testing code had enough intelligence to know that class A.lvclass was in fact a descendant of class B...), or is the problem that you always actually need the wire of the descendant class ? Edit: oops, re-read the original post and realised my question is stupid. It's been a long day.
  7. QUOTE (Rio C. @ Jul 29 2008, 07:50 PM) All of this is sound advice - I find the flow charts of the trigger and arm cycles very useful for these instruments in figuring out how to set things up. Essentially you want to have the 6221 emit a trigger signal after it has set the current, have the 2182 wait for this trigger pulse, and then have the 2182 emit a trigger pulse after it has done the measurement for that point to signal the 6221 to move to the next current step. From memory, the built-in modes that have the 6221 and 2182 work together automatically are rather limited and I'm not sure if you can persuade them to do an arbitrary waveform (you can do arbitrary pulsed measurements and differential conductance linear sweeps and I think a range of straight and log dc sweeps). If you do have to 'roll your own' then you will need to configure the 2182's buffer to match the number of points that you're defining for the source waveform. Also, whilst it is possible to get the 2182 to talk to you via the 6221, it might be easier to just hook both of them up to GPIB. In my experience, the Keithley drivers are not very good (but frankly, most manufacturers' drivers aren't very good !) so writing your own is probably the way to go. If I get a chance I'll put my 6221/2182 drivers up somewhere - although they're a bit complex because they're a LabVIEW OOP hierarchy (e.g. Instrument Class -> DMM Class -> 2182 Class etc). One other comment: you mention wanting to read the current back - the 6221 doesn't actually measure its own current directly. You could arrange to measure it (in principle) by having a standard resistor in series with the current, hooking the second channel of the 2182 across it and measuring the voltage drop across it to get the real current. On the other hand, you could just trust that the 6221 is doing what you told it to do...
  8. QUOTE (Tom Bress @ Jul 29 2008, 02:37 PM) Ah, but light blue or dark blue ?
  9. QUOTE (ASTDan @ Jul 11 2008, 09:10 PM) The term dictionary is borrowed from Python. Other languages would call these data structures hashes or associative arrays or maps. Basically it's an array that stores data indexed by a non-integer key. Typically one would use a string, so rather than having surname[1]="Smith" you would have surname["James"]="Smith". This can make it very easy to construct data storage where items of data naturally have names rather than positional indices (see the short sketch below). I tend to use a dictionary structure to store experimental metadata, e.g. sample temperature, applied magnetic field, instrument settings like ranges, heater powers etc. The OpenG dictionaries are far from the only implementation - in recent versions of LabVIEW the variant attributes can be used to make a reasonably efficient dictionary, as can implementations using single-element queues. There have been several discussions in recent months over the most efficient implementation. Here, for example.
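    Since the term comes from Python, a minimal Python sketch of the idea follows; the metadata field names are made up for illustration.

    # Data indexed by a string key rather than a positional integer.
    metadata = {}
    metadata["sample_temperature_K"] = 4.2
    metadata["applied_field_T"] = 0.5
    metadata["voltmeter_range_V"] = 0.01

    print(metadata["applied_field_T"])     # look up by name, not by index

    surname = {"James": "Smith"}           # the surname["James"] = "Smith" example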
  10. QUOTE (mross @ Jul 3 2008, 06:30 PM) LV 8.5 (PDS anyway) has a point-by-point differentiation VI, although I've never actually used it myself. What I've tended to do is to use a shift register to store the last n data points (and timestamps if the data is not being evenly sampled), using a rotate array and replace array element to insert the new data points, and having initialised the array with the first data point repeated n times. You then feed that to a linear fit and use the slope as the first derivative (see the sketch below). That is more robust against experimental noise than just using (y[n]-y[n-1])/(t[n]-t[n-1]), although it does mean that the derivative lags behind the data by n/2 points, which is a problem if the data changes slope suddenly. You can work around this a bit, at the expense of a noisier derivative, by fitting a higher-order polynomial and using the coefficients of the fit to calculate the derivative. That's broadly what Savitzky-Golay filtering (http://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_smoothing_filter) does.
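    A rough Python/numpy sketch of the "slope of a linear fit to the last n points" estimate described above; the data here are made up, and scipy.signal.savgol_filter covers the higher-order (Savitzky-Golay) variant.

    import numpy as np

    def rolling_slope(t, y, n=5):
        """dy/dt estimated from a straight-line fit to a sliding window of n points.
        Note the estimate lags the data by roughly n/2 points."""
        t, y = np.asarray(t, float), np.asarray(y, float)
        slopes = np.full(len(y), np.nan)
        for i in range(n - 1, len(y)):
            # polyfit returns [slope, intercept] for a degree-1 fit
            slopes[i] = np.polyfit(t[i - n + 1:i + 1], y[i - n + 1:i + 1], 1)[0]
        return slopes

    t = np.linspace(0, 1, 50)
    y = t**2 + 0.01 * np.random.randn(50)    # noisy data, true dy/dt = 2t
    print(rolling_slope(t, y)[-1])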
  11. Hello!

    QUOTE (crelf @ Jun 18 2008, 03:11 PM) Hey ! :thumbup:
  12. QUOTE (Chthonicdark @ Jun 8 2008, 02:49 PM) Ah, so it's a question of: given an array of bytes of data (as a string or a U8 array), how do I turn it into a string of 2-digit hexadecimal numbers separated by spaces ? Something like the attached picture will do the trick (though it does tack on an extra newline marker as well, which you might want to remove) - a text-language version of the same conversion is sketched below. http://lavag.org/old_files/monthly_06_2008/post-3951-1212935196.png edit: the string wired into the delimiter terminal of the Array To Spreadsheet String is a single space character (\s) - should have turned on \ codes display - sorry.
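    For reference, the same conversion sketched in Python; the example bytes are arbitrary.

    data = b"Hello"
    hex_string = " ".join(f"{b:02X}" for b in data)
    print(hex_string)    # 48 65 6C 6C 6F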
  13. QUOTE (jgcode @ May 22 2008, 12:55 AM) This sort of data structure is known as a dictionary or associative array or hash array or map in other languages. There are actually quite a few ways of implementing it in LabVIEW. There's a discussion of the efficiency and speed of a number of these in this thread (http://forums.lavag.org/Map-implemented-with-classes-for-85-t8914.html). My personal preference is to use the attributes of a variant to store the data. Essentially, rather than having two arrays, one of data and one of string keys, you have a single variant on the shift register and then use the Get/Set/Delete Variant Attribute nodes to manage the stored data - this seems to be one of the most speedy mechanisms up to several thousand keys.
  14. QUOTE (prads @ May 21 2008, 06:58 AM) QUOTE (jgcode @ May 21 2008, 07:42 AM) Maybe try index array? Sounds more like Interleave 1D Arrays to me. If the inputs to be multiplexed are all of the same type then you probably want to build them into a 1D array. If each input is a single scalar number then the Build Array node will do this. If the inputs are each arrays, but you want the output to take the first element from each input and then the second and so forth, then Interleave 1D Arrays is what you want. If the inputs to be multiplexed are all of different types, then the Bundle/Bundle By Name nodes will assemble them into a single data structure, although the ordering of the elements is not easily accessible, so it's perhaps not a useful thing to do unless you just want to handle all the items of data as a single entity. (The three cases are sketched in text form below.)
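    A rough Python analogue of the three cases, purely to show the difference in the resulting data; the input values are made up.

    a = [1, 2, 3]
    b = [4, 5, 6]

    # "Build Array": collect the inputs into one array
    built = [a, b]                                          # [[1, 2, 3], [4, 5, 6]]

    # "Interleave 1D Arrays": first element of each input, then the second...
    interleaved = [x for pair in zip(a, b) for x in pair]   # [1, 4, 2, 5, 3, 6]

    # "Bundle": mixed types kept together as one entity (a tuple here)
    bundled = (3.14, "sensor A", True)

    print(built, interleaved, bundled)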
  15. QUOTE (Omar Mussa @ May 16 2008, 04:45 PM) I've been bitten by a couple of class/project file mis-features in LabVIEW 8.2.1 (and, as far as I know, though I've not exhaustively checked, 8.5.1). One I detailed in this LAVAG thread - basically you can lock up a class file so that you need to use a text editor (or revert using your SCC system). This method doesn't rely on LabVIEW crashing and doesn't strictly corrupt the class file - it's all a perfectly legal albeit un-editable class. The second one that strikes with depressing regularity is LabVIEW corrupting the project file. As far as I can work out, the sequence goes like this:

    * You have a project file that includes several files in different directories with the same name - say dir.mnu for example. Some of these directories are not in the LabVIEW search path (e.g. My Documents\LabVIEW Data\....)

    * You move the whole directory structure to a new location using the OS file manager.

    * You try to reopen the LabVIEW project. LabVIEW realises that the project has moved because some files aren't where they were supposed to be, so LabVIEW starts searching for them. LabVIEW picks the first matching name it comes to. With a file like dir.mnu that's not going to be too difficult...

    * LabVIEW repeats this several times for each missing dir.mnu file, each time locating the same wrong dir.mnu, and happily changes the project file.

    * You save the project file and then re-open it. LabVIEW now complains that the project file (or library) is corrupt because it has two or more entries with the same URL.

    The immediate problem here is that LabVIEW doesn't check for duplicate URLs when it 're-finds' a moved entry, thus allowing LabVIEW to corrupt its own project and library files. The underlying problem is that the re-finding algorithm isn't as clever as it needs to be. If the project/library files stored their own save path in the file then they could detect and calculate what the move 'vector' was and then, before searching the standard search path for a matching name, could try looking for a new relative path (the path arithmetic is sketched below). 9 times out of 10 a whole directory structure has been moved, so that should relocate entries correctly. Again, a revision control system which can diff two files quickly puts it right, but it should not be possible for LabVIEW to corrupt its own files except, of course, in the case of a power loss or program crash.
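    A small Python sketch of the suggested 'move vector' idea, with made-up paths; it only illustrates the path arithmetic, not anything LabVIEW actually does.

    from pathlib import PureWindowsPath

    old_project  = PureWindowsPath(r"C:\old\place\MyProject.lvproj")   # path stored at last save
    new_project  = PureWindowsPath(r"D:\new\place\MyProject.lvproj")   # where it was opened from
    missing_item = PureWindowsPath(r"C:\old\place\menus\dir.mnu")      # where the member used to be

    # The member sat at a known path relative to the old project location...
    relative = missing_item.relative_to(old_project.parent)
    # ...so try the same relative path under the new location before
    # falling back to a search-by-name.
    candidate = new_project.parent / relative
    print(candidate)      # D:\new\place\menus\dir.mnu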
  16. QUOTE (Justin Goeres @ May 18 2008, 02:55 PM) I use a 13.3" (1280x800) laptop, which is even a bit smaller than your proposed 15", with an additional external 22" monitor on my desk. The laptop screen does for most development tasks, but the extra real estate is a nice luxury. Still, I tend to feel that if I have a VI whose diagram needs more than 1280x800 it has probably got a little bit out of hand and needs some refactoring
  17. QUOTE (Tomi Maila @ May 15 2008, 07:29 AM) Yes - I hadn't thought through the problem of reentrant dispatch members. My code did handle constructing a reference to the right version of the dynamic dispatch method, BUT only if the method was implemented in all the classes. One could probably write code that traverses the class hierarchy looking for the correct dynamic dispatch method to open a reference to, but it still falls over on reentrant VIs. So, there doesn't seem to be a good way that avoids code in the embedded methods to handle doing the embedding. I think I probably would (and in fact will) pass the reference to the subpanel control into the method and handle the embed and unembed within the dynamic dispatch method.
  18. QUOTE (Dan Bookwalter @ May 14 2008, 09:12 PM) But forgot to tell us the answer Which is yes you can, but it takes a little bit of work to get the right VI reference to insert... The attached project (8.5.1) is a quick hack I did to test what the answer was - the picture below shows the guts of the calling program. The mystery VI that takes a LabVIEW object and returns the class path, letting you build the properly qualified VI name, can be found in <vi.lib>\Utility\LVClass\ Download File:post-3951-1210796693.zip Oh yes, and I should have closed that VI reference....
  19. QUOTE (Aristos Queue @ May 7 2008, 02:58 AM) Why 4 - as opposed to any other small positive integer ?
  20. QUOTE (Sebastian @ May 5 2008, 09:21 PM) In your vi.lib/Utility/VariantDataType folder you will find all kinds of useful VIs for poking around at the datatype of a variant control. Disentangling the datatype of a cluster is a little tricky, since you have to handle things like clusters of clusters and clusters of arrays of clusters.... Attached is a quick hack that will give you a string describing the type of any control wired to the variant, including working down arrays and clusters recursively (the recursion is sketched in text form below). It doesn't handle refnums, but you can get the basic idea... Download File:post-3951-1210079267.vi
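    A Python sketch of the same recursion - walking a nested structure and building a type-description string - using a made-up describe() helper, just to show the shape of the algorithm; it is not the attached VI.

    def describe(value):
        if isinstance(value, dict):                         # "cluster"
            inner = ", ".join(f"{k}: {describe(v)}" for k, v in value.items())
            return f"cluster({inner})"
        if isinstance(value, (list, tuple)):                # "array"
            element = describe(value[0]) if value else "empty"
            return f"array of {element}"
        return type(value).__name__                         # scalar leaf

    sample = {"name": "run 1", "points": [{"t": 0.0, "v": 1.2}]}
    print(describe(sample))
    # cluster(name: str, points: array of cluster(t: float, v: float))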
  21. Firstly, a caution. Most LAVA folks don't take kindly to being asked to do someone's homework assignment for them. However, you seem to have at least made an effort to get somewhere, which is better than most... You might want to think about what the effect of raising 2 to the power of your integer and then looking at the binary representation of the resultant number might be. A handy way of inspecting the bits of the binary representation of an integer is to use the Number To Boolean Array primitive. Then you might want to read up on shift registers and also on boolean operations as a way of latching a boolean value to true. QUOTE (MicrochipHo @ May 5 2008, 05:57 PM)
  22. QUOTE (wan81 @ Apr 28 2008, 02:24 AM) Use a List Folder primitive to get a list of the files in the target directory - assuming that the files are all sensibly named so that they have the same extension, you can get just the files that match the desired extension, and then feed those paths into an image-loading function within a for-loop. Something like this (a rough text-language equivalent is sketched below): Here's the vi in 8.5.1 Download File:post-3951-1209377942.vi
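    A rough Python equivalent of the list-filter-load pattern; the directory name is an example and read_bytes() is just a stand-in for whatever image-loading routine you use.

    from pathlib import Path

    folder = Path("C:/data/images")          # example path
    for image_path in sorted(folder.glob("*.png")):
        raw = image_path.read_bytes()        # stand-in for an image load
        print(image_path.name, len(raw), "bytes")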
  23. QUOTE (Micael @ Apr 25 2008, 02:33 PM) Thanks, I'll fix it in the SVN repository version (http://code.google.com/p/lavacr) later today and release a new package sometime after that.
  24. QUOTE (Tomi Maila @ Apr 24 2008, 01:27 PM) Hmmm, I'm pretty sure I didn't put the global in there... on the other hand I'm also pretty sure I just ignored byte-order issues altogether (the original version of the code read images generated on a Windows machine (from an FEI FIB200 system, for those that are interested) and displayed them on a Windows LabVIEW). Perhaps we should put the TIFF library up on the new LAVA code-repository project on Google Code to keep track of bug fixes to it.... Edit: Now done - in 8.5.x\Machine Vision and Imaging\TIFF File Reader Probably the easiest thing to do is to LVOOP up the internal file processing routines and then the endian-ness can be kept as a per-image setting without globals (a sketch of reading the endianness from the TIFF header is below). I think I can see one bug already: the Get Tag Value has two outputs - single value and multiple value - but for one type of endian-ness the former isn't being set...
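    As a sketch of keeping byte order as per-image state rather than a global, this Python snippet reads the endianness marker from the standard TIFF header ('II' for little-endian, 'MM' for big-endian, followed by the magic number 42 and the offset of the first IFD); the function name is made up.

    import struct

    def tiff_byte_order(path):
        with open(path, "rb") as f:
            header = f.read(8)
        order = {b"II": "<", b"MM": ">"}[header[:2]]      # struct format prefix
        magic, first_ifd = struct.unpack(order + "HI", header[2:8])
        assert magic == 42, "not a TIFF file"
        return order, first_ifd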
  25. QUOTE (Michael_Aivaliotis @ Apr 21 2008, 11:55 PM) Would seem sensible to me. Do we create separate directories for every minor bug-fix version - how big is the binary difference between an 8.5 and 8.5.1 VI ? Is it worth it, to stop the repository filling up with trivial recompile differences ?