
Rolf Kalbermatter


Posts posted by Rolf Kalbermatter

  1. Hi,

    I am trying to access my USB memory device using VISA, but I am facing some problems. Has anybody tried this before? If so, can anybody help me out?

    Thanks

    Ravi

    4496[/snapback]

    Just as Michael said, if it is a USB memory stick, Windows will already install a default driver for it to make it appear as an additional drive. VISA can only access USB devices for which no other driver has been installed yet. If any other driver has already claimed a device, VISA backs off, and rightly so: accessing hardware from two different drivers at the same time is asking for big trouble.

    Rolf Kalbermatter

  2. That would seem a little strange. Also, you should consider that HyperTerminal actually appends a carriage return/line feed to every line automatically (after all, you pressed the Return key to tell it to send the string, and the Return key, at least under DOS/Windows, is equivalent to carriage return + line feed).

    Rolf Kalbermatter

    4554[/snapback]

    Oops this was meant to be a reply to the previous message.

    Rolf Kalbermatter

  3. Just out of curiosity, are you sure there's no need for a termination character? <cr> or <lf> can be appended to your command string if necessary.

    Regis

    4451[/snapback]

    That would seem a little strange. Also, you should consider that HyperTerminal actually appends a carriage return/line feed to every line automatically (after all, you pressed the Return key to tell it to send the string, and the Return key, at least under DOS/Windows, is equivalent to carriage return + line feed).

    Rolf Kalbermatter

  4. Basically LabVIEW can handle hundreds of loops in parallel, and does so with amazing grace. The only thing you have to watch out for is making sure that they are not free running, meaning that each of them contains, at some point, an asynchronous function that limits its speed to what the task actually requires. Asynchronous nodes come in a number of varieties, the obvious ones being "Wait ms", "Wait Until Next ms Multiple", and "Wait on Occurrence", but also the Event structure itself. VISA or TCP functions, and other functions with a timeout input, can be considered asynchronous too in most cases; sometimes it is an option you need to enable on them (VISA).

    The only reason not to use too many loops is that you need to somehow manage them in your application. They need to be started at some point, and maybe you need some synchronization at certain points, even though they run completely asynchronously the rest of the time. Last but not least, they all need to be told to stop gracefully when the user decides to close the application.

    This adds overhead to your programming and also usually makes the application more difficult to understand, and with that, in most cases, somewhat (and sometimes a lot) more difficult to debug.

    An architecture I have found to be very powerful for multi-loop applications is to have each loop use its own queue as command input. This queue is polled inside the loop and decides the next step to execute in the loop's case structure, really resembling a normal state machine; see the sketch below. With some utility VIs you write, you can then send specific commands to a specific loop/state machine from anywhere in your application. You need to be careful, however, to design the loops and their functionality in advance and to adhere to that design at all times. Once you start to mix functionality between loops in the heat of development, you can end up with an application that even you can't understand anymore, not to speak of debugging and maintaining it later on, or, even worse, having someone else have to debug it!
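
    Since LabVIEW diagrams can't be shown in text, here is a rough analogy in C of that queued state machine pattern. The command names and the trivial fixed-size queue are purely hypothetical stand-ins for a LabVIEW queue refnum; in a real loop the dequeue would block with a timeout (the asynchronous function mentioned above) rather than poll.

```c
#include <stdio.h>

typedef enum { CMD_MEASURE, CMD_LOG, CMD_STOP } Command;

/* Trivial fixed-size stand-in for the loop's command queue. */
typedef struct {
    Command items[16];
    int head, tail;
} Queue;

static void enqueue(Queue *q, Command cmd)
{
    q->items[q->tail % 16] = cmd;
    q->tail++;
}

static int dequeue(Queue *q, Command *cmd)
{
    if (q->head == q->tail)
        return 0;                  /* queue empty */
    *cmd = q->items[q->head % 16];
    q->head++;
    return 1;
}

static void state_machine_loop(Queue *q)
{
    Command cmd;
    for (;;) {
        if (!dequeue(q, &cmd))
            continue;              /* a real loop would block here with a timeout */
        switch (cmd) {             /* the case structure of the state machine */
        case CMD_MEASURE: puts("measuring"); break;
        case CMD_LOG:     puts("logging");   break;
        case CMD_STOP:    puts("stopping");  return;   /* graceful shutdown */
        }
    }
}

int main(void)
{
    Queue q = {0};
    enqueue(&q, CMD_MEASURE);      /* any part of the app can post commands */
    enqueue(&q, CMD_LOG);
    enqueue(&q, CMD_STOP);
    state_machine_loop(&q);
    return 0;
}
```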

    Rolf Kalbermatter

  5. From what I have seen in a few minutes, this option reveals a "huge" set of properties and methods on the Property Node and Invoke Node primitives. Whether they are different from those of "SuperPrivateScriptingFeatureVisible", or just arranged in a different manner, I haven't checked.

    I think the topic BD.Open VI property falls into this segment.

    4487[/snapback]

    And it clutters those pop-up menus with so many items that normal work with them in LabVIEW becomes almost impossible.

    Rolf Kalbermatter

  6. From what I have seen in a few minutes, this option reveals a "huge" set of properties and methods on the Property Node and Invoke Node primitives. Whether they are different from those of "SuperPrivateScriptingFeatureVisible", or just arranged in a different manner, I haven't checked.

    I think the topic BD.Open VI property falls into this segment.

    ...all in all, another nice and creative keyname. :rolleyes:

    Could anyone find enough time to try whether combinations of "huge", "mega", "giga", "private", "hidden", "unrevealed", ... resolve to any valid keyname in LV.ini??? :D

    Didier

    4487[/snapback]

    You seem to think we have millions of years on our hands ;-). Honestly, just do an ASCII string search on the LabVIEW executable. Unix has nice tools for that, such as strings and grep!

    Rolf Kalbermatter

  7. Maybe this isn't related to scripting really (at the moment I don't have much time to care or to figure it out personally), but I didn't see any posts on here about it either, which I thought was curious. Has anyone looked at this?

    4474[/snapback]

    It is nice to look at what you get from this if you have a lot of time on your hands! I haven't yet found many reasons to actually use it, especially because using it for production apps might not be such a good idea. As it is all undocumented, NI is free to change this functionality at any time, by changing data types or behaviour, or by removing the functionality altogether for whatever reason, and it won't be mentioned in the upgrade notes at all. So you can end up with some bad surprises when you need to upgrade your app to the next LabVIEW version.

    Rolf Kalbermatter

  8. Thank you very much, Rolf! One question: is it "legit", meaning you got it from some VI on which NI forgot to set a password, or...?

    4206[/snapback]

    No, it isn't from a VI without password. I created it myself. Is that legit? Me thinks so!

  9. Much thanks. Yeah, the problem I was having was trying to match the CRC the VI was giving to a CRC generator written in C by a co-worker. There were complications stemming from the fact that his algorithm was assuming 16-bit words as opposed to the VI's byte-based cycles. Plus, he was running his code on a big-endian processor whereas I was running his code on my PC (little-endian) so there was byte-order switching going on that the VI didn't have to deal with.

    I finally got the VI to output the same CRC the C code was giving after figuring out what the VI was doing.

    Thanks for all the help and example VIs, though. Very useful. :D   :thumbup:

    3114[/snapback]

    If the C code has endianness problems in itself, I wouldn't trust it at all. It would indicate that the code was developed by trial and error rather than by clearly understanding what the algorithm should do.

    Rolf Kalbermatter

  10. I was looking for CRC stuff myself recently. Go to the NI developer section and do a search for CRC. I found between 5 and 8 algorithms/implementations (some may be repeats), as well as an RTF file that explains the whole bit-shifting process and (possibly) how it is simplified in software using a byte-wide lookup table. Search, read, enjoy.

    4403[/snapback]

    Possibly one more CRC algorithm is in the lvzip package in the OpenG Toolkit. It is used to calculate a 16 bit CCITT CRC for the implementation of the MacBinary format; a sketch of that kind of routine follows below. I'm not sure about its correctness in terms of CRC theory, but it seems to do what the MacBinary format requires, whatever that is. Other CRC algorithms might be found in the Info-LabVIEW archives: http://www.info-labview.org/the-archives
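
    For reference, a common bitwise formulation of the 16 bit CCITT CRC (polynomial 0x1021) in C. Whether lvzip uses exactly this variant I haven't verified; the initial value also differs between variants (MacBinary is usually credited with 0x0000, other CCITT flavours use 0xFFFF).

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise 16 bit CCITT CRC, MSB first, polynomial 0x1021. Pass the
   variant's initial value (0x0000 or 0xFFFF) as the 'crc' argument. */
uint16_t crc16_ccitt(const uint8_t *data, size_t len, uint16_t crc)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 0x8000)
                crc = (uint16_t)((crc << 1) ^ 0x1021);
            else
                crc = (uint16_t)(crc << 1);
        }
    }
    return crc;
}
```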

    Rolf Kalbermatter

  11. Hello all, I am using a subVI that produces a byte array. I use "Byte Array to String" and display this value in a string indicator. I also want to take the 1-byte hex number and display its numeric value. For example, 0xFF would produce 255. I tried type casting and using "Hexadecimal String to Number" but it does not seem to be working. Any help would be appreciated.

    4475[/snapback]

    Well, displaying a number in a myriad of formats is just that: display only. The number itself does not change in memory at all. So what can you do?

    1) Instead of displaying the byte array as a string indicator configured to show hex format, you could display the byte array directly: click on the numeric in the array control and select "Visible Items->Radix" from the pop-up menu. Then click on the small d that appears and select the numeric format you want to see. This changes how you see the number in the control but does nothing to the numeric value itself.

    2) Wire the byte array into a For Loop with auto-indexing enabled and use the appropriate string formatting function: either Format Into String with %d or %x as format specifier, or one of the string conversion functions such as Number To Decimal/Hexadecimal/Octal String. A C analogy of this second option follows below.
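
    A rough C analogy of option 2, with made-up sample bytes: the format specifier decides how the same byte value is rendered as text, while the value in memory never changes.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t bytes[] = { 0xFF, 0x10, 0x42 };   /* stand-in for the byte array */

    /* The auto-indexing For Loop with Format Into String, in C terms:
       %02X renders the hex form, %u the decimal value. */
    for (int i = 0; i < 3; i++)
        printf("0x%02X -> %u\n", (unsigned)bytes[i], (unsigned)bytes[i]);
    return 0;   /* prints e.g. "0xFF -> 255" */
}
```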

    Rolf Kalbermatter

  12. They look like two separate multiframe structures to me. I had created them a while back (under 7.0) and they appeared to work correctly, though context-clicking on them back then was a bit dicey (the editor would sometimes crash). The 'Conditional Disable', at least under 7.1, allows selections of Windows, FPGA, PalmOS, RT Engine, Mac, Unix, and PocketPC. The 'Diagram Disable' allows multiple 'Disabled' subdiagrams, and no more than one 'Enabled' subdiagram.

    The 'acid test' for me was sending a VI to Scott Hannahs (the handy LV-on-a-Mac test subject) that did some WinAPI DLL calls in the Windows case, and returned a timestamp through the case border. For the Mac and Default cases, I just wired a constant timestamp (they were grayed out on my Windows machine). Scott reported that his LV editor did try to search for the DLL file when he loaded the VI, but when it gave up, the VI wasn't broken.

    They certainly APPEAR ready for prime time... Just gotta 'Wait for Eight'.

    Best regards,

    Dave

    2562[/snapback]

    I think they need a little more fine-tuning, at least if NI doesn't drop a few platforms before 8.0. For instance, Unix alone is a bit of a broad selector: not everything that works on Linux is necessarily portable to Solaris, for one example. And the attempt to load the DLL on a Mac, and similar issues, should also be eliminated.

    Rolf Kalbermatter

  13. I'm a novice...  :wacko:

    Can you tell me what is the language used to write in script the LabVIEW graphical language?

    Can I code all of my programs with it, like a real program in LV?

    Thanks for your help. :thumbup:

    3709[/snapback]

    Your question is very unclear. LabVIEW itself is written in standard C, and most new functionality since LabVIEW 6.0 has been written in C++. But other than certain paradigms being similar to how they are in C, there is no direct relation between the C language in which LabVIEW is developed and the LabVIEW programming language you are using as a LabVIEW user.

    If you refer to the scripting features which are not yet officially released but discussed quite a lot here: that is not a language in itself, and the term scripting is IMO rather misleading. It is an interface, exposed through VI Server, which gives the user access to the internal LabVIEW object hierarchy. As such it gives a user quite some possibilities, but the LabVIEW object hierarchy is very involved and nested, and programming through this "scripting" interface very quickly gets messy and involved. This is probably one of the main reasons the scripting feature hasn't been released to the public (and one of the first complaints of most people trying to get into scripting).

    Rolf Kalbermatter

  14. I've been waiting for this little feature since LV 4.

    Maybe someone has a good description of the .ico file format, so that we can write a VI to:

    Read a selectable VI icon

    Convert it to .ico format

    Save as...

    With LabVIEW 7.0 this is basically no problem. The functions to deal with .ico files have been available in LabVIEW since about 6.0; check out vi.lib/platform/icon.llb. Those are the same functions the application builder uses to read .ico files as well as to replace icon resources in the built executable. In LabVIEW 7.0 you also have a VI Server method to retrieve the icon of a VI. Together, these two things are all that is needed.

    There are, however, a few fundamental problems. The function to replace icon resource data works directly on the executable image (well, really on the lvapp.lib file, which is an executable stub that is prepended to the runtime VI library and which locates the correct runtime system and hands the top-level VI in that library to the runtime system). As such it can only replace already existing icon resources, as doing otherwise would require relocating the resource table and its pointers, an operation which is very involved and error prone. Windows itself doesn't have documented API functions to store resources into an executable image, as this is functionality not considered necessary for normal applications.

    lvapp.lib contains only 16-color and 2-color icons in the sizes 16×16 and 32×32. Wanting to have other icons would mean first adding those color depths and sizes to lvapp.lib, and improving the icon functions in icon.llb to deal properly with the extra formats. This is not really difficult to do.

    A different problem is that LabVIEW icons are always 32×32 pixels, whereas Windows really needs 16×16 pixel icons too, for display in the top left corner of each application window as well as in the Explorer detail view. For reference, the layout of an .ico file header is sketched below.
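
    A sketch of the on-disk .ico header layout as packed C structs. The field names follow common Win32 usage; treat this as a reference sketch, not NI's code.

```c
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {           /* ICONDIR: start of every .ico file */
    uint16_t idReserved;   /* must be 0 */
    uint16_t idType;       /* 1 = icon, 2 = cursor */
    uint16_t idCount;      /* number of images in the file */
} ICONDIR;

typedef struct {           /* ICONDIRENTRY: one per image */
    uint8_t  bWidth;       /* width in pixels (0 means 256) */
    uint8_t  bHeight;      /* height in pixels (0 means 256) */
    uint8_t  bColorCount;  /* palette size, 0 when >= 8 bpp */
    uint8_t  bReserved;
    uint16_t wPlanes;      /* color planes */
    uint16_t wBitCount;    /* bits per pixel */
    uint32_t dwBytesInRes; /* size of the image data */
    uint32_t dwImageOffset;/* file offset of the image data */
} ICONDIRENTRY;
#pragma pack(pop)
```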

    Rolf Kalbermatter

  15. Excluding very old LabVIEW versions, you can assume that the first 16 bytes of a VI are always structured the same. In fact, any LabVIEW resource file has the same 16-byte header, with 4 of those 16 bytes identifying the type of file:

    52 53 52 43   "RSRC"
    0D 0A 00 03   <version number>; this value since about LabVIEW 3
    4C 56 49 4E   "LVIN" (or "LVCC"/"LVAR")
    4C 42 56 57   "LBVW"

    Anybody recognizing some resemblance to the Macintosh file type resources here? ;-) A sketch of this header as a C struct follows below.
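
    The same 16-byte header written out as a C struct; the field names here are purely descriptive, not taken from any NI source.

```c
#include <stdint.h>

/* The 16-byte header described above; all multi-byte values in the
   file are stored big endian. */
typedef struct {
    char    magic[4];     /* "RSRC" */
    uint8_t version[4];   /* 0D 0A 00 03 since about LabVIEW 3 */
    char    type[4];      /* "LVIN", "LVCC" or "LVAR" */
    char    creator[4];   /* "LBVW" */
} LVResourceHeader;
```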

  16. Callbacks in LabVIEW itself, although possible since 7.0, are indeed an anachronism. But there are situations where it would probably make sense to use them.

    Callbacks in low-level drivers are an entirely different issue. They are one way to let an application use asynchronous operations without having to rely on interrupts and such, which in modern OSes are out of reach of user applications anyhow. For cooperative multitasking systems this is basically the only way to do asynchronous operations without directly using interrupts or loading the CPU with lots of polling.

    Another possibility for handling asynchronous operations on multitasking/multithreading systems is to use events. LabVIEW occurrences are in fact just that. Even though LabVIEW wasn't a real multithreading system from the beginning, for the purpose of its internal diagram scheduling it came as close to real multithreading as it could get.

    Asynchronous operations are indeed inherently more difficult to understand and handle correctly in most cases. Especially in LabVIEW's dataflow world, they can seem to mess up the clear and proper architecture of a dataflow-driven system. But they can make the difference between a slow and sluggish execution, where each operation has to wait for the previous one to finish, and a fast system where multiple things seem to happen simultaneously while a driver waits for data to arrive.

    With the more and more inherent real multithreading in LabVIEW this has become less important, but in my view it is a very good and efficient idea to use the asynchronous operations of low-level drivers if they are available. The way I usually did that in the past is to translate the low-level callback or system event into a LabVIEW occurrence in the intermediate CIN or shared library; a sketch follows below. Of course, such VI drivers are not always very simple to use, and synchronous operations should be provided for the not-so-demanding average user. They can even be based on the low-level asynchronous interface functions if done right.
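
    A minimal sketch of that callback-to-occurrence translation, assuming LabVIEW's extcode.h (which declares the Occurrence type, MgErr, and the Occur() function); driver_register_callback() is a hypothetical stand-in for whatever registration call the real driver provides.

```c
#include "extcode.h"

/* Hypothetical driver API: register a callback fired when data is ready. */
extern void driver_register_callback(void (*cb)(void *), void *userData);

static Occurrence gOcc = 0;

/* Called by the driver, possibly in its own thread, when data arrives. */
static void DataReadyCallback(void *userData)
{
    (void)userData;
    if (gOcc)
        Occur(gOcc);   /* wakes up the Wait on Occurrence node on the diagram */
}

/* Called once from the diagram through a Call Library Function node. */
MgErr SetupAsyncNotification(Occurrence occ)
{
    gOcc = occ;
    driver_register_callback(DataReadyCallback, 0);
    return mgNoErr;
}
```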

    But as long as you stay at the LabVIEW diagram level only, callback approaches seem to me in most cases an unnecessary complication of the design. As you have properly pointed out, having a separate loop in a LabVIEW diagram handle such long-lasting operations is almost always enough.

    That is not to say that Jim's solution is bad. He is in fact using this feature not strictly as a callback, but more like the startup of separate daemons for multiple instances of the same task, a technique very common in the Unix world. In that respect it is a very neat solution to a problem not easily solvable in other ways in LabVIEW.

  17. Basically, during development directories are a lot better. For distribution, LLBs may be handy.

    Library structure

    Pros

    > * Unused files are automatically removed from library at save time.

    This is not true. You have to load the top-level VI and select "Save with Options" to create a new directory structure or library which contains only the currently used VIs. This is the same for LLBs and directories.

    > * Upgrading your software to newer LV versions is easy because libraries can hold all the device drivers as well

    I think you mean instrument drivers. Keeping them in a subdirectory of your project directory would achieve the same.

    * Moderate compression of maybe 10-30%. Nothing really to write home about: hard disks cost dollars per GB, and decompressing also costs performance every time LabVIEW needs to load a VI into memory. For archiving purposes it is a good idea to ZIP up the entire source code tree anyhow.

    Cons

    > * Returning the files to directories can't be done, or at least I don't know how.

    Wrong, as pointed out above.

    > * Referencing a file inside the library might be possible, is it? I think not.

    With VI Server, for sure. If you mean accessing it in Explorer, that is possible too: LabVIEW 7 and later has a feature in Options which installs a shell extension that lets you browse LLB files (and see the VI icons of any LabVIEW file in Explorer).

    > * If naming is not done correctly, it can be a drag to find a certain file from large library.

    Again, with the above shell extension (assuming you use Windows ;-) this problem is eased.

    * Source code control won't work easily.

    Directory structure

    Pros

    > * Easy to group a logical set of files into different directories

    Yes!

    > * With dynamic calls, run time editing is possible

    No difference to LLBs. You just have to consider that LabVIEW handles LLBs like an additional directory level, with the LLB name as the directory name.

    * Source code control works much more effectively

    Cons

    > * Upgrading the LV version also changes device drivers to the new version. They may, but more probably may not, work.

    No difference to using LLBs. You will need to keep the instrument drivers together with your project for that to work, but that is also what happens when they are copied inside the LLB. LabVIEW also has no preference for VIs in an LLB over VIs in a directory. It simply tries to load each VI from where it was last located when the calling VI was saved and, if it is not found, starts to search its standard paths, regardless of whether the VIs are in LLBs or not.

  18. As long as it is about retrieving version information from executables and DLLs, you should look at the GetFileVersionInfo() and VerQueryValue() functions; see the sketch below. I'm not really sure they are in user32.dll; MSDN says they are provided by version.lib, which would usually mean there should be a version.dll somewhere.
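
    A minimal Win32 C sketch using those two functions (they do indeed live in version.dll; link against version.lib). The path is just an example target.

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "C:\\Windows\\notepad.exe";   /* example target */
    DWORD handle = 0;
    DWORD size = GetFileVersionInfoSizeA(path, &handle);

    if (size == 0)
        return 1;                                    /* no version resource */

    void *buf = malloc(size);
    if (buf && GetFileVersionInfoA(path, 0, size, buf)) {
        VS_FIXEDFILEINFO *ffi;
        UINT len;
        /* "\\" queries the root block, i.e. the fixed file info. */
        if (VerQueryValueA(buf, "\\", (LPVOID *)&ffi, &len))
            printf("File version %u.%u.%u.%u\n",
                   (unsigned)HIWORD(ffi->dwFileVersionMS),
                   (unsigned)LOWORD(ffi->dwFileVersionMS),
                   (unsigned)HIWORD(ffi->dwFileVersionLS),
                   (unsigned)LOWORD(ffi->dwFileVersionLS));
    }
    free(buf);
    return 0;
}
```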

    Changing this information on existing files is a VERY bad idea as it will screw up installers which you might try to use to upgrade or uninstall existing applications.

    Accessing property pages of other Explorer namespace objects is very difficult to do without COM (the base of OLE) object handling.

  19. So I guess the ability to add code to custom controls isn't a huge distance away.

    Another idea might be to use subpanels for this. Create the control as a subVI with all the code you feel is necessary, and allow some sort of communication with the main VI, for instance through user-registered events or queues. Then, whenever you need the control, insert it into a subpanel control with a simple call.

    Rolf Kalbermatter

  20. One thing Michael's code allows you to do is add the capability to define file patterns.

    If you pass anything but *.* or leave unwired you will not get back any Folder names.

    Well, that is not entirely true. It is only true because you usually don't have folders with file-type endings; if you name your folders, for instance, Folder.dir, then a search pattern of *.dir will return those folders (see the sketch below). Of course, I agree with you that looking for particular file patterns is a nice feature, and I thought the OpenG version does that also. The problem with this is the requirement of two List Directory nodes for each hierarchy level, which takes almost double the time of a single node.
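
    A POSIX C sketch of why pattern matching alone does not distinguish files from folders: fnmatch() compares names only, so whether an entry is a directory would be a separate check.

```c
#include <dirent.h>
#include <fnmatch.h>
#include <stdio.h>

/* Lists entries in 'dir' whose names match 'pattern'. A folder named
   "Folder.dir" matches "*.dir" exactly like a file would. */
void list_matching(const char *dir, const char *pattern)
{
    DIR *d = opendir(dir);
    struct dirent *e;

    if (!d)
        return;
    while ((e = readdir(d)) != NULL)
        if (fnmatch(pattern, e->d_name, 0) == 0)
            printf("%s\n", e->d_name);   /* may be a file or a directory */
    closedir(d);
}
```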

    I think that with preallocating the arrays and such, one can gain another few percent, but the real time is spent in the List Directory node.

  21. And I'm positive that they actually use some cryptographic algorithm such as MD5 or similar to protect the password, so trying to fake the password is a rather difficult, if not useless, approach; see the sketch below.
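
    To illustrate the general idea (this is not NI's actual scheme, just the usual hashed-password pattern, here using OpenSSL's MD5() for brevity): only a digest needs to be stored, and it can be compared against without ever being reversible into the password.

```c
#include <openssl/md5.h>
#include <string.h>

/* Verifying a password means hashing the attempt and comparing digests;
   the stored 16-byte digest cannot be turned back into the password. */
int password_matches(const unsigned char stored_digest[MD5_DIGEST_LENGTH],
                     const char *attempt)
{
    unsigned char digest[MD5_DIGEST_LENGTH];

    MD5((const unsigned char *)attempt, strlen(attempt), digest);
    return memcmp(digest, stored_digest, MD5_DIGEST_LENGTH) == 0;
}
```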

    The only way I could see is patching the LabVIEW executable by removing the password check altogether. But that is beyond my abilities.

  22. Temperature (MSB)    byte 19
    Temperature (LSB)    byte 20

    "numbers relate to the byte number in the packet"

    These are raw data, and if I want to convert them I have to do some operations on them.

    Your approach will work too, but since your data is really a 16 bit short in big-endian format, you can take advantage of the fact that LabVIEW's flattened stream format is also normalized to big endian.

    Just pick out the interesting bytes and wire them to the Type Cast function in Advanced->Data Manipulation. Wire an int16 constant to the middle terminal, and voilà, you get your numeric 16 bit signed value. This is simple and fast, and it will work on any LabVIEW platform independent of the underlying endianness of the CPU, since LabVIEW takes care of the byte swapping for you (if necessary).

    If the number in the stream were in little-endian format, you would just have to add a Swap Bytes node from the same palette to the wire after the Type Cast. That really swaps the bytes twice on little-endian machines (only Intel CPUs ;-), but it's most probably still faster than doing the byte extraction and joining on your own. A rough C analogy of the whole operation follows below.
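
    A rough C analogy of what the Type Cast does here, assuming bytes 19 and 20 of the packet hold the big-endian signed 16 bit temperature (the function name is made up):

```c
#include <stdint.h>

/* Assembles the big-endian int16 from bytes 19 (MSB) and 20 (LSB),
   independent of the host CPU's endianness. */
int16_t temperature_from_packet(const uint8_t *packet)
{
    return (int16_t)(((uint16_t)packet[19] << 8) | packet[20]);
}
```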
