Khalid
Posts posted by Khalid
-
Salut,
You can check the following example in LabVIEW:
examples\instr\smplserl.llb\Basic Serial Write and Read.vi
This should get you started.
Salut,
-Khalid
-
Hi,
Are you using an ActiveX Control? Which one? Have you tried with any other PPT file?
-Khalid
-
Hi,
This is my typical solution to such problems: download Portmon (or any other such tool) and monitor the actual bytes going out from the HyperTerminal, and compare it with LabVIEW's. You will be able to find some discrepancy this way.
http://www.sysinternals.com/utilities/portmon.html
In Portmon, I find it easier to work if I set the mode to HEX, and filter unnecessary commands and just get the READs and WRITEs.
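To compare the two byte streams side by side, a tiny hex-dump helper (a rough stand-in for Portmon's HEX mode, purely for illustration -- the captured commands below are made up) might look like:

```python
def hexdump(data: bytes) -> str:
    """Format bytes as space-separated hex, similar to Portmon's HEX mode."""
    return " ".join(f"{b:02X}" for b in data)

# Hypothetical captures of the same command from HyperTerminal and LabVIEW:
hyperterminal_out = b"*IDN?\r\n"
labview_out = b"*IDN?\r"   # note: missing the trailing line feed

print(hexdump(hyperterminal_out))  # 2A 49 44 4E 3F 0D 0A
print(hexdump(labview_out))        # 2A 49 44 4E 3F 0D
```

Lining up the two dumps makes a missing terminator or an extra byte jump out immediately.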
Hope this helps.
-Khalid
PS: as for not seeing what you're sending in HyperTerminal, there's a setting to "Echo typed characters locally" under ASCII Setup -- this should do it.
-
-
On similar lines, is there a shortcut to cycle through the Error list? I sorely miss this in situations where I have to go back to the Error list and double-click the next error to go to it. A shortcut -- similar to Ctrl+G for Search results -- would be nice.
-Khalid
-
I fully agree with Michael. However, if you're stuck with the Multicolumn Listbox for now, create a Property Node for it, select the 'ItemNames' property, and wire in an empty array. That should clear it.
Regards,
-Khalid
-
Very interesting!
Do you know of any efforts to get SQLite to compile on RTOSes like Phar Lap or VxWorks?
-Khalid
-
EasyDAQ also has some inexpensive serial-based DAQ cards.
Not sure if Comedi has drivers for them though.
-Khalid
-
Hi,
In 7.1.1, should we be able to create executables when using scripting nodes? I get the following error when I try to run the executable created this way:
"This VI is not executable. The full development version of LabVIEW is required to fix the errors."
If I remove those VIs which have scripting nodes, the executable works just fine. I even tried placing the SuperSecret... entry in the ini file for the built app. Doesn't help.
Any ideas?
-Khalid
-
Did this work for you? Are you able to successfully communicate now?
Regards,
-Khalid
-
> When I connect pin 2 of my RS232 cable to pin 2 of my multimeter my program receives data!!!
Joost,
If connecting Pin 2 to 2 works, then you probably need a "straight" serial cable, as opposed to a null-modem serial cable where the RX and TX pins are crossed.
Regards,
-Khalid
-
Catfish wrote:
> Did that answer your questions?
Yes; thank you!
-Khalid
-
Hi Catfish,
Imo, 1000 IO points is a LOT for data sockets. If the OPC server is fast enough, the DataSocket server *might* keep up, but the CPU load would probably be pretty high. The 6.1 DSC engine was fast enough and quite efficient (for "memory tags"), but it's the OPC server and the PLC(s) that may not be fast enough.
Just wanted to make sure I understood your setup correctly: you used Memory tags in DSC to talk to the OPC Server? I am guessing via DataSocket. Why not create tags directly off the OPC Server? DSC, as an OPC client, can browse OPC Servers and lets you create tags directly off of them. Or is this what you did? Then why use Memory tags?
Thanks in advance for the clarification.
-Khalid
-
LabVIEW is sending out 4 bytes because you are converting a U32 (a 32-bit number) to characters.
To fix this: right-click on the constant "25," select Representation and then U8. This will then output just one byte, i.e. Length = 1.
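The same behavior can be seen outside LabVIEW. As an illustration using Python's struct module (the big-endian byte order here is an assumption, chosen only to mirror LabVIEW's default flattening):

```python
import struct

# A 32-bit number flattened to a string produces four bytes...
as_u32 = struct.pack(">I", 25)
print(len(as_u32))  # 4

# ...while the same value as a U8 produces exactly one byte.
as_u8 = struct.pack("B", 25)
print(len(as_u8))   # 1
print(as_u8)        # b'\x19'  (25 decimal)
```

So the instrument was receiving three extra bytes of padding along with the value it expected.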
Hope this fixes it.
Regards,
-Khalid
-
Can you get valid data with HyperTerminal? You will want to give that a try first. And when that works, clone the Serial port settings in LabVIEW.
Good luck!
-Khalid
-
Hi,
If even HyperTerminal doesn't work, then the issue is not specific to LabVIEW; it is more likely hardware related.
One thing that you can try is: download Portmon (or any other such tool) and monitor the actual bytes going out from the IAI, and compare it with LabVIEW's. You _may_ be able to find some discrepancy this way.
http://www.sysinternals.com/utilities/portmon.html
Hope this helps.
-Khalid
-
H4med,
I would like to know what is the "HScroll1.Value" in your VB program.
From the statement "MSComm1.Output = Chr$(a)" it appears that you are just converting "HScroll1.Value" to a character and writing it to the output. In LabVIEW, as you can see in the VI I attached earlier, we are doing the same thing: taking a number, converting it to a character, and writing it out. If this doesn't work, then the problem is in which numbers we are converting -- in the VB program versus LabVIEW -- before converting them into characters and writing them out.
I strongly recommend that you download Portmon (or any other Serial port monitor) and monitor the actual bytes going out from your VB program, and compare it with LabVIEW's output:
http://www.sysinternals.com/utilities/portmon.html
Hope this helps.
-Khalid
-
In your first post you said you wanted to send 1 byte out. This translates to one character. I am attaching the modified VI that uses a U8 and type casts it to a character. Try it out.
What are you sending out from your VB program? Can you attach the relevant code snippets? Or maybe some documentation/manual for your PWM device.
-Khalid
-
Without knowing the details of your setup, the first thing that comes to my mind is: why not use the Select function (from the Comparison pallette) and pass out an empty array (constant) when the checksum fails?
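In text form the idea is just a conditional between the parsed data and an empty array. A sketch, with a made-up checksum scheme (last byte equals the modulo-256 sum of the payload -- purely illustrative, not your real protocol):

```python
def validate(packet: bytes):
    """Return the payload if the checksum passes, else an empty array.

    Assumes the last byte is the modulo-256 sum of the payload bytes;
    swap in whatever checksum your device actually uses.
    """
    payload, checksum = list(packet[:-1]), packet[-1]
    ok = sum(payload) % 256 == checksum
    # The equivalent of wiring the payload and an empty array into Select:
    return payload if ok else []

print(validate(bytes([1, 2, 3, 6])))  # [1, 2, 3]
print(validate(bytes([1, 2, 3, 9])))  # []
```

Downstream code then just sees an empty array for bad packets and needs no special casing.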
-Khalid
-
Depends on how serious your project is. Here you will find a LabVIEW toolkit for biometric authentication -- face and fingerprint recognition:
http://www.jyestudio.com/biometricsview/products.shtml
Regards,
-Khalid
PS: I haven't personally used this toolkit, but I have heard it is of good quality.
-
One byte should be just one character, right? You will want to make the number's datatype U8 (not double or single, as you have on the diagram). Then type cast it to a string (you will find the Type Cast function in the Advanced >> Data Manipulation palette).
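Outside LabVIEW, the same U8-to-character cast is just a one-byte conversion. In Python, for instance (the value 65 is an arbitrary example):

```python
value = 65  # a U8 in the range 0..255

# Casting a U8 to a one-character string yields exactly one byte:
one_byte = bytes([value])
print(one_byte)       # b'A'
print(len(one_byte))  # 1
```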
Let us know if you still have issues.
-Khalid
-
Hi Tim,
Are you interested in (computed) tags from Wonderware, or IO tags from RSLinx? If it's the latter, and if the RSLinx is an OEM version (or better), then it is an OPC Server by default. You could browse to it using the front-panel DataSocket item browser.
If you _have_to_ use DDE, try the following in Excel (first, on the same machine as RSLinx):
Cell formula - "=RSLinx|topic!address"
E.g.: "=RSLinx|PLC1!N7:0"
If that works, you should then try the NetDDE in Excel. And then in LabVIEW.
If it's the former, i.e., computed tags from Wonderware itself, maybe you should contact Wonderware to get the DDE Application, Topic, Item, etc. (I don't think Wonderware/InTouch is an OPC Server by default... at least it wasn't a few years back).
Hope this helps.
-Khalid
-
There was discussion on this issue a few months back on the Info-LabVIEW list. Here's what NI R&D had to say about it:
Stephen Mercer to PJ, info-labview, me (Aug 1, 2005):
PJM wrote:
> Khalid & all
>
> > Also, another issue I have noticed is with enum-typedefs in an array
> > constant. It appears that whenever the typedef is updated, the array
> > constant resizes itself! This is in 7.1.1.
>
> I did notice this myself (on arrays of strict typedef enums), and this
> is really annoying.
Annoying... except for those people who want their arrays to resize.
"Whether we should update the array" is one of those areas where every programmer is going to have his or her preference, but that preference is going to vary, possibly on each individual enum array. We have one developer on the LV R&D staff who is particularly studying typedef updates, trying to pull off the neat trick of "reading the user's mind." But, as you might imagine, the problem is fairly complex. The array has a list of current values. The enum defines a list of all the possible values. Should the minimum size be
a) the text size of the currently visible array element(s)
b) the text size of the longest array element
c) the text size of the longest enum element (even if that element isn't currently in the array)
d) no minimum bound
Let's say we allow (as we do today) arbitrary sizing of the enum within the array. If the array's enum display has been manually resized smaller than the array's longest current element, then it is probable that the user doesn't want it to update when the elements change. But what if they edit the array such that all the elements fit in the current resize area, and then edit the elements a second time such that one element doesn't fit? Should the array now grow? Is an edit to the text of the enum (which applies itself to the array) a more or less important edit than editing the values of the array? What if this user is one of those programmers who shrunk the array down so the text is only as large as the particular element currently visible in the array? These programmers probably want it to resize when they change the value, not just when they update the typedef. The problems have more variations and complexities if not only is the enum a strict typedef, but the array of enums is itself a strict typedef.
After three versions of LV with typedefs on the diagram, we are finally getting enough data feedback to make some better decisions about when/how to update instances. But some very fine lines exist between the desired behavior and the undesired behavior, and trying to store enough data about each instance to make that judgement is non-trivial. At some point in the future, this will probably improve. I'm posting this info as a way of highlighting the complexity of this issue. Even though some of the desired behaviors posted in this thread (and earlier threads on info-LV) would be easy to implement, those changes often conflict -- in surprising ways -- with other desired behaviors.
Pojundery,
Stephen R. Mercer
-= LabVIEW R&D =-
-
Kawait,
This is not a direct answer to your question, but is there any particular reason you are using LLBs? Other than for grouping your VIs?
LLBs are not recommended for development these days. You will want to go through the following two threads for further details:
http://forums.lavausergroup.org/index.php?showtopic=355
http://forums.lavausergroup.org/index.php?showtopic=2118
Regards,
-Khalid
-
Posted in "my ADO-Toolkit for LabVIEW 8" (Database and File IO):
I believe the palette size (number of columns of VIs) actually depends on the filename length of the VIs on the palette: the longest VI name determines how wide the palette will be. Like this:
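As a rough restatement of that rule (the VI names below are made up, and this just models the observation, not LabVIEW's actual layout code):

```python
# Hypothetical palette contents; the widest entry sets the palette width.
vi_names = [
    "Open.vi",
    "Close.vi",
    "Basic Serial Write and Read.vi",
]

# The palette ends up as wide as its longest filename.
palette_width_chars = max(len(name) for name in vi_names)
print(palette_width_chars)  # 30
```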
-Khalid