Posts posted by shoneill

  1. QUOTE(tcplomp @ Nov 14 2007, 04:50 PM)

    Well, this is the Code Repository Support forum, so I guess Eugen is asking how long it takes between a submission and an approval.

    In my experience it took between 1 day (the Code Capture Tool, which was very mature) and about 3 weeks.

    When your submission is rejected because of style or something else and you resubmit a better version, the CR approval team is not notified, so you should do that by hand.

    Ton

    Well, I generally follow links from the front page of LAVA listing the most recent posts. As such, it wasn't obvious to me at the time that the post was in the Code Repository Support forum. That would explain things. Thanks.

    I'll pay a bit more attention in future :oops:

    Shane.

  2. I second that with one small addition.....

    I've often wondered why I need so much screen space when programming ActiveX. I'd love to have an ActiveX text structure which accepts some ActiveX reference as input and from that point allows me to address the objects and methods in text mode. Just imagine how much easier this would make ActiveX programming in LV....... Of course, method and property prompting would be required based on the objects in use.

    I'd very much welcome something like this in general. Like an advanced Formula Node.....

    Shane.

  3. QUOTE(Jeffrey Habets @ Nov 8 2007, 12:55 PM)

    Hi guys,

    I have the habit of always labeling my shiftregs on the left hand side of the loop they're on. I use free labels for that, but it would be nice if the labels were (optionally) created automatically (like with objects on the FP). They would be attached to the shiftreg, moving with it when the shiftreg is moved up or down.

    Besides being a handy feature for myself, it would probably also encourage others who don't label their shiftregs now to do it anyway, because it would be the next logical (more or less forced) step after creating the register.

    What do you guys think?

    Yup, sounds like a good idea with solid reasoning.

    I look forward to this in future versions......

    Thanks

    Shane.

  4. QUOTE(Norm Kirchner @ Nov 8 2007, 05:46 AM)

    I would have to say that the parent will never adopt the better practices of the child as the parent will see the innovative things that the children do as cute and truly excessive as all parents know everything and need to never adapt and grow and change. That would be just silly. Heck, if a parent actually changed because of the child... shit... the universe would swallow itself whole. No.... no.... it's much better if the natural order stays the same and the world can only get better by waiting for the old generations to die off and let the new generations fix their mistakes and eventually not change for the following generation.

    ..... i have issues

    You have issues? To me it sounds like you have KIDS!!!

    Shane.

  5. QUOTE(ttkx @ Nov 2 2007, 02:16 AM)

    If you want to send "GET DEVICE DESCRIPTOR", you can use a setup packet with a standard request. Sure, via CONTROL transfers. I guess you already created a driver with the VISA Driver Development Wizard.

    ttkx,

    Perfect! This is exactly what I was looking for.

    I'll be able to use this as a proper starting point to find out how the USB commands as described in the USB spec are to be handled within LabVIEW.

    Judging by the values you're feeding to the "VISA USB Control in", I have clearly misunderstood some of the parameters within the USB spec.....
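
    For anyone else who lands here: the fields of a standard GET_DESCRIPTOR (DEVICE) setup packet, per chapter 9 of the USB 2.0 spec, look roughly like this in C. I believe they map one-to-one onto the inputs of "VISA USB Control in"; this is my own sketch, not NI code:

        /* Standard GET_DESCRIPTOR (DEVICE) request, per USB 2.0 spec ch. 9 */
        typedef struct {
            unsigned char  bmRequestType; /* 0x80: device-to-host, standard request, to device */
            unsigned char  bRequest;      /* 0x06: GET_DESCRIPTOR */
            unsigned short wValue;        /* 0x0100: descriptor type DEVICE (1) in high byte, index 0 in low byte */
            unsigned short wIndex;        /* 0x0000: unused for a device descriptor */
            unsigned short wLength;       /* 18: length of the device descriptor in bytes */
        } usb_setup_packet;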

    Shane.

  6. QUOTE(Tom Limerkens @ Nov 1 2007, 03:03 PM)

    Thanks Tom, but I've already done that. The articles are very superficial at best. What I need is a SINGLE wired-up example of a standard USB CONTROL communication (i.e. GET DEVICE DESCRIPTOR).

    The example you linked is only for BULK transfer. I need CONTROL transfers to know how to wire up the "VISA USB Control in" and out functions.......

    Shane.

  7. I've been trying to get some basic USB communication working under LV 8.2.1.

    I've created and installed the custom system driver generated by the driver wizard, and everything seems to be OK there.

    I've also downloaded and read up on the USB specification a bit.

    I thought I'd try communicating with the device by performing a "GET_DEVICE_DESCRIPTOR" request, which, according to the USB specification, the instrument must respond to at all times.

    I have wired up the functions "VISA USB Control out" and "VISA USB Control in" as I think they should be wired, but it's not working, plain and simple. I'm getting "Invalid buffer mask" errors for the settings I thought were correct for the communication.

    Using a USB sniffer, I see that I am able to provoke the communication, but I'm not able to get any data back. I'm also getting some unexpected error messages.

    I've already posted here on the NI forum.

    Could anyone here please help me out with what surely can't be too difficult a task? I AM a total beginner with USB communication, so please go easy on me :P

    Shane.

  8. QUOTE(Justin Goeres @ Oct 17 2007, 06:08 AM)

    Nor am I, but I think that's slightly tangential to the question of how to increase LabVIEW's mainstream acceptance. There are hordes of people (kids and adults) out there who cut their teeth in programming on whatever they can get for free. Heck, I didn't even (directly) pay for my initial LabVIEW exposure years ago -- it was just part of the curriculum in a lab course I taught.

    I don't think LabVIEW will ever be "free" (beer or speech), and FWIW I don't think it necessarily ever needs to be. But the fact is that you can learn a tremendous amount of C/C++/Java/PHP/Perl/god-knows-what-else without shelling out any extra money*. For LabVIEW, the barrier to entry starts in the $1200 range and goes up from there. For those of us who use it as a professional tool, the value is apparent. But for the kind of people who should be being taught to think in G (to coin a phrase), that's a gigantic difference.

    *Beyond the cost of your computer and your internet connection, which are practically like air and water for geeks ;) .

    How 'bout LabVIEW 6.1 for FREE. At least for Mac or Linux. If you can find a copy of 6.1 which was included on the cover DVD of a certain C't magazine (Germany), you can have Windows too.

    Register here.

    Shane

  9. QUOTE(eaolson @ Sep 26 2007, 09:40 PM)

    Yes, that works, but only if the loop adds the same number of elements each time. It wouldn't work for this (pardon for not doing this in LV):

    array a = {};
    for (i = 1; i <= 3; i++) {
        for (j = 1; j <= i; j++) {
            append j to a;
        }
    }

    This creates the array, a = {1, 1, 2, 1, 2, 3}.

    Aaargh, there's some strange inscriptions there. I know they mean something...... Just... Can't... Figure... it.... out.....

    Seriously, it certainly makes a difference if each iteration adds a different number of elements...... Good catch.

    Shane.

  10. QUOTE(Justin Goeres @ Sep 26 2007, 02:57 AM)

    Thanks for the link. I've been reasonably happy with Keyspan over the years, but a USB->RS232 adapter is one of those things that sometimes you just need to pony up and buy the really good one, so you can stop worrying about it. Perhaps it's time to step up and get a new one.

    Yup, I'd have to agree with that one. I mentioned it because the Moxa series "should" be really good, but I had some problems with them. They have since released a new line, so perhaps they work better, I dunno.

    If it would be of interest to you, I could try out a sample transmission on one of my adapters (Code?)........ I'd be interested as to how it holds up. I also have a MOXA unit available, so a comparison may be interesting.

    Shane.

  11. QUOTE(Justin Goeres @ Sep 25 2007, 04:58 PM)

    Oh, for Pete's sake... I hate it when I do that. I even used the capital B on purpose, not realizing I was doing the math entirely wrong. :oops: (Problem is still unsolved, but at least I've had my daily dose of reminder that I'm still not the smartest person in the room :headbang: )

    My USB adapter supports this (which leads me to believe the UART is in the adapter), but changing its value had no effect.

    Yep, tried that, too. It really seems to be the hardware buffer that's overrunning.

    It's a Keyspan 19Qi. I'm using driver version 3.2, which is the latest (although it's dated 2002). Unfortunately this is the only USB adapter I have -- but it has performed admirably in many other projects, so I'm fairly trusting of it (and I know how hard it is to find one to trust). However, I don't recall that I've ever used it with flow control.

    Yes, but thanks for asking. Given that I can't reliably multiply and divide by 8, that's a fair question :P .

    It seems that your adapter does no hardware buffering at all. I know there are quite a lot of adapters out there which have a few bytes of buffer on-board. Not much, just enough to stop things like this happening.

    If you keep the baud rate high, but lower the frequency of the data transmission (you're doing a simulation, right?), do you receive everything OK?

    If you reduce the packet size to 8 bytes, do you receive everything OK?

    I don't think software buffers are going to help, because judging by the error message it sounds like the bytes are being overwritten in the hardware.....

    Here's a link to the adapter I usually use: http://www.exsys.ch/deutsch/produkte/ex_1334.html - I've been pretty happy with it until now. I tried some Moxa adapters (good reputation) but they bombed on me - on multiple occasions. They apparently don't support software flushing of their buffers! And 64 bytes of buffer is really not much. I've tried to search for the 19Qi, but I've yet to find anything of use (like specs). Maybe it's missing the HW buffer......

    Shane.

  12. QUOTE(Justin Goeres @ Sep 25 2007, 03:54 PM)

    I have two computers connected via a null modem cable. One is the receiver for my serial data (where my application runs), while the other is simulating a sensor I will eventually have to talk to. The sensor continuously broadcasts updated readings (so this is not a query-response situation, it's more like drinking from a firehose). Each message from the sensor is about 20 bytes long, and the sensor wants to send around 500 messages per second (so we're around 10 kBps here).

    The serial ports on both computers are set to 115200N81 (although slower speeds still exhibit the problem; see below).

    Here is the problem: Any significant use of the receiving computer (switching applications or even just clicking in the window's title bar) causes the receiving computer to return a "VISA Error -1073807252 (0xBFFF006C): An overrun error occurred during transfer. A character was not read from the hardware before the next character arrived." I have replicated the problem with both my laptop (USB->RS232 adapter) on the receiving end and with a desktop machine (old-school RS232 on the motherboard). This happens even with the simplest of serial read/write code (e.g. NI's examples).

    I was able to find a pretty good overview of the meaning of the error in this page on the Developer Zone: http://zone.ni.com/devzone/cda/tut/p/id/4052 . Basically, it looks to me like the speed of the data stream is overrunning the capacity of the UART in the receiving computer before Windows comes around to service the IRQ and move the data out of there. I have tried all of the suggestions on the NI page without success (switching to RTS/CTS handshaking seemed to help a little, but the error still occurred). Reducing the baud rate to 9600 (laptop receiver) helped quite a bit, but the error still occurs with moderate abuse of Windows.

    So I'm bringing my problem here. My questions are as follows:

    • Is there any way to mitigate this in software? Given that the sensor is just broadcasting data freely, I can't tell it to just wait for me to catch up.
    • Is this really normal, for the UART on a modern laptop (and a modern desktop) to overrun at only Ten Kilobytes Per Second??? :headbang:
    • Where (probably) is the UART in the laptop system? Is it on the motherboard, or is it in the USB->RS232 adapter? I've consulted all the documentation I have for both, and I can't find a clear answer.

    20 bytes at 500 Hz is indeed 10 kBps, but serial transmission is normally listed in kbps (note the small and large "b"). Thus, your 10 kBps becomes 80 kbps before framing bits are added. With N81, adding a start bit and a stop bit per byte moves this to 100 kbps (no parity).

    This should still be below your 115200 bps serial speed, but maybe it's just too close to the theoretical limit.
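
    To put numbers on it, here's my own back-of-the-envelope check in C:

        #include <stdio.h>

        int main(void)
        {
            /* N81 framing: 1 start + 8 data + 1 stop = 10 bits per byte on the wire */
            unsigned long needed = 20UL * 500UL * 10UL; /* 20-byte messages at 500 Hz */
            printf("needed %lu bps of 115200 bps (%.0f%% of the line rate)\n",
                   needed, 100.0 * needed / 115200.0);
            return 0;
        }

    That prints roughly 87% utilisation - very little headroom for a UART that isn't serviced promptly.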

    Can you slow down the transmission to only 250 Hz and see if the problem goes away? edit: I see you tried this already.

    Perhaps it's a simple flow control problem? Check a different COM port. I have good experience with Exsys USB-RS232 controllers (Prolific chipset). Cheap and fully featured.

    Shane.

  13. QUOTE(rolfk @ Sep 23 2007, 08:38 PM)

    The problem with speech recognition is that it is a fairly complicated technique to get to work in any useful way. I only played briefly with it in other applications, without trying to import it into LabVIEW, and it did not feel up to what I would expect from such a tool.

    It is simply rather complicated to configure and train it appropriately, since human perception of speech seems to be such an involved process and, as probably anyone who knows more than one language can attest, is also very much dependent on environmental influences, of which the language is only one parameter.

    There has been work in speech recognition for more than two decades now, with speech recognition technology already available in the Windows 3.1 era, and still it hasn't made it to a meaningful means of human interaction with the computer, not to talk about replacing human interface devices like a mouse or keyboard at all. This has been partly because of processing power and memory usage, but that can't be the only problem, when you consider that computers now have 1000 times as much memory as was common 15 years ago and the CPUs run at about 50 times the speed of then and are even more powerful, not to mention the availability of multicore and multi-CPU systems.

    Not having looked at the MS Speech recognition API in a long time I can't really say much about it, but it was already complicated years ago and has probably gained even more possibilities and features since then.

    Rolf Kalbermatter

    I have to say my first reaction was quite similar to Rolf's.

    However, if the possibility exists to train the system externally, and simply use the speech recognition as input, the complexity should stay within more or less acceptable bounds.

    That said, all the caveats Rolf has mentioned still apply. But if it's needed, then make sure the training and fine-tuning can be done separately to the LV program itself, otherwise it'll most likely get ugly.

    Just my 2c

    Shane.

  14. QUOTE(tcplomp @ Sep 12 2007, 08:05 PM)

    In fairness, the NI forums didn't create the problem, they only highlighted it. I had some email communication with Ben about a year ago on this subject.

    If anyone knows of a method to get the array size of an array via reference, please let us know. My preferred method is casting the "VALUE" to an array of variants, and using the "Array size" on this......

    Or using the OpenG version of stripping it from the variant VALUE string itself.....
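
    If you go the flattened-data route: as far as I know, LabVIEW flattens an array as one big-endian I32 per dimension, followed by the element data, so for a 1D array the size is just the first four bytes. A rough C illustration of the idea (my own, untested):

        /* Read the big-endian I32 dimension size at the start of flattened 1D array data */
        unsigned long flat_array_size(const unsigned char *buf)
        {
            return ((unsigned long)buf[0] << 24) |
                   ((unsigned long)buf[1] << 16) |
                   ((unsigned long)buf[2] << 8)  |
                    (unsigned long)buf[3];
        }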

    Shane.

  15. QUOTE(blueshrimp @ Aug 21 2007, 11:26 AM)

    Hi there, newbie here.

    I did a search but could not find good answer to my question.

    I have a simple LabVIEW VI that takes in a 2D array (user defines it, it is a "control" input) called A, an array b, and an initialized (all zeros) array x that is the same size as b.

    Then I am trying to pass this to a C function in a library I'm writing, which will solve Ax=b.

    I do not know in advance the size of the matrix A, but it is assumed to be square (therefore it is the same size as arrays b and x).

    My problem is passing this matrix A into my function. I can pass b and x successfully, but my execution chokes up on A.

    Here's my C function prototype:

    __declspec(dllexport) unsigned char GaussJordan(double **A, double b[], double x[], unsigned long dim);

    However, when I go to "configure" in the DLL VI, it keeps generating the prototype:

    unsigned char GaussJordan(double *A, double *b, double *x, unsigned long dim);

    even when I tell it that A should be a 2D array.

    When I stop my debugger right inside my C function call, I see that A[0] is 0, and then A[0][0] "cannot be evaluated".

    When I change my function prototype to:

    __declspec(dllexport) unsigned char GaussJordan(double A[3][3], double b[], double x[], unsigned long dim);

    for instance, then with my debugger I can see inside my GaussJordan an A matrix that makes sense (i.e. is exactly what I passed in). However, this is then a problem because inside GaussJordan I have a function call to:

    solveeqn(double **A, double b[], double x[], unsigned long size)

    Which dies because A[3][3] is not of type double *[3].

    Since I am using solveeqn on matrices of variable size elsewhere in the code, I cannot lock it to the size that LabVIEW desires. Not to mention that having to write a different VI-to-C interface on the day I decide to solve 4x4 matrices instead of 3x3 matrices is too much work.

    So, the question is: how do I pass a variable-sized (assumed square) 2D array from LabVIEW into a C function, if the size of this array is unknown in advance/unknown at compile time?

    Any help will be greatly appreciated!

    Thanks,

    -Elisa.

    I don't know why your 2D array isn't working, but.....

    why don't you try passing it as a 1D array (re-shape the array to a 1D array)? Since your array is square, you can re-shape the array within the DLL quite quickly back to a 2D array. You can also add a check to see if the size of the array really is a square number (1, 4, 9, 16, 25, 36 and so on).

    I might be wrong, but I think re-interpreting an array from 2D to 1D where the overall number of elements remains constant is a trivial operation in LV, requiring no re-assignment of the data, just of the type descriptor. Again, maybe I'm way off here, but that's my take. A sketch of what I mean follows below.
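
    Sketched in C (untested, just to show the idea): accept the flat pointer LabVIEW wants to pass, treat element (i,j) as A[i*dim + j], and build a temporary row-pointer table so your existing solveeqn(double **, ...) can be reused unchanged:

        #include <stdlib.h>

        unsigned char solveeqn(double **A, double b[], double x[], unsigned long size);

        __declspec(dllexport) unsigned char GaussJordan(double *A, double b[],
                                                        double x[], unsigned long dim)
        {
            /* A points at dim*dim doubles in row-major order.
               rows[i] points into the flat block - no data is copied. */
            unsigned long i;
            unsigned char result;
            double **rows = (double **)malloc(dim * sizeof(double *));
            if (rows == NULL) return 1;
            for (i = 0; i < dim; i++)
                rows[i] = A + i * dim;
            result = solveeqn(rows, b, x, dim);
            free(rows);
            return result;
        }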

    Shane.

  16. QUOTE(Eugen Graf @ Aug 16 2007, 10:25 PM)

    Hello, can anybody help? I want to get the path to the frontmost VI if the mouse was double-clicked. My solution will probably work in an EXE, but it doesn't work while programming.

    Eugen

    On a side note, AFAIK, if the VIs are in memory (which they'll have to be to be front-most :) ) you don't need a full path for the "open" command. Just a string of the VI name will do.

    Why shouldn't this work while programming? I think it should. Ah. You're building the paths from your "App" instance. While programming, this is LabVIEW itself, I believe. In an EXE it's your built program. I think you need to switch paths depending on whether you're running a built program or within the development environment. Or you can just get "All VIs in memory" and pass only these strings to the "open" command, forgetting the path altogether.

    Shane.

  17. QUOTE(Dan Press @ Aug 15 2007, 05:37 PM)

    I have been using the pattern shown here for some time now. I have seen many variations on this. There is one loop for Events (UI) and another for other processing. I will also usually have a subVI tucked in the upper left corner that runs in parallel and at a different priority. That subVI houses DAQ stuff or other hardware interface code. I still consider this a queued state machine (QSM). The difference between this and the "monster" above is that the data is encapsulated in a cluster and there is an event loop. Still, this has its limitations. I have been guilty of amassing a long list of cases inside both loops!

    http://forums.lavag.org/index.php?act=attach&type=post&id=6630

    Dan, I do something similar, but I have separate loops for event handling (one UI loop), one or more "main" loops, sometimes one loop per instrument, and an additional "cosmetics" UI loop. I like being able to define cosmetic states of the FP in a separate structure. I also don't like property nodes stealing valuable screen space in my main loop. I also make a lot of my programs have optional UIs (mostly reacting to whether they're called as a sub-VI or a main VI). These cosmetic changes can then be switched off really easily when the code is running as a sub-VI.

    BTW, I love the "For-Loop" coloured labels. Gives a really nice effect. Consider your style copied from now on!

    Shane.

  18. QUOTE(Ben @ Aug 15 2007, 02:49 PM)

    Here is a screen shot of one of the sub-VIs after scrubbing to protect the innocent. ;)

    http://forums.lavag.org/index.php?act=attach&type=post&id=6627

    This construct was natural for me when I started out and was still thinking about pushing things onto a stack and popping them off. It also has a natural dual-priority scheduling feature in that states inserted at the head of the queue pre-empt others, while low-priority ones can be put at the back of the queue. When we only had a single thread and were trying to do more than a PC wanted to, these were useful.

    It just strikes me that thinking about pre-empting and priority of execution is falling by the wayside now that we have almost 100X the CPU from just 5 years ago.

    So you still use this type of structure?

    Ben

    OK, I see what you mean now.... I suppose each software architecture, when pushed far enough, will develop some cracks. I wasn't sure you were referring to a single self-feeding loop. I wasn't even aware the term "queued state machine" referred to this. :unsure:

    I remember implementing something like this back in 1997 or 1998 before I even knew it had a name... Seemed like a good way to save time back then. Still, I had maybe 10 or 20 states, most of them quite simple.

    I actually (I don't know why) try to avoid self-feeding loops. It can be useful in some cases, but as a general rule I avoid them.

    On a side note, am I the only one who uses a separate UI loop (hiding controls, scaling charts and so on) for their code? I guess I'm trying to prevent my main loop from switching to the UI thread for hiding and showing controls, but I don't know if it's overkill or not.....

    Shane.

  19. QUOTE(Ben @ Aug 14 2007, 10:12 PM)

    After some of my fellow wire workers were looking at a very complicated QSM application, I explained that that architecture was fine if you had spent your whole life doing assembly programming and did not want to change the way you design applications.

    With LV's multi-threading, it seems to me that there is no need to use one of those undocumentable monsters.

    So although I have already instructed them to "never develop anything like that", I thought I might want to do a sanity check with y'all.

    So....

    Are the days of the QSM behind us?

    Please share your thoughts, and I will sit back and learn. :book:

    Ben

    As others have already mentioned...

    Depends on what you mean by QSM.

    I use event-driven queued state machines quite a lot, especially in connection with UIs (otherwise events don't make so much sense) and I find them really useful, so I certainly won't be abandoning them any time soon. I also don't quite understand the problem with QSMs and multi-threading. It's quite possible to have a well multi-threaded QSM if you use the right structure. Of course, I'm talking about an architecture with more than just 1 producer and 1 consumer. It goes more towards component programming than anything else, but its core is still a QSM.
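
    For the text-based folks, the basic pattern I mean looks something like this (a toy, single-loop C sketch of my own - in LV the queue would be a real queue refnum and the producer and consumer would be parallel loops):

        #include <stdio.h>

        typedef enum { ST_INIT, ST_ACQUIRE, ST_PROCESS, ST_IDLE, ST_EXIT } state_t;

        #define QLEN 32
        static state_t queue[QLEN];
        static unsigned head = 0, tail = 0;

        static void    enqueue(state_t s) { queue[tail++ % QLEN] = s; }
        static state_t dequeue(void)      { return queue[head++ % QLEN]; }
        static int     pending(void)      { return head != tail; }

        int main(void)
        {
            enqueue(ST_INIT);
            for (;;) {
                /* consumer loop: take the next queued state, or idle */
                state_t s = pending() ? dequeue() : ST_IDLE;
                switch (s) {
                case ST_INIT:    printf("init\n");    enqueue(ST_ACQUIRE); break;
                case ST_ACQUIRE: printf("acquire\n"); enqueue(ST_PROCESS); break; /* e.g. read instrument */
                case ST_PROCESS: printf("process\n"); enqueue(ST_EXIT);    break; /* e.g. crunch data */
                case ST_IDLE:    /* in LV: wait on the queue or UI events */ break;
                case ST_EXIT:    return 0;
                }
            }
        }

    The point is that states pre-queue further states, and any other loop (UI, instrument) can enqueue as well - that's where the multi-threading comes in.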

    There are cases where state machines are simply not necessary, of course.

    Ben, what do you think they should be replaced with?

    I await your answer eagerly!

    Shane.

  20. QUOTE(Aristos Queue @ Aug 13 2007, 10:52 PM)

    You're talking all data types, not just LV classes, right?

    ......

    I've played around with such concepts, and they get pretty ugly, both on the debugging side and on the UI side. Personally, I don't particularly like open-ended pluggable functions (aka operator overloading). .....

    I agree with you 100%. If you refer back to my nugget, you'll see that it's the combination of UI and dynamic dispatch which I was looking for. In the end, I employed an ugly open-ended system (based on variants) because I didn't see any other way of doing it. What I would like is somewhere between that and the current system. LVOOP would be perfect if we could place class data directly on the FP for user interaction, but we can't. Hence the idea of having a list of "registered" typedefs for a kind of dynamic dispatching (a Typedefed control CAN be on the FP). I appreciate that the ability of the compiler to detect wrong data types at compile time is very important, and I'd be delighted to have this functionality.

    I also tried creating a polymorphic VI with different Typedefed (sp?) inputs, but although several polymorphic instances with (essentially) the same data type but different typedefs produced no error, no polymorphism resulted.

    At the moment, "With the dynamic dispatching of classes, I can view the class hierarchy and know whether or not a given class has a given functionality. With arbitrary overloading, I become less sure at every turn whether that "add" primitive that I see on the diagram is actually an add." I find it difficult to fully appreciate the real difference between the two cases you're mentioning here.

    Maybe when I get more familiar with LVOOP things will clear up for me. But I think your example of overloading the "add" primitive is overshooting the mark a tad. The original idea was (when you get down to the bare bones) a convenient way of working with known data structures. I was also trying to demonstrate function overloading, not operator overloading (i.e. Pi() and Pi(2)). In reality, I was actually coming closer to a sort of polymorphism with an extended definition of "Data type". Upon reflection, I have come to realise that the ability to define polymorphic VIs based on their ENTIRE TD would be the right way to go about this. Certainly a long way from arbitrary overloading.

    To quote one of my posts in my nugget thread (spelling mistakes retained for authenticity :wacko: )

    "I've jsut started working my way into classes, and I love "Dynamic dispatch". I'd also love exactly the same thing for Typedefs. I want a control I can place on a Front panel, wire up to a function and to have the function automatically execute a method "registered" for that exact typedef (such as entering that value into a pre-defined cluster). Replace the control with another Typedef, and the code called updates..... How and where the "registration" takes place, I dunno. Maybe even statically within the VI being called, thus providing strict limits to accepted typedefs."

    I think the usage of variants and the admittedly open-ended implementation of my code has actually distracted from what I was actually trying to create. "Registration" is the key here, otherwise you don't get compile-time error checking.

    So we're agreed on that at least.

    Shane.
