
Posts posted by Mark Yedinak

  1. Thanks for the suggestions, everyone. Unfortunately, simply counting lines is not a solution that will work. In my case I have long strings with no newlines or carriage returns. In addition, if the text wraps in the string indicator, what it considers to be the number of lines does not necessarily reflect the true number of lines as determined by some end-of-line character. For example, this paragraph has only a single end-of-line character, yet within the string indicator it will be seen as multiple lines because of word wrap. If the size of the indicator changes, the number of lines also changes. This makes the scroll position a very dynamic (and somewhat unpredictable) number. Since I will be updating this display frequently, I would like to avoid having to pass it through some line-counter VI.

    I posted an idea on the LabVIEW Idea Exchange asking for a built-in auto-scroll property for scrollable items. Hopefully that would cover this functionality as well, should NI choose to implement it.

    If you have any other suggestions I am open to hearing them.

    Thanks.

  2. I'm trying to create an intelligent string display that supports a scroll bar and auto-scrolling. The auto-scroll is easy. However, what I would like to do is continue to auto-scroll as long as the scroll bar is at the bottom. If the user moves the scroll position to something other than the bottom, automatic scrolling is disabled. Again, this is easy to accomplish. The challenge is knowing when the user has positioned the scroll bar at the end again so auto-scrolling can resume. My string indicator will allow a fairly large string (tens of thousands of characters) and can contain binary data, including the NULL character. The scroll position property is the line number that will appear at the top of the display. However, there doesn't seem to be any way of determining how many lines there are, or how many lines the indicator thinks it has. It still has a concept of lines even if there are no actual carriage returns or line feeds in the data.

    Has anyone solved this issue? Does anyone have any ideas that may help? I have been struggling to find a good way to determine when the user actually moves the scroll bar to the end position. And just to keep this challenging, the indicator will be getting automatically updated with new data. I want to allow the user to move the scroll bar and not have the display jump to the bottom again as new data is added. I do, however, want them to be able to move the scroll bar to the bottom and effectively turn auto-scrolling on again.

  3. I'm the one setting up the ramdrive and had originally hoped to use ramdrive.sys. :rolleyes: The Unix guys got a little glassy-eyed about it until I told them not to worry -- it just looks like another disk to their code. The whole ramdrive thing was a step into the wayback machine for me. I haven't used one since my BASIC/MS-DOS days. I'm using RamDisk and so far it seems to be working.

    Disabling Nagle's algorithm is something I haven't tried yet, but only because it's supposed to be for optimizing small data packets, and 5MB isn't very small. But hey, I've tried everything else, I can try that, too.
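
    For reference, here is a minimal sketch of what disabling Nagle looks like at the C level; this is essentially the setsockopt() call that the TCP_NoDelay VI mentioned later in this thread makes through a Call Library Node. "sock" is assumed to be an already-connected Winsock SOCKET:

        /* Disable Nagle's algorithm on a connected socket. Returns 0 on
         * success, SOCKET_ERROR on failure. Assumes WSAStartup() has
         * already been called. */
        #include <winsock2.h>

        int disable_nagle(SOCKET sock)
        {
            int flag = 1;  /* nonzero turns off Nagle's packet coalescing */
            return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                              (const char *)&flag, sizeof(flag));
        }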

    I've also experienced TCP/IP issues with Windows 7. We haven't fully isolated the issue, but an application of ours that sends quite a bit of data over TCP/IP experiences lots of problems on Win7 yet runs like a charm on XP. I also did some traces of the communications, and the traffic pattern in the Win7 case was very strange from a networking perspective, including unexpected TCP RSTs.

  4. Yes, and yes.

    Seriously though, there just aren't that many places within walking distance of the ACC that could accommodate us. The general consensus at last year's BBQ (which may have been fuelled by beer) was that it was good and that we should have it there again this year.

    I'll be there.

  5. Sorry to hear it did not work out better for you, but at least you understand the grade now. I hope you decide to try again. One thing did jump out at me in your last post. You mentioned not being able to use another library to implement your solution, but it is important to understand that the CLA exam is not looking for complete code; it is looking for an architecture you could hand to someone else to implement. You need to design the framework for the application, not the application itself. Therefore you do not need to have the messaging library in order to complete the exam. You could easily have documented that the application requires the messaging library (perhaps even specifying a specific one) and described how the messaging works. You are not required to actually implement it. This can save considerable time on the exam.

    Good luck should you try again.

  6. Wow, that really sucks. From following various discussions you have participated in, I think you are definitely qualified to be a CLA. I do have to second Chris's comments, though, about answering the questions with the answers NI is looking for. Obviously the CLA exam is one where multiple correct answers can be given, but sadly you found out the hard way that NI is more interested in specific answers.

  7. I think that as implemented you do have pure dataflow. As stated earlier, a node's outputs are available when the node completes; you must have some mechanism for controlling the sequence of operations. The proposed change would make debugging extremely difficult and make code harder to understand, since the reader would have no way of following the flow of the program. You would never know when you would get partial outputs or when code would start firing. From a programming perspective, I believe you need some way to allow the programmer to understand the flow of execution. Sequence structures are already abused; I think if this change were made they would be abused even more.

  8. Thanks Mark,

    At the moment I find it easier to use one of the two solutions posted, but I'm studying so that I can implement the state-machine solution ASAP.

    Bye

    Max

    The reason I am strongly suggesting state machines is that they are not that difficult to implement (essentially a case structure inside a while loop) and that both approaches you are proposing are considered poor choices in LabVIEW. It is better to begin learning the preferred methods than to establish bad habits using the poor choices; it becomes difficult to "unlearn" how to do something. A rough sketch of the pattern follows.
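
    In LabVIEW the pattern is a case structure inside a while loop with the next state carried in a shift register; the closest textual analogue is a switch inside a loop. The states and actions in this sketch are purely illustrative:

        #include <stdio.h>
        #include <stdbool.h>

        typedef enum { ST_INIT, ST_ACQUIRE, ST_PROCESS, ST_STOP } State;

        int main(void)
        {
            State state = ST_INIT;   /* plays the role of the shift register */
            bool running = true;

            while (running) {        /* the while loop */
                switch (state) {     /* the case structure */
                case ST_INIT:    puts("initialize"); state = ST_ACQUIRE; break;
                case ST_ACQUIRE: puts("acquire");    state = ST_PROCESS; break;
                case ST_PROCESS: puts("process");    state = ST_STOP;    break;
                case ST_STOP:    puts("shut down");  running = false;    break;
                }
            }
            return 0;
        }

    Each case decides the next state, so adding a new state is just adding a new case; that is what makes this pattern so much easier to extend than a sequence frame.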

  9. Now I have changed my whole application over to event structures. However, I have some errors I could not solve.

    The first problem is that I can't wire to VISA Close, and it raises an error.

    In the second event case I am currently writing a command like 1088 (here 10 is the address), but I want it so that if the first case runs and displays all the available addresses, the user can select any address, which should replace the existing address and be written to the port.

    In the third event case, I would like to write user-entered data to the port by appending it to the existing command with String Concatenate. In this case too, the selected address should always replace the existing address.

    Can anyone please offer suggestions? Thank you.

    Do not put an event structure inside of an event structure. Simply create the appropriate event cases to handle your events in the single event structure.

    Disable automatic indexing on the outputs of your loops; you are creating arrays of your VISA resource. Also, use shift registers to hold the VISA resource and error cluster in your loop. Wire the values through and connect them to the output tunnels in ALL of the cases of your event structure. Your unwired tunnels are set to the default value, which is not a valid VISA resource name.

  10. Thanks, this was my goal.

    I will choose the first solution with the event structure because I'm not so familiar with state machines.

    Max

    I would recommend against that. Take the time to learn how to use state machines. They are much more flexible and much easier to maintain. I would definitely avoid using sequence frames in any form or fashion. They are not a recommended construct and generally should only be used to impose data flow where none exists. Even then, this should be limited to a single frame with a small bit of code. State machines are not that difficult to learn and they are a very powerful tool.

  11. OK, here is my 2 cents. Regarding the comment about wasting developer time: doing things simply because they are quick is NOT the best mindset for solving a problem. You have repeatedly mentioned concerns about future-proofing your code, so it would seem worth your time to design a good, maintainable solution. Quick and dirty doesn't sound like the best approach; while it might work now, it could very likely bite you in the butt later. Spend the time to plan up front.

    Some quick questions I thought of that may help you decide on the best solution:

    Will this forever be a LabVIEW-only solution?

    If yes, a variant or flatten-to-string will work. If there is any chance these messages may come from or go to an application written in another language, then don't use ANY native LabVIEW type. The basic tuple-style message suggested earlier is probably the most flexible and will easily work in other languages (a byte-level sketch follows at the end of this post).

    What is the reason for selecting a human-readable format?

    If it is simply because it is generic, that is only beneficial if a human will actually need to read it. If only machines need to deal with the data, use a machine-friendly format and save the effort of translation.

    Given National Instruments' track record of maintaining the variant/flatten-to-string format, do you really need to be that concerned about using it? The small likelihood of the format changing can be dealt with in the future if necessary.

    My personal recommendation would be to define a generic, language-agnostic format. This gives you the greatest flexibility and allows clients written in other languages to be used easily.
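
    To make the tuple idea concrete, here is a hedged sketch of what such a format could look like at the byte level: a length-prefixed name followed by a length-prefixed payload, everything in network byte order so any language can parse it. The exact field layout is an illustration, not an established standard:

        #include <stdint.h>
        #include <string.h>
        #include <winsock2.h>   /* htonl(); on POSIX use <arpa/inet.h> */

        /* Serialize (name, payload) into buf as
         * [u32 name length][name bytes][u32 payload length][payload bytes].
         * Returns total bytes written, or -1 if buf is too small. */
        int pack_message(char *buf, size_t bufsize,
                         const char *name, const void *payload, uint32_t len)
        {
            uint32_t nlen = (uint32_t)strlen(name);
            size_t total = 4 + nlen + 4 + len;
            if (bufsize < total) return -1;

            uint32_t n = htonl(nlen);
            memcpy(buf, &n, 4);        buf += 4;
            memcpy(buf, name, nlen);   buf += nlen;
            uint32_t l = htonl(len);
            memcpy(buf, &l, 4);        buf += 4;
            memcpy(buf, payload, len);
            return (int)total;
        }

    A reader in any language just reads four bytes, swaps to host order, reads that many bytes, and repeats.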

  12. Coming in late to the discussion, I have to side with John. While JG's approach is clean, it is not decoupled. Simply look at the classes in your processing tasks: they are called UI methods. In a truly decoupled system the processing tasks should have absolutely no concept of a UI; they manipulate data, control devices, read data, and so on. All of the processing tasks should be able to be dropped into an application that has no UI at all and is machine driven. As John mentioned, the purest form would use raw TCP or some defined standard (most likely TCP based) that simply passes messages. In this manner the UI is free to change how the data is represented, stored, or thrown away. Even JG's suggestion about overloading the class with the specific implementation implies that the lower-level processing tasks are modified; it may occur in the form of a plugin, but their internals need to change. In John's approach you would never have to touch the processing code unless you wanted to extend the set of messages you were passing, and even that API should be general and flexible enough to allow this type of change with minimal effort.

  13. Ok, I see that I will have to elaborate. I mistakenly thought my question was clear enough.

    I actually have a TCP connection open in LabVIEW, but I would not like to read from it; I would like to call some VI that blocks (with a timeout set by me) until data is available. Then I will call another VI that actually reads from the connection and does something with the data.

    Or, to be specific: I am using STM, but STM has only a single timeout parameter. The timeout can be set when reading a message, but unfortunately it isn't used only while waiting for data; it also seems to be used internally. The effect is that if I set the STM timeout too short (let's say 100 ms), STM stops working because the timeout is too short to assemble the entire message. So the timeout needs to be larger, around 1000 ms. But I don't want to wait a full second for a new message; I want to wait just 100 ms. If it arrives, I will read it; if not, I will do other things.

    And since I don't want to modify the STM library and break compatibility with NI, I decided to look for some other function that would wait for new data without removing it from the connection. If data arrives, STM will then be used with its default settings to read the message.

    I hope my intention is clearer now.

    Thank you for any pointers, Mike

    Ned's suggestion is very good and a great way to handle this. If it doesn't meet your needs, you could also check whether data is available by reading a single byte with a short timeout. If no data is there, move on and do other things. If data is present, go to your read state and read more; in that state you can use a longer timeout as well as look for whatever termination character you use. Your state machine would have to periodically check whether data is available, but since it sounds like you will read the data whenever there is some, the single-byte read is a reasonable way of checking before doing more processing. Naturally, you would have to buffer the byte you read so it is processed with the rest of the data.
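
    At the socket level (below what the native LabVIEW TCP VIs expose), the usual way to wait for data without removing anything from the connection is select() with a short timeout. This is not an STM feature, just a sketch of the underlying Winsock mechanism, assuming "sock" is a connected SOCKET:

        #include <winsock2.h>

        /* Returns 1 if data is waiting, 0 on timeout, -1 on error.
         * Nothing is consumed from the connection either way. */
        int data_available(SOCKET sock, long timeout_ms)
        {
            fd_set readable;
            struct timeval tv;

            FD_ZERO(&readable);
            FD_SET(sock, &readable);
            tv.tv_sec  = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;

            /* First argument is ignored by Winsock's select() */
            int rc = select(0, &readable, NULL, NULL, &tv);
            return (rc == SOCKET_ERROR) ? -1 : (rc > 0 ? 1 : 0);
        }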

  14. You can use the same measure: the number of defect-free VIs divided by the total number of VIs. However, there are many metrics you could use; this is true even for traditional programming languages. You might want to look at the VI Analyzer Toolkit, which can give you various metrics for your code. These can be used to measure the quality of your code.

  15. I'm a little late to this thread, but I have a complete project (except the DNS lookup) wrapping the WinSock functions so LabVIEW can support IPv6; it is in the code repository:

    http://lavag.org/fil...ls-for-labview/

    This could be a useful reference if you're looking at how to get at low-level socket functions. It's Visual C++ rather than .NET.

    Mark

    Thanks, I will take a look at it. I did get the functionality I needed by copying and modifying the TCP_NoDelay VI that has been referenced. Since I am writing applications to test network stacks on other devices I need to delve much deeper into TCP than what the native LabVIEW functionality provides.

    On a related note, is it possible to generate packets that are not TCP or UDP based from LabVIEW? For instance, could I generate ICMP or ARP packets, or write a RARP server (yes, I know it is an arcane and antiquated protocol) in LabVIEW?
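
    For what it's worth, at the C level the usual route to non-TCP/UDP packets is a raw socket, which LabVIEW could reach through a DLL and a Call Library Node; a minimal sketch is below. Caveats: raw sockets require administrator rights, modern Windows versions restrict what raw sockets may send, and ARP sits below IP entirely, so it needs a link-layer library such as WinPcap rather than Winsock:

        #include <winsock2.h>

        /* Open a raw ICMP socket. With IPPROTO_ICMP the kernel builds
         * the IP header and the caller supplies the ICMP portion.
         * Assumes WSAStartup() has already been called. */
        SOCKET open_icmp_raw(void)
        {
            return socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
        }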

  16. If we choose to use signals we can send any type of data we want, but we should define the simplest external interface possible for a component (to make the intent clear and to minimize the need for changes by decreasing coupling) and use an appropriate messaging paradigm. (Very important: the message content and the messaging paradigm can and should be independent of one another! We should be able to change one or the other as needed.) We can send any data by flattening it (to a variant, a string, or XML), but the receiver needs to know the type in order to interpret the message. To achieve that we can use one type per message; send the type as part of the message (which can get complicated); or, if we are using objects, make the top-level type of a message an abstract class and use dynamic dispatch, extending the abstract class for each actual message type as needed (which is the essence of the Command Pattern). In that case a sender or receiver needs access to the definitions of precisely the objects it sends and receives, but not to other messages; this is the essence of defining component interfaces.

    Paul

    I realize I'm chiming in on this conversation a bit late, but I wanted to reiterate what Paul was saying. It is fairly easy to define a generic messaging architecture that passes messages around your application and provides lots of useful features without regard to the messages themselves. As Paul stated, only the sender and the receiver need to know what is in a message and how to interpret it. The classes used for passing the messages can be generic, such as an abstract class. With this approach you have a standard interface for message handling that is flexible and reusable.
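
    A loose C rendering of the abstract-message idea may help make it concrete: the framework sees only a generic header, while each concrete message supplies its own handler. That is the dynamic dispatch the LabVIEW Command Pattern gets from an abstract class; all names here are illustrative:

        #include <stdio.h>

        typedef struct Message Message;
        struct Message {
            void (*handle)(Message *self);   /* the dynamic-dispatch slot */
        };

        typedef struct {
            Message base;     /* generic part the messaging framework sees */
            double  value;    /* payload only sender and receiver know about */
        } TemperatureMsg;

        static void handle_temperature(Message *self)
        {
            TemperatureMsg *m = (TemperatureMsg *)self;
            printf("temperature = %.1f\n", m->value);
        }

        int main(void)
        {
            TemperatureMsg msg = { { handle_temperature }, 21.5 };
            Message *generic = &msg.base;  /* framework passes only this */
            generic->handle(generic);      /* dispatches to the concrete type */
            return 0;
        }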

  17. The comments in your code, and the function prototype, are set up to call setsockopt, but the function you're calling is getsockopt. Which function do you mean to call? Also, it appears that if you're trying to set or get SO_LINGER, you need to pass a pointer to an appropriate LINGER structure (a cluster of two U16 values) as the value argument, or an equivalent array. To be safe I'd make sure that you do wire a value into any pointer input, even if it's actually an output, although LabVIEW might handle that for you.

    Yes, the code comments are not correct. The posted code was a quick experiment: I copied the code from the TCP_NoDelay.vi referenced above, simply modified the Call Library Node configuration, and didn't take the time to update the comments. I am trying to call the getsockopt() function. I did wire the output to a 32-bit integer, which should be enough space for the LINGER structure since it is 32 bits in length. When I run this VI I always get an error returned.

    I have successfully called the setsockopt() function to set the LINGER options. However, I am not sure why the call to getsockopt() fails every time. I have experimented with wiring something to every input parameter, with not wiring anything to parameters that are outputs, with variations in the type of data I wire, and so on.
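
    One detail worth checking, since it is the most common way this exact call fails: unlike setsockopt(), which takes the option length by value, getsockopt() takes a POINTER to an int that must be initialized to the buffer size before the call and receives the actual size afterwards. In the Call Library Node that last argument would therefore need to be configured as a pointer to a 32-bit integer wired with the value 4 (the size of LINGER). A sketch of the call that should succeed, assuming "sock" is a valid SOCKET:

        #include <winsock2.h>
        #include <stdio.h>

        int query_linger(SOCKET sock)
        {
            LINGER lng;                /* two u_short fields: l_onoff, l_linger */
            int optlen = sizeof(lng);  /* must be set BEFORE the call */

            if (getsockopt(sock, SOL_SOCKET, SO_LINGER,
                           (char *)&lng, &optlen) == SOCKET_ERROR) {
                printf("getsockopt failed: %d\n", WSAGetLastError());
                return -1;
            }
            printf("linger on=%u, timeout=%u s\n",
                   (unsigned)lng.l_onoff, (unsigned)lng.l_linger);
            return 0;
        }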
