Everything posted by Mark Yedinak
-
Thanks for the suggestions everyone. Unfortunately, simply counting lines is not a solution that will work. In my case I have long strings with no new lines or carriage returns. In addition, if the text wraps in the string indicator, what it considers to be the number of lines does not necessarily reflect the true number of lines as determined by some end-of-line character. For example, this paragraph will only have a single end-of-line character, yet within the string indicator it will be seen as multiple lines because of word wrap. If the size of the indicator changes, the number of lines also changes. This is a very dynamic (and somewhat random) number in terms of the scroll position. Since I will be updating this display frequently I would like to avoid having to pass it through some line-counter VI. I posted an idea in the LabVIEW Idea Exchange asking for a built-in auto scroll property for scrollable items. Hopefully this would include this functionality as well if they choose to implement it. If you have any other suggestions I am open to hearing them. Thanks.
-
I'm trying to create an intelligent string display that supports a scroll bar and auto scrolling. The auto scroll is easy. However, what I would like to do is continue to auto scroll as long as the scroll bar is at the bottom. If the user moves the scroll position to something other than the bottom, automatic scrolling is disabled. Again, this is easy to accomplish. The challenge is knowing when the user has positioned the scroll bar at the end again so auto scrolling can continue. My string indicator will allow for a fairly large string (tens of thousands of characters) and can contain binary data, including the NULL character. The scroll position property is the line number that will appear at the top of the display. However, there doesn't seem to be any way of determining how many lines there are, or how many lines the indicator thinks it has. It still has a concept of lines even if there are no actual carriage returns or line feeds in the data. Has anyone solved this issue? Does anyone have any ideas that may help? I have been struggling to find a good way to determine when the user actually moves the scroll bar to the end position. And just to keep this challenging, the indicator will be getting automatically updated with new data. I want to allow the user to move the scroll bar and not have the display jump to the bottom again as new data is added. I do, however, want them to be able to move the scroll bar to the bottom and effectively turn the auto scrolling on again.
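To make the goal concrete, the logic I am after looks roughly like this (written as a Python-style sketch only because I cannot paste a block diagram here; line_count and visible_lines are exactly the values I cannot get from the string indicator):

def on_new_data(display, autoscroll_enabled):
    # Append the new data; only jump to the bottom if auto scroll is still on.
    if autoscroll_enabled:
        display.scroll_position = display.line_count - display.visible_lines

def on_user_scroll(display):
    # Re-enable auto scroll only when the user parks the scroll bar at the end.
    at_bottom = display.scroll_position >= display.line_count - display.visible_lines
    return at_bottom  # True -> auto scroll on, False -> auto scroll off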
-
I've also experienced TCP/IP issues with Windows 7. We haven't fully isolated the issue, but an application of ours that sends quite a bit of data over TCP/IP has lots of problems on Win7 yet runs like a charm on XP. I also did some traces of the communications, and the traffic pattern in the Win7 cases was very strange from a networking perspective, including unexpected TCP RSTs.
-
Does some other application have the port open? You can get this error if the port is in use.
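If you are not sure, netstat -a at a command prompt will list what is listening, or you can try binding the port yourself; if the bind fails, something else already owns it. A rough Python sketch (the port number is only an example):

import socket

def port_in_use(port, host="127.0.0.1"):
    # Returns True if something is already bound to or listening on the port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return False      # bind succeeded, so the port was free
    except OSError:
        return True       # bind failed, so another application owns it
    finally:
        s.close()

print(port_in_use(6340))  # example port only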
-
Just bought my ticket. I'll need a beer at the block diagram party and the BBQ. My presentation is during the last slot on Tuesday.
-
I'll be there.
-
Sorry to hear it did not work out better for you, but at least you understand the grade now. I hope you decide to try again. One thing did jump out at me in your last post: you mentioned not being able to use another library to implement your solution, but it is important to understand that the CLA exam is not looking for your complete code, it is looking for an architecture you could hand to someone else to implement. You need to design the framework for the application, not the application itself. Therefore you do not need to have the messaging library in order to complete the exam. You could easily have documented that the application requires the messaging library (perhaps even specifying a specific one) and described how the messaging works. You are not required to actually implement it. This can save considerable time on the exam. Good luck should you try again.
-
Wow, that really sucks. I know from following various discussions you have participated in that you are definitely qualified to be a CLA. I do have to second Chris's comments, though, about answering the questions with the answers NI is looking for. Obviously the CLA exam is one where multiple correct answers can be given, but sadly you found out the hard way that NI is more interested in specific answers.
-
Congrats!
-
I think as implemented you do have pure dataflow. As stated earlier, a node's outputs are available when the node completes. You must have some mechanism for controlling the sequence of operations. The proposed change would make debugging extremely difficult and make code harder to understand, since the reader would have absolutely no way of understanding the flow of the program. You would never know when you would get partial outputs and when code would start firing. From a programming perspective I believe you need some way to allow the programmer to understand the flow of execution. Sequence structures are already abused. I think if this change were made they would be abused even more.
-
The reason I am strongly suggesting state machines is that they are not that difficult to implement (essentially a case structure inside a while loop) and that both of the approaches you are proposing are considered poor choices in LabVIEW. It is better to begin learning the preferred methods than to establish bad habits using the poor choices. It becomes difficult to "unlearn" how to do something.
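If it helps to see the pattern outside of LabVIEW, the whole idea fits in a few lines. Here is a rough sketch in Python-style text (the state names and the acquisition step are made up); on the diagram, the string in the shift register selects the case exactly like the variable selects the branch here:

import random

def read_measurement():
    # Placeholder for whatever the acquisition state actually does.
    return random.random()

state = "init"   # lives in a shift register in the LabVIEW version
data = []

while state != "stop":
    if state == "init":
        data = []
        state = "acquire"
    elif state == "acquire":
        data.append(read_measurement())
        state = "process"
    elif state == "process":
        # Each case decides which state runs next, just as the LabVIEW case
        # structure would wire the next state string around the loop.
        state = "stop" if len(data) >= 10 else "acquire"

print(len(data), "samples collected")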
-
Do not put an event structure inside of an event structure. Simply create the appropriate event cases to handle your events in the single event structure. Disable the automatic indexing on the output from your loops; you are creating arrays of your VISA resource. Also, use shift registers to hold the VISA resource and error cluster in your loop. Wire the values through and connect them to the output tunnel in ALL of the cases of your event structure. Your unwired tunnels are set to the default value, which is not a valid VISA resource name.
-
I would recommend against that. Take the time to learn how to use state machines. They are much more flexible and much easier to maintain. I would definitely avoid using sequence frames in any form or fashion. They are not a recommended construct and generally should only be used to impose data flow where none exists. Even then, this should be limited to a single frame with a small bit of code. State machines are not that difficult to learn and they are a very powerful tool.
-
OK, here is my 2 cents. Regarding the comment about wasting developer time: doing things simply because they are quick is NOT the best mindset for solving a problem. You have repeatedly mentioned concerns about future-proofing your code, so it would seem it is worth your time to design a good, maintainable solution. Quick and dirty doesn't sound like the best approach. While it might work now, it could very likely bite you in the butt later. Spend the time to plan up front. Some quick questions I thought of which may help you decide on the best solution:

Will this forever be a LabVIEW-only solution? If yes, variant or flatten-to-string will work. If there is any chance these messages may come from or go to an application written in another language, then don't use ANY native LabVIEW type. The basic tuple-style message suggested earlier is probably the most flexible and will easily work in other languages.

What is the reason to select human readable? If it is simply because it is generic, it is only beneficial if you will actually need a human to read it. If only machines need to deal with the data, use a format that is machine friendly and save the effort of translation.

Given the track record of National Instruments in maintaining the format for variants/flatten to string, do you really need to be that concerned about using that format? The small likelihood of it changing can be dealt with in the future if necessary.

My personal recommendation would be to define a generic, language-agnostic format. This gives you the greatest flexibility and allows clients written in other languages to be used easily.
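To make the tuple-style message concrete, here is the kind of thing I mean, sketched in Python only because it is easy to post as text. The field names and the JSON encoding are just one example of a language-agnostic format; delimited text would work just as well:

import json

def pack_message(name, dtype, value):
    # A generic name/type/value tuple serialized to a format any language can parse.
    return json.dumps({"name": name, "type": dtype, "value": value})

def unpack_message(text):
    msg = json.loads(text)
    return msg["name"], msg["type"], msg["value"]

wire = pack_message("SetTemperature", "double", 37.5)   # example message only
print(wire)
print(unpack_message(wire))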
-
Coming in late to the discussion, I have to side with John. While JG's approach is clean, it is not decoupled. Simply look at the classes in your processing tasks: they are called UI methods. In a truly decoupled system the processing tasks should have absolutely no concept of a UI. They manipulate data, control devices, read data, etc. All of the processing tasks should be able to be added to an application that has no UI at all and is machine driven. As John mentioned, the purest form would use raw TCP or some defined standard (TCP based, most likely) that simply passes messages. In this manner the UI is free to change how the data is represented, stored or thrown away. Even JG's suggestion about overloading the class with the specific implementation implies the lower-level processing tasks are modified. It may occur in the form of a plugin, but its internals need to be changed. In John's approach you would never have to touch the processing code unless you wanted to extend the messages you were passing. However, even this API should be general and flexible enough to allow that type of change with minimal effort.
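As a rough illustration of what "no concept of a UI" means in practice, here is a Python-flavored sketch of a processing task that only ever sees messages on a TCP connection (the port number and command strings are made up). Whether the client is a LabVIEW UI, a web page, or another machine, this code never changes:

import socket

def processing_task(port=5025):   # example port only
    # The task knows nothing about any UI: it reads newline-terminated text
    # commands from a socket and replies with text results.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", port))
    server.listen(1)
    conn, _ = server.accept()
    buf = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line == b"READ?":          # hypothetical query
                conn.sendall(b"42.0\n")   # placeholder result
            elif line == b"STOP":
                conn.close()
                return

# processing_task()  # would block here waiting for a client to connect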
-
Check for data on TCP connection
Mark Yedinak replied to mike5's topic in Remote Control, Monitoring and the Internet
Ned's suggestion is very good and a great way to handle this. If that doesn't meet your needs, you could also check whether data is available by reading a single byte with a short timeout. If no data is there, move on and do other things. If data is present, go to your read state and read more data; in that state you could use a longer timeout as well as look for whatever termination character you use. Your state machine would have to periodically check whether data is available. Since it sounds like you will read the data if there is some, the single-byte read would be one way of checking whether data is there before doing more processing. Naturally you would have to buffer the byte you read so it gets processed with the rest of the data.
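A bare-bones Python version of the same idea, just to show the shape of it (socket calls shown instead of the LabVIEW TCP primitives; the timeout values are only examples):

import socket

def poll_for_data(conn, pending=b""):
    # Try to read a single byte with a short timeout. If nothing arrives,
    # carry on with other work; if something does, switch to a longer
    # timeout and read the rest, keeping the first byte in the buffer.
    conn.settimeout(0.05)          # short "is anything there?" timeout
    try:
        first = conn.recv(1)
    except socket.timeout:
        return pending             # no data, go do other things
    conn.settimeout(2.0)           # longer timeout for the real read
    try:
        rest = conn.recv(4096)
    except socket.timeout:
        rest = b""
    return pending + first + rest  # buffered byte stays with the rest
-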
You can use the same measure: the number of defect-free VIs divided by the total number of VIs. However, there are many metrics you could use; this is true even for traditional programming languages. You might want to look at the VI Analyzer Toolkit. It can give you various metrics for your code, which can be used to measure its quality.
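To put a number on that first measure: if a hypothetical project had 200 VIs and 10 of them contained known defects, the metric would be (200 - 10) / 200 = 95% defect-free VIs.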
-
PBKDF2 implementation in LabVIEW?
Mark Yedinak replied to Mark Yedinak's topic in Remote Control, Monitoring and the Internet
Any clue how I can use this in LabVIEW? This call is in the .NET 4 framework. I installed that version of .NET but I don't see it as an option when configuring the Invoke Node in LabVIEW. I don't see any of the .NET 4 calls at all. And yes, I rebooted the computer.
-
Does anyone know of a PBKDF2 implementation in LabVIEW, or of a version that can be called from LabVIEW? I am testing WiFi security and it would be helpful if I could generate the keys given the pass phrase. At the moment I have to do this manually via web sites. I would like to be able to do it automatically in the code itself.
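For reference, in case it helps anyone searching later: the derivation itself is straightforward outside of LabVIEW. The WPA/WPA2 passphrase-to-PSK step is PBKDF2-HMAC-SHA1 with the SSID as the salt, 4096 iterations and a 256-bit output. A Python sketch of what I am trying to reproduce (the passphrase and SSID are examples only):

import hashlib

def wpa_psk(passphrase, ssid):
    # PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 32-byte key (802.11i).
    key = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                              4096, dklen=32)
    return key.hex()

print(wpa_psk("password", "linksys"))   # example values, not a real network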
-
Thanks, I will take a look at it. I did get the functionality I needed by copying and modifying the TCP_NoDelay VI that has been referenced. Since I am writing applications to test network stacks on other devices, I need to delve much deeper into TCP than the native LabVIEW functionality provides. On a related note, is it possible to generate packets that are not TCP or UDP based via LabVIEW? For instance, could I generate ICMP or ARP packets, or write a RARP (yes, I know it is an arcane and antiquated protocol) server in LabVIEW?
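For comparison, this is the sort of thing I mean. In Python you can drop below TCP/UDP with a raw socket; the sketch below hand-builds a minimal ICMP echo request (it needs administrator rights, and the target address is just an example). ARP/RARP would need link-layer access on top of this (AF_PACKET on Linux, or a driver such as WinPcap on Windows):

import socket, struct

def icmp_checksum(data):
    # Standard Internet checksum: one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident=1, seq=1, payload=b"raw packet test"):
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # type 8 = echo request
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
sock.sendto(icmp_echo_request(), ("192.168.1.1", 0))      # example address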
-
Alternative to FGV & classes
Mark Yedinak replied to MartinMcD's topic in Object-Oriented Programming
I realize I'm chiming in on this conversation a bit late, but I wanted to reiterate what Paul was saying. It is fairly easy to define a generic messaging architecture that can pass messages around your application and provide lots of useful features without regard to the messages themselves. As Paul stated, only the sender and the receiver need to know what is in the message and how to interpret it. The classes used for passing the messages can be generic, such as an abstract message class. By using this approach you have a standard interface for message handling that is flexible and reusable.
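In text form (Python standing in for the LabVIEW classes), the idea is roughly this; the transport layer only ever sees the abstract type, and only the sender and receiver look inside the concrete message (the class names are made up):

from abc import ABC
from queue import Queue

class Message(ABC):
    # Abstract base class: the messaging layer passes these around untouched.
    pass

class TemperatureReading(Message):   # hypothetical concrete message
    def __init__(self, celsius):
        self.celsius = celsius

channel = Queue()                    # generic transport, no knowledge of content
channel.put(TemperatureReading(21.5))

msg = channel.get()
if isinstance(msg, TemperatureReading):   # only the receiver interprets it
    print("temperature:", msg.celsius)
-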
Yes, the code comments are not correct. The posted code was a quick experiment. I copied the code from the TCP_NoDelay.vi referenced above and simply modified the Call Library Node configuration; I didn't take the time to update the comments. I am trying to call the getsockopt() function. I did wire the output to a 32-bit integer, which should be enough space for the LINGER structure, which is 32 bits in length. When I run this VI I always get an error returned. I have successfully called setsockopt() to set the LINGER options, but I am not sure why the call to getsockopt() fails every time. I have experimented with wiring something to every input parameter, not wiring anything to the input parameters for arguments that are outputs, variations in the type of data I wire, etc.
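For anyone following along, the same call is easy to sanity-check outside the Call Library Node. Python's socket module wraps getsockopt() directly; the sketch below assumes Windows, where the LINGER structure is the 4-byte pair of unsigned shorts mentioned above (on Linux it comes back as two 32-bit ints, so "ii" and an 8-byte buffer instead):

import socket, struct

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Set SO_LINGER (on, 5 second timeout) using the Windows u_short layout.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("HH", 1, 5))

# Read it back: ask for 4 bytes, the size of the Windows LINGER structure.
raw = sock.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 4)
onoff, linger = struct.unpack("HH", raw)
print("l_onoff =", onoff, "l_linger =", linger)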