Everything posted by ShaunR
-
Rolf's given you the technical reasons and history. So ignoring cross-platform (a big one for me, especially as I'm now moving away from Windows), performance, and falling to pieces when IT pushes out security updates, here are some real-world, LabVIEW-specific examples: thread starvation, obsolescence, and deadlocks (see note at bottom). Like Rolf says: when they work, fine. When they don't, they are self-contained bundles of nightmares that you can only remove (if you can get into the IDE). I just prefer not to put them in in the first place.
-
are Boolean expression stopped like they would be in C ?
ShaunR replied to ArjanWiskerke's topic in LabVIEW General
Memory [re]allocations in diagrams are expensive, and Boolean operator performance is trivial in comparison. If you have to worry about a few nanoseconds of fundamental operations, then a general-purpose desktop is not the platform to use.
-
VISTA - Professional Software Engineering Tools for LabVIEW
ShaunR replied to crelf's topic in User Interface
I think you should. This thread prompted me to hunt around to see if I could find a Eurotherm Controls serial driver (Modbus and EI-Bisynch) that I wrote thousands of years ago. It was the first driver I ever wrote and my first public submission to the world. I think I even submitted it to the NI drivers library, but it's not there now - there is a newer one there without the EI-Bisynch. I guess it was my first vapourware.
-
VISTA - Professional Software Engineering Tools for LabVIEW
ShaunR replied to crelf's topic in User Interface
That's really sad. It probably means that it wasn't modular enough to be re-purposed or adapted to changing requirements. I expect your software has come a long way since then. I think it's lying that reuse is 98%, though I must admit I don't use the Gateway thing. It may have changed since I last used it, some 5 years ago, but it was more of a traceability function, which isn't all that helpful for day-to-day programming, and customers are only really interested in acceptance tests. As an internal tool I can see it might have benefits, but it doesn't provide what I need on a day-to-day basis. Like your tool, I have software that does the bits of it that are interesting, such as coverage, but I don't care about complexity or pretty graphs or a myriad of other metrics that are just numbers to "ooh" and "ahh" over. I care about broken VIs, orphans, re-entrant or not, etc. - the things that affect performance and operation rather than the accountancy. Complexity or nesting depth is a poor indication of that. However, the tool that I've had for several years now is pluggable and extensible, so it hasn't suffered the same fate even though it started out as pretty much the same sort of tool. In fact, 6 months ago it had a face-lift from the standard millennial white front panel to a moody dark one, which is all the rage now.
-
Eliminate that and see if it goes away (simulate the data if you have to). There is a very good reason why .NET (and its grandpappy, ActiveX) is banned from all my LabVIEW projects, and this stinks of .NET threading and the LabVIEW root loop.
-
Actually, there are company overheads: your mortgage, car payments, medical insurance, food and so on. These are taken into account (I hope) when you decide on your hourly rate. If this is your first foray into contracting, I highly recommend quoting on a time-and-expenses basis and providing a preliminary project plan with deliverables and milestones for them to sign off. Fixed-price quoting is very dangerous for the inexperienced. If you are fixed-price quoting, then your "profit" is on the hardware and services as a value-added supplier (usually 5-10% of cost), plus your hourly rate, which you make up based on how much you think you need to live your lifestyle and how much the market can bear. There is no overall "project profit percentage". When you negotiate, you negotiate less time and less hardware provisioning until you and the customer agree on the minimum effort and hardware supplies (from you) required to do the task. If that is still not good enough, then your extra "margin" is on the hardware, so you can go from, say, 10% to 5%. If that is still not good enough - walk away, because you aren't making any money!
-
are Boolean expression stopped like they would be in C ?
ShaunR replied to ArjanWiskerke's topic in LabVIEW General
Yes - fundamentally impossible, from your perspective. LabVIEW (or more specifically, G) isn't an interpreted language like JavaScript or Python, and it can evaluate the components of your expression in parallel, unlike procedural languages. So "b OR c" might be evaluated 10 hrs before "a", since an expression, or a component of an expression, is evaluated when the data is available, not when a program pointer steps to the next line - that's dataflow for you.
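To illustrate the contrast, here is a minimal Python sketch (Python short-circuits the same way C does) of the guarantee that dataflow does not give you:

```python
def check_b():
    # Side effect lets us observe whether this ever runs.
    print("check_b evaluated")
    return True

a = False
# In a procedural language, 'and' short-circuits: since a is False,
# check_b() is never called and nothing is printed. In G there is no
# such guarantee - both inputs of an AND node are computed (possibly
# in parallel) as soon as their data arrives.
if a and check_b():
    print("both true")
```
-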
Indeed. However, I would suggest using a key file. If you already have SSH access (which I presume you do), then you just point FileZilla to the private key file. You can then disable username/password authentication completely on the server. This is far more secure, but difficulties arise with many (hundreds) of users. However, having many users requiring SSH access is not a usual use-case in LabVIEW.
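For what it's worth, the same key-file idea scripted in Python - a sketch assuming the third-party paramiko library; the host, username and paths are placeholders:

```python
import paramiko

client = paramiko.SSHClient()
# Demo only - in production, verify host keys instead of auto-adding.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Authenticate with the private key file, not a password.
client.connect("target.example.com", username="admin",
               key_filename="/home/me/.ssh/id_rsa")
sftp = client.open_sftp()
sftp.get("/var/log/app.log", "app.log")  # download a file over SFTP
sftp.close()
client.close()
```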
-
I haven't played with a myRIO or other Linux RT (yet!), so I might be off track. But if it has SSH, then it probably has SFTP. Edit: Sometime later, after searching to see if there was a VMware image of NI Linux Real-Time somewhere (silly fool for even thinking this might be a thing), I came across the Linux RT whitepaper. So there ya go.
-------------------
Incidentally, it seems that the document also states that FTP was removed (not included?) in Linux RT in LabVIEW 2013 (or maybe that was just when they added WebDAV). [Except the SFTP, of course!]
-
I think that has been superseded by sc.exe (command line) and Srvinstw.exe (GUI version), which means you don't have to hack the registry any more.
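For example, a sketch of the sc.exe route (wrapped in Python purely for illustration; the service name and binary path are placeholders):

```python
import subprocess

# Equivalent to running:  sc create MyLabVIEWService binPath= C:\apps\my_service.exe start= auto
# Note sc.exe's quirky syntax: the '=' belongs to the option name and
# the value follows as a separate token.
subprocess.run(
    ["sc", "create", "MyLabVIEWService",
     "binPath=", r"C:\apps\my_service.exe",
     "start=", "auto"],
    check=True,
)
```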
-
I can find no statement to that effect (I don't have a 2014 RIO; only 2013). This was published on Oct 16, 2015, so hopefully it is just an oversight, as it references a 2011 document.
-
There is an unsecured FTP server on RT devices that is enabled by default. Do people turn this off when deploying as a habit or procedure?
-
I was just checking to see if it was still the main repository. Can we put a link on the CR page?
-
Is this no longer on Bitbucket? Where is the repository now?
-
Point and click FTW. Typing is so 1980s. Seeing as you went for a Rube Goldberg footswitch solution, maybe it would be fun to try a head-tracking one? There are some home-made ones around, as well as off-the-shelf ones used for gaming. You only need 3 LEDs and a webcam. There is something very appealing about the thought of using a nod as "Return" and a head-shake for "Escape". You could just look up slightly to put the caps lock on, maybe?
-
Am I right in thinking you are a full-fingered typist? I.e., you type with all fingers, like a secretary, with hand position as static as possible? To activate, say, CTRL+R, you would then twist your hand and separate the pinky? I was looking at how I type. I'm kind of a 3-fingered (Neanderthal) typist (index, middle and thumb for the space bar). This means that when I operate the control key I move my whole hand, and my pinky is bent downwards/inwards but parallel to the others. So my pivot point is the elbow, controlled by the biceps and shoulders rather than the wrist. I wonder if it would benefit you to just swap the ALT and CTRL keys to bring CTRL closer to your hand position?
-
Go for it. Hmm. The version in the CR is later than the one in the repository. Maybe drjdpowell is building off his own branch?
-
Not at all. Just saying it's been discussed before and there was no consensus. People were more worried about NULL.
-
It was discussed in the past. The problem is that the JSON spec forbids it specifically.
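Assuming the "it" here is NaN/Infinity values (the usual sticking point alongside NULL), a quick Python illustration of what the spec disallows - note that Python's json module is non-compliant by default:

```python
import json

# The JSON spec has no NaN/Infinity tokens, yet this emits 'NaN':
print(json.dumps(float("nan")))

# allow_nan=False enforces the spec and rejects the value instead.
try:
    json.dumps(float("nan"), allow_nan=False)
except ValueError as e:
    print("spec-compliant encoder rejects it:", e)
```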
-
Outside of UX, are there any known issues with things like URL mapping to the LabVIEW code (sanitisation and URL length) or the API keys (poor keyspace)? Are LabVIEW developers even aware that they may need to sanitise and sanity-check URL parameters that map to front panel controls (especially strings)?
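To make the concern concrete, here is a hypothetical sketch (plain Python, not a LabVIEW API) of the kind of whitelist-and-length checking being asked about; ALLOWED_KEYS and MAX_LEN are illustrative:

```python
from urllib.parse import urlparse, parse_qs

MAX_LEN = 64                       # reject absurdly long values
ALLOWED_KEYS = {"channel", "gain"}  # only parameters you actually map

def sanitise(url: str) -> dict:
    params = parse_qs(urlparse(url).query)
    clean = {}
    for key, values in params.items():
        if key not in ALLOWED_KEYS:
            continue               # silently drop unexpected parameters
        value = values[0]
        if len(value) > MAX_LEN:
            raise ValueError(f"{key} too long")
        if not value.isalnum():    # never pass raw strings to a control
            raise ValueError(f"{key} contains unexpected characters")
        clean[key] = value
    return clean

# The injection attempt is dropped; only vetted values get through.
print(sanitise("http://host/api?channel=ai0&gain=10&evil=<script>"))
```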
-
As an addendum: do you have specific issues with using the NI web server on public networks? Can you detail the specifics of why it is "woefully incapable" for us?
-
Setting the receive buffer isn't a help in the instance of rate limiting. There is a limit to how many packets can be buffered, and they stack up. If you send faster than you can consume (by that I mean read them out through LabVIEW), you hit a limit and the TCPIP stops accepting - in many scenarios never to return again, or locking your application for a considerable time after they cease sending. For SSL, the packets (or records) are limited to 16KB, so the buffer size makes no difference. Therefore it is not a solution at all in that case - that is probably outside the scope of this conversation, but it does demonstrate it is not a panacea.

I'm not saying that setting the low-level TCPIP buffer is not useful. On the contrary, it is required for performance. However, allowing the user to choose a different strategy, rather than "just go deaf when it's too much", is a more amenable approach. For example, it gives the user an opportunity to inspect the incoming data and filter out real commands, so that although your application is working real hard just to service the packets, it is still responding to your commands.

As for the rest, Rolf says it better than I. I don't think the scope is bigger. It is just a move up from the simplified TCPIP reads that we have relied on for so long. Time to get with the program that other languages' comms APIs did 15 years ago. I had to implement it for the SSL (which doesn't use the LabVIEW primitives) and have implemented it in the Websocket API for LabVIEW. I am now considering also putting it in transport.lvlib. However, no-one cares about security, it seems, and probably even fewer use transport.lvlib, so it's very much a "no action required" in that case. Funnily enough, it is your influence that prompted my preference to solve as much as possible in LabVIEW, so in a way I learnt from the master - but probably not the lesson that was taught.

As to performance, I'm ambivalent bordering on "meh". Processor, memory and threading all affect TCPIP performance, which is why, if you are truly performance oriented, you may go to an FPGA. You won't get the same performance from a cRIO's TCPIP stack as from even an old laptop, and that assumes it comes with more than a 100Mb port. Then you have all the Nagle, keep-alive etc. that affects what your definition of performance actually is. Obviously it is something I've looked at, and the overhead is a few microseconds for binary and a few tens of microseconds for CRLF on my test machines. It's not as if I'm using IMMEDIATE mode and processing a byte at a time.
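For reference, the low-level knob under discussion is the OS receive buffer. A minimal Python sketch (LabVIEW exposes the same socket option differently) showing both how it is set and why you should check what the OS actually gave you:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a ~1 MiB kernel receive buffer. This helps throughput, but
# as argued above it does not save you if the application layer can't
# drain it fast enough.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
# The OS may clamp (or on Linux, double) the request; read it back.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```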
-
Wall Street, maybe?
-
I want to secure myself against you, your dog, your friends, your company, your government and your negligence. (Obviously not you personally, just to clarify.) How much effort I put into that depends on how much value I put on a system, the data it produces/consumes, the consequences of compromising its integrity, and the expected expertise of the adversary. If I (or my company/client) don't value a system or the information it contains, then there is no need to do anything. If my company/client says it's valuable, then I can change some parameters and protect it more without having to make architectural concessions. The initial effort is high. The continued effort is low. I have the tools; don't you want them too? When your machine can take arms or legs off, or can destroy thousands of pounds worth of equipment when it malfunctions, people very quickly make it your problem. "Not my problem" is the sort of attitude I take IT departments to task with, and I don't really want to group you with those reprobates. Your last sentence is a fair one, though. However, web services are slow and come with their own issues. By simply deploying a web server you are greatly increasing the attack surface, as web servers have to be all things to all people and are only really designed for serving web pages. If you are resigned to never writing software that uses Websockets, RTSP or other high-performance streaming protocols, then you can probably make do. I prefer not to make do but to make it do - because I can. Some are users of technology and some are creators. Sometimes the former have difficulty understanding the motives of the latter, but everyone benefits.
-
The issue is the LabVIEW implementation (in particular, the STANDARD or BUFFERED modes). The actual socket implementations are not the issue; it is the transition from the underlying OS to LabVIEW where these issues arise. Standard mode waits there until the underlying layer gets all 2GB, then dies when returning from the LabVIEW primitive. The solution is to use immediate mode to consume the data in whatever size chunks the OS throws up to LabVIEW. At this point you have to implement your own concatenation to maintain the same behaviour and convenience that STANDARD gives you, which requires a buffer (either in your TCPIP API or in your application). Once you have consigned yourself to the complexity increase of buffering within LabVIEW, the possibility to mitigate becomes available, as you can now drop chunks either because the data is coming in too fast for you to consume or because the concatenated size would consume all your memory. I'd be interested in other solutions, but yes, it is useful (and effective).

The contrived example and more sophisticated attempts are ineffective with (1). If you then put the read VI in a loop with a string concatenate in your application, there is not much we can do about that, as the string concatenate will fail at 2GB on a 32-bit system. So it is better to trust the VI if you are only expecting command/response messages, and set the LabVIEW read buffer limit appropriately (who sends 100MB command/response messages?). In a way you are allowing the user to handle it easily at the application layer with a drop-in replacement of the read primitive; it just also has a safer default setting. If a client maliciously throws thousands of such packets, then (2) comes into play. Then it is an arms race of how fast you can drop packets against how fast they can throw them at you. If you are on a 1Gb connection, my money is on the application. You may be DOS'd, but when they stop, your application is still standing.

As said previously, it is a LabVIEW limitation. So effectively this proposal does give you the option to increase the buffer after connection (from the application's viewpoint). However, there is no option/setting in any OS that I know of to drop packets or buffer contents; you must do that by consuming them. There is no point in setting your buffer to 2GB if you only have 500MB of memory.
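A minimal sketch of that strategy in Python rather than G: read in immediate-mode-sized chunks, do the concatenation yourself, and drop data once a self-imposed cap is hit instead of letting the buffer grow without bound. The cap and chunk size are illustrative:

```python
import socket

MAX_BUFFER = 1 << 20          # 1 MiB application-side cap (illustrative)
CHUNK_SIZE = 65536            # take whatever is available, up to 64 KiB

def bounded_read(sock: socket.socket, buffer: bytearray) -> bool:
    """Append one chunk to buffer; returns False if the peer closed."""
    chunk = sock.recv(CHUNK_SIZE)   # "immediate mode": no fixed-size wait
    if not chunk:
        return False                # connection closed by peer
    if len(buffer) + len(chunk) > MAX_BUFFER:
        # Data is arriving faster than the consumer can drain it:
        # drop the chunk rather than die at the 2GB string limit.
        return True
    buffer.extend(chunk)
    return True
```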