Everything posted by ShaunR

  1. <mounts soap-box> Indeed. They are all amoral (as an entity). There is no conscience. No guilt. No compassion. No moral obligation to "do the right thing" or indeed any sort of social contract toward the community outside those imposed by law. Whilst the beings that constitute the corporate entity may have all of these, in itself it has only one objective - to increase its market share towards a monopoly via the vehicle of profit. The Victorians knew this. That is why they made a definitive distinction between the corporate and the social. The banks were the guardians of the corporate; the government, the guardian of the social. The government kept the banks in line with laws, and the people kept the government in line with votes. What happens when corporate merges with government? Either the corporations MAKE the laws and your/our vote is worth as much as a chocolate fire-guard, or we all start calling each other "comrade" and espouse the merits of the motherland. <climbs back off the soap-box>
  2. The technology to sort and segregate trash has been around a long time (even recycling tyres). But why spend shareholders' investments on machines when you can get the peasants to do it for free?
  3. I'm not quite with you on this one. The storage would be a part of the calculator AE and there would be no "pseudo FGV". FGV stands for "Functional Global Variable" and the functions would be +/-, which is why I don't really discriminate between a "Get/Set FGV" and an AE. As long as the functions are atomic, then the +/- FGV would complete its operation (read the value, operate on it, then output) and the other operation could not read the stored value whilst this is happening (as it can with globals). The race condition that FGVs address is that the value cannot be read in other parts of the code until the current function has completed. So in your example, the result of 2+2-1 (with two VIs in parallel) will always be 3 once both have been executed. With a global it could be 5. Your example, however, uses functions which are commutative. So when you lay down your Calc VIs and do an add and subtract in parallel, you cannot guarantee which of the VIs will be executed first by LabVIEW, but after they both have executed, the answer will be correct. If the order is important (i.e. the operations are not commutative) then you have a different type of race condition that has nothing to do with access timing to the underlying stored value (which is what globals suffer from). However, unlike globals, FGVs have an error terminal, so if the sequence is important, they can be sequenced via the terminals (e.g. add then subtract).
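LabVIEW diagrams don't paste into a post, so here is the same read-modify-write race sketched in Python (an analogy of my own, not LabVIEW itself): the lock plays the role of the FGV's non-reentrant boundary, and dropping it gives you global-variable behaviour.

      import threading

      class CalcFGV:
          """Analogy of a +/- Action Engine: the lock makes each
          read-modify-write atomic, like a non-reentrant FGV call."""
          def __init__(self, value=2):
              self._lock = threading.Lock()
              self._value = value

          def add(self, x):
              with self._lock:       # nothing can read mid-update
                  self._value += x
                  return self._value

          def subtract(self, x):
              with self._lock:
                  self._value -= x
                  return self._value

      calc = CalcFGV(2)
      t1 = threading.Thread(target=calc.add, args=(2,))
      t2 = threading.Thread(target=calc.subtract, args=(1,))
      t1.start(); t2.start(); t1.join(); t2.join()
      print(calc._value)             # always 3, whichever ran first

Without the lock, two parallel callers can both read the old value and one update is lost - which is exactly the access-timing race that globals suffer from.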
  4. UDP was quite extensively covered recently here. You can also download transport.lvlib, which handles variable-length UDP datagrams.
  5. The singleton pattern is a method to resolve a common problem associated with OOP when a single object is required to be instantiated and referenced, rather than a new instance of an object (the default OOP instantiation - Grandma loves sucking eggs). Native LV does the opposite by default (a VI is a singleton - a single object referenced wherever it is placed, system wide). No design pattern is required as it is implicit to the language. If you don't want this behaviour, then it can be set to "re-entrant". This aspect, however, is a side-show when talking about FGVs vs globals. Where the differences between FGVs and globals really lie is in "state", not "data". In non-dataflow languages state has to be managed, and an answer to the icky state management problem was OOP. In the dataflow paradigm, state is implicit in the language. However, sometimes state managed by the [LabVIEW] language is not sufficient or appropriate. So, when it is advantageous to do so, we specifically design an object to store state (a FGV). The "get/set version of pseudo FGV" [sic] or "Action Engine" is the native LabVIEW method of replicating OOP-style objects where you encapsulate the state of the data and manipulate it with methods. Global variables cannot maintain state (only data). Neither can they be "sequenced" to maintain state via dataflow. This is the advantage of FGVs over globals. Singleton behaviour is just the language specifics being taken advantage of.
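For comparison, this is the boilerplate an OOP language needs to get what a plain (non-reentrant) VI gives you for free - a Python sketch of my own, purely for illustration:

      class Counter:
          """Classic OOP singleton: every instantiation returns the
          same object, mimicking a non-reentrant VI's single state."""
          _instance = None

          def __new__(cls):
              if cls._instance is None:
                  cls._instance = super().__new__(cls)
                  cls._instance.value = 0   # state initialised once
              return cls._instance

      a = Counter()
      b = Counter()
      a.value += 5
      print(b.value)   # 5 - a and b are the same object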
  6. Well, there was a Windows kernel update just shy of 1 month ago, but no virus scanner update for 2 months. I noticed that Windows Defender AND the separate virus scanner's real-time scanning are enabled on all of them. So I've turned off "real-time" checking and Defender on the PCs temporarily to see if it goes away. There are various murmurs around the internet on this subject and the usual response is "just live with it" (not acceptable for me!).
  7. Nicely explained by Rolf (including the caveats). The way I "protect" myself from getting pushed into the corner is to have a few choice buttons or a keyboard combination that brings up a password-protected interface (it can either be an "Admin" area or just completely hidden until a certain key combination is pressed). Once in, I usually give the options to return to the Windows shell (requires reboot), start an Explorer window (only way to look at files) and re-enable CTRL+ALT+DEL (the last two don't require rebooting). Just booting into LabVIEW as the shell will usually flummox most operators/users. Disabling CTRL+ALT+DEL means that you can't get to Task Manager, from which you can use the run command.
  8. My favourite trick is booting into the LV program as the shell and disabling CTRL+ALT+DEL (sketched below).
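For anyone wanting to try the trick, these are the registry values usually involved. A hedged sketch only (the kiosk EXE path is made up and the exact keys can vary between Windows versions), so keep a back door as per the post above before touching the shell:

      import winreg

      # Replace the Windows shell with the built application.
      # Needs to run elevated; the EXE path is hypothetical.
      APP = r"C:\MyKiosk\MyKiosk.exe"

      winlogon = winreg.OpenKey(
          winreg.HKEY_LOCAL_MACHINE,
          r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon",
          0, winreg.KEY_SET_VALUE)
      winreg.SetValueEx(winlogon, "Shell", 0, winreg.REG_SZ, APP)
      winreg.CloseKey(winlogon)

      # Disable Task Manager (the useful part of CTRL+ALT+DEL).
      policies = winreg.CreateKey(
          winreg.HKEY_CURRENT_USER,
          r"Software\Microsoft\Windows\CurrentVersion\Policies\System")
      winreg.SetValueEx(policies, "DisableTaskMgr", 0, winreg.REG_DWORD, 1)
      winreg.CloseKey(policies)

Setting DisableTaskMgr back to 0 re-enables Task Manager without a reboot; restoring Shell to "explorer.exe" needs the reboot - hence the options on the hidden admin screen.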
  9. Completely dissimilar machines. One has a 450GB SSD (laptop), one has a 500GB Seagate (desktop), one has a 750GB Western Digital (desktop), one has a 120GB Western Digital (laptop). Mixtures of Win7/LabVIEW x32 and x64 and i7, i5 and dual-core Pentiums (dissimilar hardware). I have a suspicion it is a Windows update or maybe a virus scanner update (since all LabVIEW installations are over 8 months old).
  10. Recently (I would say in the last month or so) I've been getting this strange error from the IDE when opening projects and VIs from the LabVIEW "Getting Started" screen (from any drive, including C:\). It doesn't happen every time (maybe 1 in 20) and after pressing "retry" it will load the file. It happens in all LV versions and on different machines. I know there were murmurs a while back about seeing this in LV 2010 (with no resolution from what I could tell) but, like I said, one of the machines has a 2-year-old copy of LV2009 on it (and that's been rock-solid until now). Anyone else experiencing this or knows why it happens?
  11. Your signal has a 20ms period, i.e. 1/0.02s = 50Hz, which is mains frequency. That's mains leakage (either in your source or in the USB supply). Change the source to a battery and see if it goes away. If it doesn't, send the unit back and get a replacement.
  12. Oh, I don't know. Quite possible with LabVIEW distributions. I was talking more in the context of IEC 61850 rather than the OP's requirement (but I can see that I wasn't clear on that). It seems, from a casual perusal, that it (IEC 61850) is targeting similar requirements to EtherCAT (the 4ms deterministic messaging - the Fast Transfer of Events requirement, for example). In this scenario, I can see UDP multicast being very useful, and much easier for a device to "register" for messaging or "listen" for status events rather than the controller opening specific connections (and the overheads in the controller associated with copying messages to multiple clients).
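At the socket level, "registering" is just joining a multicast group - sketched here in Python with a made-up group address and port. The controller sends one datagram and every member receives it, with no per-client connections to manage:

      import socket
      import struct

      GROUP, PORT = "239.1.2.3", 5000   # hypothetical group/port

      # A device "registers" for status events by joining the group.
      rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      rx.bind(("", PORT))
      mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                         socket.inet_aton("0.0.0.0"))
      rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

      # The controller transmits once; all group members receive it.
      tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      tx.sendto(b"status: breaker open", (GROUP, PORT))
      print(rx.recvfrom(1500)[0])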
  13. There is a VI in "\vi.lib\utilities\lvdir.llb" called "Get Base LabVIEW Directory.vi". With it you can get the common/user locations that LabVIEW uses (like "C:\Users\<user_name>\Documents\LabVIEW Data").
  14. Indeed. However, that "tried and proven" doesn't extend to multicast, and a control/monitoring system could definitely take advantage of that. That said, this type of requirement is what EtherCAT is for.
  15. From what I can tell, most of those are "antonyms", although it's difficult to classify "commander toady".
  16. Whilst I think this is a great idea, I am somewhat skeptical that the back-end issue reporting/analysis is best written in LabVIEW (PHP/JavaScript and an Apache server with MySQL would be my first choice). Perhaps I live on the wrong forum. However, if you need any help with the DB stuff... I'm in.
  17. On the surface (from your description), it is a "many-to-one". So that really means a queue. My generic approach is to have multiple autonomous servers/processes/modules handling their own acquisition and placing data (after computation if necessary, but sometimes to another module for crunching) onto a single "logging" queue. This "logging" queue (running also as a separate module/process) then saves data to a SQLite DB whenever there is something on the queue. The DB can be queried in any which way you choose (between times, for a channel, etc.). Of course, throughput is a consideration (how fast are you logging?), in which case the DB could be TDMS; however, it becomes much more difficult to extract the info once saved and you end up trading off resolution for convenience.
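The shape of it, sketched in Python rather than LabVIEW (the table and channel names are invented for illustration):

      import queue
      import sqlite3
      import threading
      import time

      log_q = queue.Queue()   # the single "logging" queue

      def acquirer(channel):
          """Stands in for an autonomous acquisition module."""
          for i in range(3):
              log_q.put((time.time(), channel, i * 1.5))

      def logger(db_path):
          """The logging module: drains the queue into SQLite."""
          db = sqlite3.connect(db_path)
          db.execute("CREATE TABLE IF NOT EXISTS log"
                     "(t REAL, channel TEXT, value REAL)")
          while True:
              item = log_q.get()
              if item is None:   # sentinel: shut down
                  break
              db.execute("INSERT INTO log VALUES (?,?,?)", item)
              db.commit()
          db.close()

      t = threading.Thread(target=logger, args=("data.db",))
      t.start()
      for ch in ("temp", "pressure"):
          threading.Thread(target=acquirer, args=(ch,)).start()
      time.sleep(0.5)    # let the acquirers finish (sketch only)
      log_q.put(None)
      t.join()

Querying is then just SQL, e.g. SELECT value FROM log WHERE channel='temp' AND t BETWEEN ? AND ?.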
  18. You are loading the ENTIRE FILE and then trying to transmit it (all 600MB, presumably) in one go. To remove the error you need to send a maximum of 1500 bytes of the file A BIT AT A TIME: read the first 1500 bytes, write that to the UDP port, then read the next 1500 bytes and write that... and so on - a method which I demonstrated to you in the image posted for Bluetooth earlier (the code for which you can download from the Code Repository). Short of writing it for you (and it is not my homework), there is not a lot else I can tell you. Perhaps someone else can explain it better than I.
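In text form the loop is simply this (Python, since a block diagram won't paste here; the filename, host and port are placeholders):

      import socket

      CHUNK = 1500                  # stay within a typical MTU
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

      with open("bigfile.dat", "rb") as f:      # placeholder name
          while True:
              chunk = f.read(CHUNK)    # read a bit at a time...
              if not chunk:            # ...until end of file
                  break
              sock.sendto(chunk, ("192.168.1.50", 6000))  # placeholder

Note the file never has to fit in memory - only one 1500-byte chunk at a time does.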
  19. It means a piece. A bit. A portion.......of the file.
  20. Yes. This is similar to the VI that is used in the "OPP Push File" in the LAVA CR. It opens a file and reads "chunks" that it then sends (in this case) via Bluetooth. You need to do the same, except with UDP rather than Bluetooth.
  21. Just read 65k bytes (or 1500 bytes is better, for the reasons Rolf outlined) from the file at a time and write it using the UDP Write. However, as GregR stated, it is not guaranteed to be received in the correct order, or indeed for a chunk to be received at all! So that's not a lot of good for files, but usually acceptable for audio or video where a lost frame or two doesn't matter too much. The easy solution, as others are saying, is to use TCP/IP where all the ordering and reassembly is handled for you (as well as other things UDP doesn't do out of the box, such as congestion control, re-transmission of lost packets etc.). If you really must use UDP then you will have to code all that yourself (unless missing or incorrectly ordered file chunks are acceptable - for TDMS they wouldn't be). There are a few examples of reliable UDP (RDP), notably UDT and NORM. I am not aware of anyone doing this in LabVIEW, however, since it is just easier to use TCP/IP.
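As a flavour of what "coding it all yourself" means, the bare minimum is a sequence number in every datagram so the receiver can detect loss and restore order. A toy Python sketch of the framing only (no re-transmission, which is the hard part):

      import struct

      def frame(seq, chunk):
          """Prefix each chunk with a 4-byte sequence number."""
          return struct.pack("!I", seq) + chunk

      def unframe(datagram):
          seq = struct.unpack("!I", datagram[:4])[0]
          return seq, datagram[4:]

      # Receiver side: hold out-of-order chunks, emit in sequence.
      pending, expected = {}, 0

      def deliver(datagram, out):
          global expected
          seq, chunk = unframe(datagram)
          pending[seq] = chunk
          while expected in pending:   # flush any run we now have
              out.append(pending.pop(expected))
              expected += 1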
  22. No. I'm saying the payload cannot exceed 65k. If you can send 1 payload (of 65k) every 10ms, then that would be 6.5 MB/sec (100 payloads/sec x 65kB).
  23. Under Windows, the default UDP datagram size is 1500 bytes (i.e. the default MTU). The maximum it can be is 65,535 bytes (you can set it to more than this, but it will still be limited to 65k). This is because the length field in the UDP header is a U16 and cannot represent more than 65k. See UDP, User Datagram Protocol.
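You can watch the cap bite with a quick check (Python; the loopback address and port are arbitrary) - the OS refuses any datagram whose size can't be represented in that U16 length field:

      import socket

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      try:
          # 70,000 bytes is over the 65k limit: the OS refuses it.
          sock.sendto(b"\x00" * 70000, ("127.0.0.1", 9999))
      except OSError as err:
          print(err)   # e.g. "Message too long"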
  24. Indeed. A good example. The "never use globals, they cause race conditions" is a one-line, easily conveyable axiom that you should adhere to until you can put forward convincing arguments for using them and demonstrate that you understand the implications/pros/cons. The "no bigger than one screen" is the same. The only major difference (for me) is that I have not come across a single scenario where it cannot be achieved or, even, where it is not an improvement. Show me an example where you think it is better to have a sprawling diagram and I'll attempt to "improve" it. Sitting on the fence just hurts your arse.

"Single screen coding" are words you put in my mouth. I have already intimated that I "aspire" to 1/4 (of my) screen for diagrams. That should still fit on yours, even at 640x480.

Not really a code smell. I iteratively develop. So I design in a very much top-down manner but develop bottom-up. As each increasingly higher layer gets laid down, so natural re-use appears and gets encapsulated (within the framework of the design). Each bottom segment goes through a few iterations before it is "worthy" of becoming a module. That happens module-by-module, layer-by-layer. Eventually you get to the UI and all hell breaks loose.

I came to the conclusion quite a while ago that our brains are wired differently (not in a bad way, just different). For me it is just breaking the code into bite-sized chunks that I can paw over. I find it much easier to analyse a small VI and only have to remember its terminals when thinking about interactions with the next. It is the same as only having to remember the API interface, not the details of the insides of the API (so yes, encapsulation). To me a diagram is more akin to a UML model than code, and the sub-VIs are layers within the model.

So (learning from Crelf), what is a "typical" LabVIEW user? I think a LabVIEW starter thinks in this way. But by (initially) requiring less than one screen's worth of spaghetti, they eventually start to see the patterns emerging that experienced coders see in the mind's eye (this, I think, is the beauty of LabVIEW). So yes, the sub-VIs may not be encapsulating correctly or "as the book says", but from module to module, project to project, they soon start realising that they have done that before or that "the VI I made last week almost did that". After a while they have their own little set of "utils" that appear in every project they work on. If you allow them to just throw code at several screens' worth, that never happens (couldn't resist). Like I said: bite-sized chunks, and the modularity/re-use drops out.

Good code is code that works, flawlessly. That's all the customer is interested in. It doesn't matter if it is one diagram that fits on a whole roomful of screens. I am proffering that more than one screen is shoddy programming. Why? Because I have to run the whole screen in debug to see one bit of it (and assume I don't know which bit I want to see yet). If you have 4 while loops on one screen (that I have to keep scrolling around, or it keeps jumping around to see where it is executing) it makes it so much harder to debug. Compare that, say, to 4 icons in the middle of the screen. Instant 4 breakpoints. Instant 4 probes (i.e. their front panels) and I don't need to go chasing dots around with the scroll bars.

Plenty of room for notes and comments and (if you have sanely labelled, used distinguishing icons and put a tiny bit of thought into the design) I can identify identical functions at a glance and have a pretty good guess which sub-VI I'll probably be looking into first. I can also run them separately, two at a time, or any combination of the 4. I can wait until the FP controls have some data in them that I'm interested in, stop it, and run it over and over again whilst looking inside. Oh, and it looks better.