ShaunR

Members · 4,856 posts · 293 days won
Everything posted by ShaunR

  1. Completely dissimilar machines. One has a 450GB SSD (laptop), one has a 500GB Seagate (desktop), one has a 750GB Western Digital (desktop) and one has a 120GB Western Digital (laptop). Mixtures of Win7 and LabVIEW x32/x64 on i7, i5 and dual-core Pentium CPUs (dissimilar hardware). I suspect a Windows update or perhaps a virus-scanner update, since all of the LabVIEW installations are over 8 months old.
  2. Recently (I would say in the last month or so) I've been getting a strange error from the IDE when opening projects and VIs from the LabVIEW "Getting Started" screen (from any drive, including C:\). It doesn't happen every time (maybe 1 in 20), and after pressing "Retry" it will load the file. It happens in all LabVIEW versions and on different machines. I know there were murmurs a while back about seeing this in LV 2010 (with no resolution, from what I could tell), but as I said, one of the machines has a 2-year-old copy of LV 2009 on it, and that had been rock-solid until now. Is anyone else experiencing this, or does anyone know why it happens?
  3. Your signal has a 20 ms period (50 Hz). That's mains leakage (either in your source or in the USB supply). Change the source to a battery and see if it goes away. If it doesn't, send the unit back and get a replacement.
  4. Oh, I don't know. Quite possible with LabVIEW distributions. I was talking more in the context of IEC 61850 than the OP's requirement (I can see that I wasn't clear on that). It seems, from a casual perusal, that IEC 61850 targets similar requirements to EtherCAT (the 4 ms deterministic messaging of the Fast Transfer of Events requirement, for example). In that scenario I can see UDP multicast being very useful: it is much easier for a device to "register" for messaging or "listen" for status events than for the controller to open specific connections (with the associated overhead in the controller of copying messages to multiple clients). Something like the sketch below.
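A rough sketch of that "register for messaging" idea, in Python rather than LabVIEW (the multicast group and port below are made-up values):

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5000  # hypothetical multicast group/port

# Device side: "register" for status events by joining the multicast group.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, addr = rx.recvfrom(1500)   # one datagram per status event

# Controller side: a single send reaches every registered device; no
# per-client connections and no copying of messages to multiple clients.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"status: breaker open", (GROUP, PORT))
```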
  5. There is a VI in "\vi.lib\utilities\lvdir.llb" called "Get Base Labview Directory.vi". With it you can get the common/user locations that LabVIEW uses (like "C:\Users\<user_name>\Documents\LabVIEW Data").
  6. Indeed. However, that "tried and proven" doesn't extend to multicast, and a control/monitoring system could definitely take advantage of it. That said, this type of requirement is what EtherCAT is for.
  7. From what I can tell, most of those are "antonyms", although it's difficult to classify "commander toady".
  8. Whilst I think this is a great idea, I am somewhat skeptical that the back-end issue reporting/analysis is best written in LabVIEW (PHP/JavaScript and an Apache server with MySQL would be my first choice). Perhaps I live on the wrong forum. However, if you need any help with DB stuff... I'm in.
  9. On the surface (from your description) it is a "many-to-one", so that really means a queue. My generic approach is to have multiple autonomous servers/processes/modules handling their own acquisition and placing data (after computation if necessary, though sometimes it goes to another module for crunching) onto a single "logging" queue. This logging queue (also running as a separate module/process) then saves data to an SQLite DB whenever there is something on the queue. The DB can be queried in any which way you choose (between times, for a channel, etc.). Of course, throughput is a consideration (how fast are you logging?), in which case the "DB" could be TDMS; however, it then becomes much more difficult to extract the info once saved, and you end up trading off resolution for convenience. See the sketch below.
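To make the pattern concrete, here is a minimal Python sketch of that many-to-one arrangement (the module names, schema and channels are my own invention, not anything from the original question):

```python
import queue
import sqlite3
import threading
import time

log_q = queue.Queue()  # the single "logging" queue: many producers, one consumer

def acquisition_module(channel):
    # Stands in for an autonomous acquisition server/process/module.
    for i in range(5):
        log_q.put((time.time(), channel, float(i)))  # (timestamp, channel, value)

def logging_module(db_path="log.db"):
    # Saves data to the SQLite DB whenever there is something on the queue.
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS log (t REAL, channel TEXT, value REAL)")
    while (item := log_q.get()) is not None:   # None is a shutdown sentinel
        db.execute("INSERT INTO log VALUES (?, ?, ?)", item)
        db.commit()
    db.close()

logger = threading.Thread(target=logging_module)
logger.start()
producers = [threading.Thread(target=acquisition_module, args=(ch,))
             for ch in ("temperature", "pressure")]
for p in producers:
    p.start()
for p in producers:
    p.join()
log_q.put(None)   # all producers finished; stop the logger
logger.join()

# Query it in any which way you choose, e.g. between times or for a channel:
# sqlite3.connect("log.db").execute("SELECT * FROM log WHERE channel='pressure'")
```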
  10. You are loading the ENTIRE file and then trying to transmit it (all 600 MB, presumably) in one go. To remove the error you need to send the file A BIT AT A TIME, with a maximum of 1500 bytes per write: read the first 1500 bytes, write that to the UDP port, then read the next 1500 bytes and write that... and so on. This is the method I demonstrated to you in the image posted for Bluetooth earlier (the code for which you can download from the Code Repository); see also the sketch below. Short of writing it for you (and it is not my homework), there is not a lot else I can tell you. Perhaps someone else can explain it better than I.
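Not LabVIEW, but the loop I mean looks like this Python sketch (host, port and filename are made up):

```python
import socket

CHUNK = 1500                    # stay within the default MTU (see below)
DEST = ("192.168.1.50", 6000)   # hypothetical receiver address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open("bigfile.dat", "rb") as f:      # hypothetical 600 MB file
    while chunk := f.read(CHUNK):         # read a bit at a time...
        sock.sendto(chunk, DEST)          # ...and write each bit to the UDP port
sock.close()
```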
  11. Yes. This is similar to the VI used in the "OPP Push File" in the LAVA CR. It opens a file and reads "chunks" that it then sends (in this case) via Bluetooth. You need to do the same, except with UDP rather than Bluetooth.
  12. Just read 65k bytes (or 1500 bytes, which is better for the reasons Rolf outlined) from the file at a time and write them using UDP Write. However, as GregR stated, a chunk is not guaranteed to be received in the correct order, or indeed to be received at all! So that's not a lot of good for files, but it is usually acceptable for audio or video, where a lost frame or two doesn't matter too much. The easy solution, as others are saying, is to use TCP/IP, where all the ordering and reassembly are handled for you (as well as other things UDP doesn't do out of the box, such as congestion control and re-transmission of lost packets). If you really must use UDP then you will have to code all of that yourself (unless missing or incorrectly ordered file chunks are acceptable; for TDMS they wouldn't be). There are a few examples of reliable UDP (RDP), notably UDT and NORM, but I am not aware of anyone doing this in LabVIEW, since it is just easier to use TCP/IP. The sketch below gives a taste of what you'd be taking on.
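To give a feel for what coding it yourself entails, here is a bare-bones Python sketch of just the detection half (a 4-byte sequence number prefixed to each chunk; the framing is my own, and real schemes like UDT and NORM add acknowledgement and re-transmission on top):

```python
import struct

HEADER = struct.Struct("!I")   # 4-byte big-endian sequence number

def wrap(seq, chunk):
    # Sender: prefix each file chunk with its sequence number.
    return HEADER.pack(seq) + chunk

def unwrap(datagram, expected):
    # Receiver: detect loss/reordering. Recovery (buffering out-of-order
    # chunks, requesting re-transmission) is the part you still have to write.
    (seq,) = HEADER.unpack_from(datagram)
    if seq != expected:
        raise IOError(f"expected chunk {expected}, got {seq} (lost or reordered)")
    return datagram[HEADER.size:]

# Sender:   sock.sendto(wrap(n, chunk), dest)
# Receiver: chunk = unwrap(sock.recvfrom(1504)[0], n)
```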
  13. No. I'm saying the payload cannot exceed 65k. If you can send one payload (of 65k) every 10 ms, that would be 6.5 MB/sec.
  14. Under Windows, the default UDP datagram size is 1500 bytes (i.e. the default MTU). The maximum it can be is 65,535 bytes (you can set it to more than this, but it will still be limited to 65k). This is because the length field in the UDP header is a U16 and cannot represent more than 65k. See UDP, User Datagram Protocol. The arithmetic is spelled out below.
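Spelling the arithmetic out (the 65k figures above are nominal; strictly, the IP and UDP headers shave a further 28 bytes off the usable payload):

```latex
\text{max datagram} = 2^{16} - 1 = 65535~\text{bytes}
\text{max payload}  = 65535 - 8~(\text{UDP header}) - 20~(\text{IP header}) = 65507~\text{bytes}
\text{throughput}   \approx 65535~\text{bytes} / 10~\text{ms} \approx 6.55~\text{MB/s}
```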
  15. Indeed. A good example. The "never use globals, they cause race conditions" is a one-line, easily conveyable axiom that you should adhere to until you can put forward convincing arguments for using them and demonstrate that you understand the implications/pros/cons. The "no bigger than one screen" rule is the same. The only major difference (for me) is that I have not come across a single scenario where it cannot be achieved or, even, where it is not an improvement. Show me an example where you think it is better to have a sprawling diagram and I'll attempt to "improve" it. Sitting on the fence just hurts your arse.

"Single screen coding" are words you put in my mouth. I have already intimated that I "aspire" to 1/4 (of my) screen for diagrams. That should still fit on yours, even at 640x480.

Not really a code smell. I develop iteratively, so I design in a very much top-down manner but develop bottom-up. As each increasingly higher layer gets laid down, natural re-use appears and gets encapsulated (within the framework of the design). Each bottom segment goes through a few iterations before it is "worthy" of becoming a module. That happens module-by-module, layer-by-layer. Eventually you get to the UI and all hell breaks loose.

I came to the conclusion quite a while ago that our brains are wired differently (not in a bad way, just different). For me it is just breaking the code into bite-sized chunks that I can paw over. I find it much easier to analyse a small VI and only have to remember its terminals when thinking about interactions with the next. It is the same as only having to remember the API interface, not the details of the insides of the API (so yes, encapsulation). To me a diagram is more akin to a UML model than to code, and the sub-VIs are layers within the model.

So (learning from Crelf): what is a "typical" LabVIEW user? I think a LabVIEW starter thinks in this way. But by (initially) requiring less than one screen's worth of spaghetti, they eventually start to see the patterns emerging that experienced coders see in the mind's eye (this, I think, is the beauty of LabVIEW). So yes, the sub-VIs may not be encapsulating correctly or "as the book says", but from module to module, project to project, they soon start realising that they have done that before, or that "the VI I made last week almost did that". After a while they have their own little set of "utils" that appear in every project they work on. If you allow them to just throw code at several screens' worth, that never happens (couldn't resist). Like I said: bite-sized chunks, and the modularity/re-use drops out.

Good code is code that works, flawlessly. That's all the customer is interested in. It doesn't matter if it is one diagram that fits on a whole roomful of screens. I am proffering that more than one screen is shoddy programming. Why? Because I have to run the whole screen in debug to see one bit of it (and assume I don't know which bit I want to see yet). If you have 4 while loops on one screen (that I have to keep scrolling around, or that keeps jumping around to show where it is executing), it makes it so much harder to debug. Compare that, say, to 4 icons in the middle of the screen: instant 4 breakpoints, instant 4 probes (i.e. their front panels), and I don't need to go chasing dots around with the scroll bars. There is plenty of room for notes and comments, and (if you have labelled sanely, used distinguishing icons and put a tiny bit of thought into the design) I can identify identical functions at a glance and have a pretty good guess which sub-VI I'll probably be looking into first. I can also run them separately, two at a time, or in any combination of the 4. I can wait until the FP controls have some data in them that I'm interested in, stop it, and run it over and over again whilst looking inside. Oh, and it looks better.
  16. Most of the overhead is the front panel control (of course). I get about 1.5 ns without it and 40 ns with it (still using the +1). VI overhead is vastly overrated for this sort of stuff; shove it in a sub-VI and you can get it down to about 0.3 ns.
  17. That's a bit arse-about-face. It promotes readability and forces you to think about modularity. There is no excuse for scrolling all over the place; it is a symptom of poorly thought-out modularity and hierarchy. It has become more "acceptable" because it is so difficult to encapsulate queues and notifiers, hence my preference for named queues with a string input (it enables very simple encapsulation of them; see the analogy below).
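For anyone outside LabVIEW wondering why the string input matters: a named queue is effectively a process-wide registry keyed by a string, so two modules can share a queue knowing only its name. A loose Python analogy (all names here are mine):

```python
import queue
import threading

_registry = {}
_lock = threading.Lock()

def obtain_queue(name):
    # LabVIEW's Obtain Queue with a name behaves similarly: the first
    # caller creates the queue; later callers get a reference to the same one.
    with _lock:
        if name not in _registry:
            _registry[name] = queue.Queue()
        return _registry[name]

# The string makes the encapsulation trivial: any module can wrap the
# queue in a small API without passing references around.
def log_error(msg):
    obtain_queue("errors").put(msg)
```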
  18. Forums are a much better format for in-progress development, IMHO. You are also far more likely to get contributions to bring it to fruition. You can decide on a license that suits you, and it is much clearer to people than the NI site. Additionally, once it is mature, and if you decide to, it can go out as part of the LAVA tools network, as a package under the JKI thingy, or just remain in the CR, with no major headaches. Documentation isn't that rigorous (a readme and version history, if I recall). Perhaps start it off in the uncertified section. Nice work.
  19. Hmmm. There is something weird with the VI: it cannot be shrunk to less than 50 px. If you create a new one, you can shrink it right down to the toolbar. I haven't figured out why that one is stuck.
  20. Another Win API example... (press a key to stop it)
  21. According to the text in the code... yes.
  22. The enable terminal prevents the FEEDBACK VALUE (FV) from being propagated; it does not prevent the action of the +1, as it does in your top scenario. So, after initialisation (FV = 0), you get an output of 1, since 0 + 1 = 1. The feedback node is therefore behaving as expected; however, your shift-register example is not equivalent. Below is how the feedback node behaves in your example. (I agree with Todd: the enable terminal is not a good usage for this scenario.)
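For anyone puzzling over the difference, a rough procedural analogy in Python (my own paraphrase of the two diagrams, not LabVIEW semantics chapter and verse):

```python
def feedback_node(iterations, enable):
    # The enable terminal gates only whether the new value is stored back
    # into the feedback node; the +1 itself always executes.
    fv, outputs = 0, []
    for _ in range(iterations):
        result = fv + 1
        outputs.append(result)
        if enable:
            fv = result
    return outputs

def shift_register_version(iterations, enable):
    # The shift-register diagram gates the +1 itself instead.
    sr, outputs = 0, []
    for _ in range(iterations):
        if enable:
            sr = sr + 1
        outputs.append(sr)
    return outputs

print(feedback_node(3, enable=False))           # [1, 1, 1]: FV stays 0, output is 0+1
print(shift_register_version(3, enable=False))  # [0, 0, 0]: nothing increments
```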
  23. Have a drink for me. Cheers, you crazy farang.