Everything posted by smithd

  1. I don't think that's possible. If I remember EtherCAT correctly, in your scenario one of the devices that forwards the data packet has been removed, so the packet never returns to the host (http://www.ni.com/white-paper/7299/en/). You'd probably have to rescan the modules to update the configuration (so it knows the last device is gone) and then change the mode back to active. Supposedly Beckhoff claims to have a 'hot connect' feature, but I don't think NI has that.
  2. For windows you wish to hide, I believe you also have to disable "allow users to minimize window". That may not be the case if the window itself is set to hidden.
  3. I can't speak for him, but the concept of queued actors has been around for a long, long time. The two C# actor-type frameworks I'm aware of are:

     http://getakka.net/ is based on the Java framework of the same name (about a decade old now), which in turn was inspired by Erlang (30 years old). Akka seems to be extremely popular, and their website has an obscene amount of documentation.

     https://dotnet.github.io/orleans/ is slightly unique in that it's intended for distributed systems (their claim to fame is that various Xbox game services like Halo 4 and 5 use it). Similar to this is Proto.Actor (http://proto.actor/), which seems to have the advantage as far as minimizing dependencies go... but it's also much, much newer and not backed by Microsoft. (There's a rough sketch of the queued-actor idea after this list.)
  4. I agree, except thinking about it made me realize that's a pretty common need if you're interfacing with another device. Instead of "bad parameter" it's "you connected to the device successfully, but it says you gave it a bad parameter" or "you connected to the device successfully, but it had an internal error processing your request", etc. There will always be some protocol- or application-specific errors, to be sure, but having a generic family of "this standard error happened, but it didn't happen here, it happened on the remote system" makes sense to me. And of course I realized that's almost exactly what HTTP implements: http://www.restapitutorial.com/httpstatuscodes.html. The 400 series is "you broke" and the 500 series is "I broke", but they are all common enough errors. Except for 418, I'm a teapot. (There's a small illustration of that split after this list.)
  5. There was a fairly long period where it wasn't managed; people just went onto the internal page and edited it. Or at least that's what I did. To answer the question of where to keep them, we kept them with each project (included in the package build), with a completely separate range for each project as mentioned. 50 is an arbitrary range but seemed to work well for full projects, and I selected 25 for smaller ones (like plugins). The closest I ever got to 50 was probably Modbus, and that was only because (just as Darren did above) I translated Modbus errors, which bloated the range a bit. Once set, it was rare to change the generated code.

     This is what I've been doing more recently: 80% of errors fall into 10 codes with maybe some extra metadata. So I made myself a little VI (basically this or this) that does the <append> tag and merges errors if needed, and lets you either use the call chain or hard-code a source, and then I twirl through the Explain Error dialog until I find an appropriate error. This seems to be in line with what those Linux guys do: EAGAIN means try again, ENOBUFS means "I'm sorry, Dave". (There's a rough sketch of that helper's idea after this list.)

     Maybe I'm an outlier, but I don't see a ton of added value in the error code past the boolean. As above, Linux uses some codes like EAGAIN to signal conditions which are arguably not errors, but LabVIEW's easy multiple return values mean I'm 0% likely to encode "try again" as an error. So while I check for errors a lot (and cut off error wires at the source where possible), I really don't care what error it is except to log it, and... let's be honest, error 56 works just as well to indicate a timeout as -1073807339 does. And while people may hate me, error code 1 seems like a perfectly sufficient way to indicate an argument error, provided you append something human-readable like "please enter a name into the field, dummy, empty string doesn't work".
  6. I have a control system that streams lots of data across a local network. If I configure the operator's computer so that they can also log onto the organization network (via firewalls, etc.), the packet sniffing and A/V software drives CPU usage from 10% up to 50%.
  7. Considering how long that's been a security concern, I'm surprised there isn't something you can buy which safely adapts a random USB drive to your system. Like a Raspberry Pi that reformats every time it powers on and automatically exposes any mounted USB drive as a network share, or something along those lines (or as if it's a DVD drive, as described above). Or maybe there is such a device and my 10-second Google search didn't find it :/ Am I correct in saying that there isn't a similar concern with (micro)SD cards? Or do they also have the potential for firmware hijacking?
  8. It sounds like your first comments are on organization, which I get. When it comes to developing our current common library + mid-level dependencies, I found that I very often would add things and forget to set up the palette correctly, so I kind of got used to plumbing the depths of vi.lib to find the files I needed... but yeah, not ideal for someone approaching it for the first time.

     Maintenance I'm less concerned about. I think submodules are a bit different than subtrees here, but with subtrees you have to tell git what it links to, so it's actually possible for the code owner or project lead to have their git instance "know" about the subtrees while for everyone else they just show up as sections of the same repo. Then if something needs to be merged to the main repo it's a bit more controlled -- you sort of have to know what you're doing. The other pros I can see are:
       • It's a lot easier to do continuous integration work when you don't have to deal with VIPM and instead everything is right there in one neat package ready to build.
       • It lets you work with different versions of the common library more easily, for example still being able to deploy dependencies of v1.1 while working on v1.2.
  9. I don't know about that. To pick some examples from the main LabVIEW forum:

     https://lavag.org/topic/20126-aligning-two-waveforms/ I can't imagine that this would become obsolete. Sure, some of the nodes might change (at NI Week they showed off how they are distinguishing between two use cases polymorphs currently cover, so maybe we see "feature detection{2d dbl, peak}" rather than just "peak detection.vi"), but I think the valuable part is the discussion here.

     https://lavag.org/topic/20151-how-to-draw-circles-and-lines-in-intensity-graph/ Yes, this will obviously become obsolete. But it's also in the wrong forum anyway, arguably, as it's a user interface question. Lava doesn't have anywhere near enough volume to matter.

     https://lavag.org/topic/20169-storingimporting-daqmx-task-configurations/ Will probably become obsolete as well, but ideally in favor of an obvious built-in solution to the problem, rather than a hack.

     So I mean, sure, there will be some obsolete stuff during the transition, but I highly doubt that will cause many problems.

     With your past comments on .NET I'm assuming you meant this to be insulting, but I honestly think this is a better name. For one thing it rolls off the tongue better than LabVIEW NXG, and more importantly it makes it sound like LabVIEW is trying to align itself to be more like 'a real programming language', which is nice.
  10. Interesting, why TestStand? I would assume NXG would be added as an additional step type (i.e. LV, NXG, ActiveX, .NET, ...). Or did I miss some announcement? Edit: I see http://forums.ni.com/t5/NI-TestStand/Announcing-the-TestStand-2017-Beta-Program/td-p/3634785 -- "The UI has also been updated to use NXG tools".
  11. Sounds basically right; any file that was changed by two people simultaneously will need some kind of merge. In git it's the responsibility of the person currently trying to commit. First they must pull the branch they want to commit to into their own to make sure they have a matching tree. That is, if master has commits 1, 2, 3 and your local copy only has 1, 2 and then you committed 4, then you have to pull commit 3 into your repository so the trees match. If there are no files changed in 3 that you changed in 4, the merge happens seamlessly and then you commit your changes to the centralized repository, so now you and the central repo both have 1, 2, 3, 4. If 3 changes file A and 4 changes file A, then there is a conflict and you (developer of 4) must resolve it before pushing 1, 2, 3, 4 and potentially 5 to the server. It's definitely a more complicated series of steps than you would usually have to deal with in SVN because of the locking, but it's similar in the conflict scenario. It's also the way all merging works, so once you know how to do it between your copy and a master repository, you can do it with branches on your own machine or with others' forks or whatever without much if any additional learning.

     The other thing I've been looking into lately is git subtrees/submodules. They let you embed a repo in another one, so for example if you have a common library in your organization you could (instead of distributing it as a package) distribute it just as source embedded in the repositories that depend on it. Git keeps track of what is what, and if you make a change in the subtree it can be pushed back up to the main repo. That is, if I have shared library A and consuming project B, I could make bug fixes for A and push them back up to A without ever leaving project B, which I'm currently working on. I haven't used it yet but it seems like an interesting tool for a category of shared tools. Obviously if the shared code is super stable there isn't much value, but... Has any git user tried this out, or does Hg/SVN have similar functionality?
  12. I see, that makes sense. Yeah, I wouldn't necessarily want to agree to that process either, unless anyone here is aware of a code review tool that works nicely with LabVIEW. Having an integrated issue tracker can be nice though, as your commits can directly link to features. Well, that assumes they left it locked on purpose, which is the annoying problem I face, especially with folks swapping back and forth between git and SVN or P4. They go on vacation and everything is locked, as an example. As for the communication argument, I would never argue that the lockless approach makes that easier, but I do see it as an easily solved problem. I'd prefer a middle ground like a 'git flag' or something on a section of code, but I have no idea how it would work.
  13. Usually when I hit submit and it fails, it at least saves the latest draft, and refreshing the page goes right back to the draft. This time that failed. It's the same general behavior you see.
  14. Well, I had a reply, but the website did that thing where hitting submit fails and I lost it. The short version is: I know they adjust the difficulty pretty regularly, and I know they did so not long after I took my CLD, so maybe that's the difference.
  15. Below is not really a discussion of the pros and cons. To get a detailed and specific list, just google it. If there is a dead horse that's been beaten on the internet, it's "git vs svn". What I'm confused about is the "extra overhead" comment. You've said it twice but haven't provided any example of what you mean by this. There is a learning curve, absolutely, as with any tool, but I don't really see much if any additional overhead. If you want to use git just like SVN with a central server, there doesn't seem to be anything stopping you, and the workflow is similar (you have to press "push" before every commit and "pull" before you begin work, but...). This is primarily how I've used it, with the additional features of local commits and branching (I like to commit several times per day even if the code is still broken). Because the projects are small, I've only ever had a few conflicts over the course of several years. I think this is the situation most of us are in, with fewer than 5 people working on a given project.

     Bigger projects with LabVIEW would have slightly more challenges with regard to coordination because of the lack of locking, but this is solvable through two things you should be doing anyway: communicating with your coworkers about what they are doing, and breaking things down into smaller projects. The best example I have of this is https://github.com/LabvieW-DCAF/. There are 51 repositories by my count, and while some are non-code (documentation, build scripts, etc.) the development granularity keeps things easier to manage regardless of what source control you use.

     And that's actually the point I wanted to make... I started out using SVN, I used Perforce and currently use it for one project, and I've used git whenever possible since I switched over to it. Fundamentally the procedures involved in any of them are pretty similar (as Shaun said, 'commit and revert'), but I keep returning to git because the 'killer feature' is serverless and non-locking usage. When I work on a feature, the entire codebase belongs to me.
  16. Which is funny because when I took them, I thought the CLD was much harder than the CLA.
  17. I'm guessing it's deliberately not replacing VIPM for current LV. You can download and install the builder tools for current LabVIEW, but it seems to exclude the ability to select by LabVIEW version, symbolic paths, etc. If you peek at the available packages you'll note that there seems to be a separate package for each supported LabVIEW version. Surely this is not a use case they overlooked, so the only conclusion I can draw is that they excluded it on purpose.
  18. Git, specifically the open-source Gitea/Gogs, with SourceTree as my client.
  19. Since it is an edit-time construct, I would assume you can use it in an lvlibp, but you can't expose a function from an lvlibp which adapts to type. FPGA works too, and I've got a few things in an internal library which are basically typed versions of the same code. I suppose there might be a way to nicely OOPify it, but I'd rather use these.
  20. In the talk they mentioned that, while the feature existed, they replaced the XNode back end with something else unspecified (maybe the nodes FPGA uses?).
  21. As for the structure, you may wish to watch the JeffK+Mercer presentation at NI Week, which you can get here. My understanding: basically it is a disable structure where, instead of manually enabling/disabling, the compiler will run through all cases and enable the first case which doesn't cause a compiler error. When used in conjunction with a VIM you can do fancy things. For example, if you wanted to make one node that did "concatenate stuff" you could have two cases, case 1 assuming the inputs are strings and case 2 assuming they are scalars or arrays. If the type passed in is not a string, case 1 will cause a compiler error and it will go on to case 2 with the more flexible Build Array, which will compile. (There's a loose text-language analogy of this after the list.) In the NI Week presentation it sounded like it was mostly solid, but too early to be comfortable throwing it out to the masses yet.
  22. You can have multiple windows with different tabs, and you can have a split-screen code/UI view on one tab, so it's certainly still possible to follow most of the same workflow. Like many people, I like the tabbed interface and think it's a significant improvement over the endless cascade of variable-size, variable-position windows. How many times have you opened up someone else's code only to have it pop up off screen because they had more/bigger monitors than you? I've made a script to fix this on a folder of code, it happens so often.

     Something else that may interest you is the data capture features, at least from a testing perspective: http://www.ni.com/documentation/en/labview/1.0/data/capturing-data/ and http://www.ni.com/documentation/en/tech-preview/1.0/data/using-analysis-panels-to-process-data/ When I actually get down to doing some math (e.g. analyze this waveform) I often end up tweaking things. The idea of being able to capture some representative data set, apply it to a math function, capture the output, tweak the math, capture the new output, and compare the results seems like a nice tool to have.

     This is an attitude that I remember hearing all the time, and it still interests me. I mean, an Arduino, BeagleBone, or Raspberry Pi is definitely cheap, but what can it actually do that you would otherwise use a cRIO for, or that would be generally useful in your work? I understand the hobby angle, but... what on earth did you use them for in your actual job?
  23. I was just thinking about this too. I believe at present this is right, but it seems like it would make sense to separate the logical linkages from the decorations. I mean, the 'code' is just the nodes and wires. We want to describe the appearance of those too, but it's completely irrelevant to the 'code'. I'm sure this is still an eventual goal, but the stuff they mentioned with regard to editing the HTML manually was all relatively specific and advanced, like dropping in an embedded YouTube or Maps element. I would assume that the first step would be a drop-down "script control", etc. I suppose they could use this as a crutch to avoid developing features, but I'd much rather be able to fix an issue by editing the HTML than rely on NI to fix a bug and push a patch.
  24. SharePoint has a web service API: https://msdn.microsoft.com/EN-US/library/office/dn450841.aspx I've never used it, but that seems to be the answer. (There's a rough sketch of calling it after this list.)
  25. This is a pretty famous problem within large projects, as with Netscape, Word, Mozilla, etc.: https://news.ycombinator.com/item?id=2139176 or https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/ I'm not convinced that NI hasn't done the same to themselves by neglecting their existing customers in favor of... whoever NXG is supposed to benefit... but I don't think you can blame the language, especially one as large as C# (which includes the core language, the base libraries on top of it, and then the UI frameworks ASP/WPF/WinForms). I'm sure there are similar anecdotes of people who moved from C to C++ and it was 'too slow', or people who never got their product to market because they spent too much time trying to understand their C++ compiler errors.

     Fair enough -- one thing I wondered is why they haven't gotten around to cross compilation now that the compiler is LLVM. Clearly they know how to cross compile -- they do Pharlap, Linux ARM, Linux x64, and VxWorks without too much of a problem... but no obvious cross-compile for desktops.
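
A few of the posts above point at sketches; they are collected here rather than inline in the list.

Sketch for post 3: the queued-actor concept those frameworks implement boils down to a mailbox plus a single worker that drains it, so an actor's state is only ever touched by one thread. This is a minimal Python illustration of that idea with invented names; it is not Akka's, Orleans', or Proto.Actor's API.

    import queue
    import threading

    class QueuedActor:
        """One mailbox, one worker thread, messages handled one at a time."""

        def __init__(self):
            self._mailbox = queue.Queue()
            self._worker = threading.Thread(target=self._run, daemon=True)
            self._worker.start()

        def send(self, message):
            # Fire-and-forget: callers never touch the actor's state directly.
            self._mailbox.put(message)

        def stop(self):
            self._mailbox.put(None)          # sentinel asks the worker to exit
            self._worker.join()

        def _run(self):
            while True:
                message = self._mailbox.get()  # block until something arrives
                if message is None:
                    break
                self.handle(message)

        def handle(self, message):
            print("got:", message)           # override in a subclass with real behavior

    actor = QueuedActor()
    actor.send("hello")
    actor.stop()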
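Sketch for post 4: the HTTP split between the "you broke" (4xx) and "I broke" (5xx) families, which is the same generic local-vs-remote error distinction argued for there. The function name is hypothetical; the ranges follow standard HTTP semantics.

    def blame_for_status(code):
        # Map an HTTP status code onto the "who broke" families from post 4.
        if 200 <= code < 300:
            return "ok"
        if 400 <= code < 500:
            return "you broke"   # caller sent the remote system a bad request
        if 500 <= code < 600:
            return "I broke"     # remote system hit an internal error
        return "other"           # 1xx informational, 3xx redirects, etc.

    assert blame_for_status(400) == "you broke"   # bad parameter, reported by the remote end
    assert blame_for_status(500) == "I broke"     # remote internal error
    assert blame_for_status(418) == "you broke"   # I'm a teapot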
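Sketch for post 5: the "small set of generic codes plus appended human-readable detail" approach. In LabVIEW this lives in the error cluster with an <append> tag in the source string; the dictionary below is just a hypothetical text-language stand-in for that structure, and the VI name in the example is made up.

    def make_error(code, detail, source="call chain here"):
        # Generic code (e.g. 1 for a bad argument, 56 for a timeout) plus
        # appended human-readable context, roughly what the little VI
        # described in post 5 does with the error cluster's source string.
        return {
            "status": True,                      # the boolean is what callers mostly check
            "code": code,
            "source": source + "<append>" + detail,
        }

    err = make_error(1, "please enter a name into the field, empty string doesn't work",
                     source="Configure Device.vi")   # hypothetical VI name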
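Sketch for post 21: a loose analogy only, since Python resolves this at run time while the type-specialization structure picks a case at compile time. The shape of the idea is the same, though: try the string-specific case first and fall through to the more generic build-array case when the inputs don't fit.

    def concatenate_stuff(a, b):
        # "Case 1": both inputs are strings, so plain concatenation applies.
        if isinstance(a, str) and isinstance(b, str):
            return a + b
        # "Case 2": fall back to the more flexible build-array behavior for
        # scalars or sequences (a crude stand-in for LabVIEW's Build Array).
        def as_list(x):
            return list(x) if isinstance(x, (list, tuple)) else [x]
        return as_list(a) + as_list(b)

    print(concatenate_stuff("foo", "bar"))   # foobar
    print(concatenate_stuff([1, 2], 3))      # [1, 2, 3]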
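Sketch for post 24: reading list items through SharePoint's REST endpoints with Python's requests library. The site URL, list title, and the omitted authentication are placeholders; check the linked MSDN page for what your SharePoint version actually exposes.

    import requests

    SITE = "https://yourserver/sites/yoursite"   # placeholder site URL
    LIST = "Documents"                           # placeholder list title

    resp = requests.get(
        SITE + "/_api/web/lists/getbytitle('" + LIST + "')/items",
        headers={"Accept": "application/json;odata=verbose"},
        auth=None,   # substitute whatever your install requires (NTLM, OAuth token, ...)
    )
    resp.raise_for_status()
    for item in resp.json()["d"]["results"]:
        print(item.get("Title"))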